Compare commits

..

12 Commits

Author SHA1 Message Date
Devin AI
bfb578d506 fix: Add proper null checks for logger calls and improve type safety in LLM class
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-01-01 21:54:49 +00:00
Devin AI
5d3c34b3ea fix: Improve type annotations across multiple files
- Replace Optional[set[str]] with Union[set[str], None] in json methods
- Fix add_nodes_to_network call parameters in flow_visualizer
- Add __base__=BaseModel to create_model call in structured_tool
- Clean up imports in provider.py

Co-Authored-By: Joe Moura <joao@crewai.com>
2025-01-01 21:29:15 +00:00
Devin AI
8ec2eb7d72 fix(agent): improve token tracking and logging functionality
- Add proper debug, info, warning, and error methods to Logger class
- Ensure warnings and errors are always shown regardless of verbose mode
- Fix token process initialization and tracking in Agent class
- Update TokenProcess import to use correct class from agent_builder utilities

Co-Authored-By: Joe Moura <joao@crewai.com>
2025-01-01 20:59:39 +00:00
Devin AI
39bdc7e4d4 fix: Improve type annotations and add proper None checks
Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-31 23:24:01 +00:00
Devin AI
344fa9bbe5 Merge remote-tracking branch 'origin/pr-1833' into pr-1833
- Integrate latest changes from remote
- Keep LLM parameter handling improvements
- Maintain test fixes and token process utility

Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-31 23:13:28 +00:00
Devin AI
0fd0b5c74f fix: Improve LLM parameter handling and fix test timeouts
- Add proper model name extraction in LLM class
- Handle optional parameters correctly in litellm calls
- Fix Agent constructor compatibility with BaseAgent
- Add token process utility for better tracking
- Clean up parameter handling in LLM class

Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-31 23:12:13 +00:00
João Moura
090d5128cb Merge branch 'main' into pr-1833 2024-12-31 18:41:16 -03:00
Devin AI
dec255e87a fix: Improve type conversion for LLM parameters and handle None values properly
Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-31 20:45:25 +00:00
Devin AI
f75b07ce82 Add pytest.ini to force VCR to use recorded cassettes in CI
Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-31 20:36:02 +00:00
Brandon Hancock
cf2f21cbfb Fix failling ollama tasks 2024-12-31 12:01:03 -05:00
Brandon Hancock
c4a401b247 change litellm version 2024-12-31 11:28:29 -05:00
Brandon Hancock
b0d545992a Suppressed userWarnings from litellm pydantic issues 2024-12-31 11:19:48 -05:00
141 changed files with 39580 additions and 15476 deletions

1
.gitignore vendored
View File

@@ -21,4 +21,3 @@ crew_tasks_output.json
.mypy_cache
.ruff_cache
.venv
agentops.log

View File

@@ -101,8 +101,6 @@ from crewai_tools import SerperDevTool
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents_config = "config/agents.yaml"
@agent
def researcher(self) -> Agent:
return Agent(

View File

@@ -12,7 +12,7 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell Terminal
```shell
pip install crewai
```
@@ -20,7 +20,7 @@ pip install crewai
The basic structure of a CrewAI CLI command is:
```shell Terminal
```shell
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
@@ -30,7 +30,7 @@ crewai [COMMAND] [OPTIONS] [ARGUMENTS]
Create a new crew or flow.
```shell Terminal
```shell
crewai create [OPTIONS] TYPE NAME
```
@@ -38,7 +38,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
```shell Terminal
```shell
crewai create crew my_new_crew
crewai create flow my_new_flow
```
@@ -47,14 +47,14 @@ crewai create flow my_new_flow
Show the installed version of CrewAI.
```shell Terminal
```shell
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell Terminal
```shell
crewai version
crewai version --tools
```
@@ -63,7 +63,7 @@ crewai version --tools
Train the crew for a specified number of iterations.
```shell Terminal
```shell
crewai train [OPTIONS]
```
@@ -71,7 +71,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell Terminal
```shell
crewai train -n 10 -f my_training_data.pkl
```
@@ -79,14 +79,14 @@ crewai train -n 10 -f my_training_data.pkl
Replay the crew execution from a specific task.
```shell Terminal
```shell
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
```shell
crewai replay -t task_123456
```
@@ -94,7 +94,7 @@ crewai replay -t task_123456
Retrieve your latest crew.kickoff() task outputs.
```shell Terminal
```shell
crewai log-tasks-outputs
```
@@ -102,7 +102,7 @@ crewai log-tasks-outputs
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```shell Terminal
```shell
crewai reset-memories [OPTIONS]
```
@@ -113,7 +113,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
```shell Terminal
```shell
crewai reset-memories --long --short
crewai reset-memories --all
```
@@ -122,7 +122,7 @@ crewai reset-memories --all
Test the crew and evaluate the results.
```shell Terminal
```shell
crewai test [OPTIONS]
```
@@ -130,7 +130,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
```shell
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -138,7 +138,7 @@ crewai test -n 5 -m gpt-3.5-turbo
Run the crew.
```shell Terminal
```shell
crewai run
```
<Note>
@@ -147,36 +147,7 @@ Some commands may require additional configuration or setup within your project
</Note>
### 9. Chat
Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.
```python
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
### 10. API Keys
### 9. API Keys
When running ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
@@ -190,7 +161,6 @@ The CLI will initially prompt for API keys for the following services:
* Groq
* Anthropic
* Google Gemini
* SambaNova
When you select a provider, the CLI will prompt you to enter your API key.

View File

@@ -35,8 +35,6 @@ class ExampleFlow(Flow):
@start()
def generate_city(self):
print("Starting flow")
# Each flow state automatically gets a unique ID
print(f"Flow State ID: {self.state['id']}")
response = completion(
model=self.model,
@@ -49,8 +47,6 @@ class ExampleFlow(Flow):
)
random_city = response["choices"][0]["message"]["content"]
# Store the city in our state
self.state["city"] = random_city
print(f"Random City: {random_city}")
return random_city
@@ -68,8 +64,6 @@ class ExampleFlow(Flow):
)
fun_fact = response["choices"][0]["message"]["content"]
# Store the fun fact in our state
self.state["fun_fact"] = fun_fact
return fun_fact
@@ -82,15 +76,7 @@ print(f"Generated fun fact: {result}")
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
When you run the Flow, it will:
1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console
The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.
When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.
**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
@@ -152,7 +138,7 @@ print("---- Final Output ----")
print(final_output)
````
```text Output
``` text Output
---- Final Output ----
Second method received: Output from first_method
````
@@ -221,17 +207,14 @@ allowing developers to choose the approach that best fits their application's ne
In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
Even with unstructured states, CrewAI Flows automatically generates and maintains a unique identifier (UUID) for each state instance.
```python Code
from crewai.flow.flow import Flow, listen, start
class UnstructuredExampleFlow(Flow):
class UntructuredExampleFlow(Flow):
@start()
def first_method(self):
# The state automatically includes an 'id' field
print(f"State ID: {self.state['id']}")
self.state.message = "Hello from structured flow"
self.state.counter = 0
@@ -248,12 +231,10 @@ class UnstructuredExampleFlow(Flow):
print(f"State after third_method: {self.state}")
flow = UnstructuredExampleFlow()
flow = UntructuredExampleFlow()
flow.kickoff()
```
**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.
**Key Points:**
- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
@@ -264,15 +245,12 @@ flow.kickoff()
Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
Each state in CrewAI Flows automatically receives a unique identifier (UUID) to help track and manage state instances. This ID is automatically generated and managed by the Flow system.
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
# Note: 'id' field is automatically added to all states
counter: int = 0
message: str = ""
@@ -281,8 +259,6 @@ class StructuredExampleFlow(Flow[ExampleState]):
@start()
def first_method(self):
# Access the auto-generated ID if needed
print(f"State ID: {self.state.id}")
self.state.message = "Hello from structured flow"
@listen(first_method)
@@ -323,91 +299,6 @@ flow.kickoff()
By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Persistence
The @persist decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or method level, providing flexibility in how you manage state persistence.
### Class-Level Persistence
When applied at the class level, the @persist decorator automatically persists all flow method states:
```python
@persist # Using SQLiteFlowPersistence by default
class MyFlow(Flow[MyState]):
@start()
def initialize_flow(self):
# This method will automatically have its state persisted
self.state.counter = 1
print("Initialized flow. State ID:", self.state.id)
@listen(initialize_flow)
def next_step(self):
# The state (including self.state.id) is automatically reloaded
self.state.counter += 1
print("Flow state is persisted. Counter:", self.state.counter)
```
### Method-Level Persistence
For more granular control, you can apply @persist to specific methods:
```python
class AnotherFlow(Flow[dict]):
@persist # Persists only this method's state
@start()
def begin(self):
if "runs" not in self.state:
self.state["runs"] = 0
self.state["runs"] += 1
print("Method-level persisted runs:", self.state["runs"])
```
### How It Works
1. **Unique State Identification**
- Each flow state automatically receives a unique UUID
- The ID is preserved across state updates and method calls
- Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states
2. **Default SQLite Backend**
- SQLiteFlowPersistence is the default storage backend
- States are automatically saved to a local SQLite database
- Robust error handling ensures clear messages if database operations fail
3. **Error Handling**
- Comprehensive error messages for database operations
- Automatic state validation during save and load
- Clear feedback when persistence operations encounter issues
### Important Considerations
- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own FlowPersistence implementation for specialized storage needs
### Technical Advantages
1. **Precise Control Through Low-Level Access**
- Direct access to persistence operations for advanced use cases
- Fine-grained control via method-level persistence decorators
- Built-in state inspection and debugging capabilities
- Full visibility into state changes and persistence operations
2. **Enhanced Reliability**
- Automatic state recovery after system failures or restarts
- Transaction-based state updates for data integrity
- Comprehensive error handling with clear error messages
- Robust validation during state save and load operations
3. **Extensible Architecture**
- Customizable persistence backend through FlowPersistence interface
- Support for specialized storage solutions beyond SQLite
- Compatible with both structured (Pydantic) and unstructured (dict) states
- Seamless integration with existing CrewAI flow patterns
The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.
## Flow Control
### Conditional Logic: `or`
@@ -737,4 +628,4 @@ Also, check out our YouTube video on how to use flows in CrewAI below!
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
></iframe>

View File

@@ -4,6 +4,8 @@ description: What is knowledge in CrewAI and how to use it.
icon: book
---
# Using Knowledge in CrewAI
## What is Knowledge?
Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks.
@@ -34,20 +36,7 @@ CrewAI supports various types of knowledge sources out of the box:
</Card>
</CardGroup>
## Supported Knowledge Parameters
| Parameter | Type | Required | Description |
| :--------------------------- | :---------------------------------- | :------- | :---------------------------------------------------------------------------------------------------------------------------------------------------- |
| `sources` | **List[BaseKnowledgeSource]** | Yes | List of knowledge sources that provide content to be stored and queried. Can include PDF, CSV, Excel, JSON, text files, or string content. |
| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
## Quickstart Example
<Tip>
For file-Based Knowledge Sources, make sure to place your files in a `knowledge` directory at the root of your project.
Also, use relative paths from the `knowledge` directory when creating the source.
</Tip>
## Quick Start
Here's an example using string-based knowledge:
@@ -91,14 +80,7 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
```
Here's another example with the `CrewDoclingSource`. The CrewDoclingSource is actually quite versatile and can handle multiple file formats including TXT, PDF, DOCX, HTML, and more.
<Note>
You need to install `docling` for the following example to work: `uv add docling`
</Note>
Here's another example with the `CrewDoclingSource`
```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
@@ -146,218 +128,39 @@ result = crew.kickoff(
)
```
## More Examples
Here are examples of how to use different types of knowledge sources:
### Text File Knowledge Source
```python
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
# Create a text file knowledge source
text_source = CrewDoclingSource(
file_paths=["document.txt", "another.txt"]
)
# Create crew with text file source on agents or crew level
agent = Agent(
...
knowledge_sources=[text_source]
)
crew = Crew(
...
knowledge_sources=[text_source]
)
```
### PDF Knowledge Source
```python
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
# Create a PDF knowledge source
pdf_source = PDFKnowledgeSource(
file_paths=["document.pdf", "another.pdf"]
)
# Create crew with PDF knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[pdf_source]
)
crew = Crew(
...
knowledge_sources=[pdf_source]
)
```
### CSV Knowledge Source
```python
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
# Create a CSV knowledge source
csv_source = CSVKnowledgeSource(
file_paths=["data.csv"]
)
# Create crew with CSV knowledge source or on agent level
agent = Agent(
...
knowledge_sources=[csv_source]
)
crew = Crew(
...
knowledge_sources=[csv_source]
)
```
### Excel Knowledge Source
```python
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
# Create an Excel knowledge source
excel_source = ExcelKnowledgeSource(
file_paths=["spreadsheet.xlsx"]
)
# Create crew with Excel knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[excel_source]
)
crew = Crew(
...
knowledge_sources=[excel_source]
)
```
### JSON Knowledge Source
```python
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
# Create a JSON knowledge source
json_source = JSONKnowledgeSource(
file_paths=["data.json"]
)
# Create crew with JSON knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[json_source]
)
crew = Crew(
...
knowledge_sources=[json_source]
)
```
## Knowledge Configuration
### Chunking Configuration
Knowledge sources automatically chunk content for better processing.
You can configure chunking behavior in your knowledge sources:
Control how content is split for processing by setting the chunk size and overlap.
```python
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
source = StringKnowledgeSource(
content="Your content here",
chunk_size=4000, # Maximum size of each chunk (default: 4000)
chunk_overlap=200 # Overlap between chunks (default: 200)
```python Code
knowledge_source = StringKnowledgeSource(
content="Long content...",
chunk_size=4000, # Characters per chunk (default)
chunk_overlap=200 # Overlap between chunks (default)
)
```
The chunking configuration helps in:
- Breaking down large documents into manageable pieces
- Maintaining context through chunk overlap
- Optimizing retrieval accuracy
## Embedder Configuration
### Embeddings Configuration
You can also configure the embedder for the knowledge store. This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.
You can also configure the embedder for the knowledge store.
This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.
The `embedder` parameter supports various embedding model providers that include:
- `openai`: OpenAI's embedding models
- `google`: Google's text embedding models
- `azure`: Azure OpenAI embeddings
- `ollama`: Local embeddings with Ollama
- `vertexai`: Google Cloud VertexAI embeddings
- `cohere`: Cohere's embedding models
- `voyageai`: VoyageAI's embedding models
- `bedrock`: AWS Bedrock embeddings
- `huggingface`: Hugging Face models
- `watson`: IBM Watson embeddings
Here's an example of how to configure the embedder for the knowledge store using Google's `text-embedding-004` model:
<CodeGroup>
```python Example
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
import os
# Get the GEMINI API key
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY")
# Create a knowledge source
content = "Users name is John. He is 30 years old and lives in San Francisco."
```python Code
...
string_source = StringKnowledgeSource(
content=content,
content="Users name is John. He is 30 years old and lives in San Francisco.",
)
# Create an LLM with a temperature of 0 to ensure deterministic outputs
gemini_llm = LLM(
model="gemini/gemini-1.5-pro-002",
api_key=GEMINI_API_KEY,
temperature=0,
)
# Create an agent with the knowledge store
agent = Agent(
role="About User",
goal="You know everything about the user.",
backstory="""You are a master at understanding people and their preferences.""",
verbose=True,
allow_delegation=False,
llm=gemini_llm,
)
task = Task(
description="Answer the following questions about the user: {question}",
expected_output="An answer to the question.",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
...
knowledge_sources=[string_source],
embedder={
"provider": "google",
"config": {
"model": "models/text-embedding-004",
"api_key": GEMINI_API_KEY,
}
}
"provider": "openai",
"config": {"model": "text-embedding-3-small"},
},
)
result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```
```text Output
# Agent: About User
## Task: Answer the following questions about the user: What city does John live in and how old is he?
# Agent: About User
## Final Answer:
John is 30 years old and lives in San Francisco.
```
</CodeGroup>
## Clearing Knowledge
If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.

View File

@@ -146,19 +146,6 @@ Here's a detailed breakdown of supported models and their capabilities, you can
Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>
<Tab title="SambaNova">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large context tasks |
| Llama 3.1 405B | 8,192 tokens | High-performance and output quality |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks, multimodal |
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality|
| Qwen2 familly | 8,192 tokens | High-performance and output quality |
<Tip>
[SambaNova](https://cloud.sambanova.ai/) has several models with fast inference speed at full precision.
</Tip>
</Tab>
<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|---------------|--------------|
@@ -243,9 +230,6 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Amazon SageMaker Models - Enterprise-grade
# llm: sagemaker/<my-endpoint>
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
@@ -509,21 +493,6 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Amazon SageMaker">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
model="sagemaker/<my-endpoint>"
)
```
</Accordion>
<Accordion title="Mistral">
```python Code

View File

@@ -134,23 +134,6 @@ crew = Crew(
)
```
## Memory Configuration Options
If you want to access a specific organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.
```python Code
from crewai import Crew
crew = Crew(
agents=[...],
tasks=[...],
verbose=True,
memory=True,
memory_config={
"provider": "mem0",
"config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
},
)
```
## Additional Embedding Providers
@@ -293,26 +276,6 @@ my_crew = Crew(
}
)
```
### Using VoyageAI embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "voyageai",
"config": {
"api_key": "YOUR_API_KEY",
"model_name": "<model_name>"
}
}
)
```
### Using HuggingFace embeddings
```python Code

View File

@@ -31,7 +31,7 @@ From this point on, your crew will have planning enabled, and the tasks will be
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
Now you can define the LLM that will be used to plan the tasks. You can use any ChatOpenAI LLM model available.
When running the base case example, you will see something like the output below, which represents the output of the `AgentPlanner`
responsible for creating the step-by-step logic to add to the Agents' tasks.
@@ -39,6 +39,7 @@ responsible for creating the step-by-step logic to add to the Agents' tasks.
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process
from langchain_openai import ChatOpenAI
# Assemble your crew with planning capabilities and custom LLM
my_crew = Crew(
@@ -46,7 +47,7 @@ my_crew = Crew(
tasks=self.tasks,
process=Process.sequential,
planning=True,
planning_llm="gpt-4o"
planning_llm=ChatOpenAI(model="gpt-4o")
)
# Run the crew

View File

@@ -23,7 +23,9 @@ Processes enable individual agents to operate as a cohesive unit, streamlining t
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, ensure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew, Process
from crewai import Crew
from crewai.process import Process
from langchain_openai import ChatOpenAI
# Example: Creating a crew with a sequential process
crew = Crew(
@@ -38,7 +40,7 @@ crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm="gpt-4o"
manager_llm=ChatOpenAI(model="gpt-4")
# or
# manager_agent=my_manager_agent
)

View File

@@ -150,20 +150,15 @@ There are two main ways for one to create a CrewAI tool:
```python Code
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
class MyToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "What this tool does. It's vital for effective utilization."
args_schema: Type[BaseModel] = MyToolInput
description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."
def _run(self, argument: str) -> str:
# Your tool's logic here
return "Tool's result"
# Implementation goes here
return "Result from custom tool"
```
### Utilizing the `tool` Decorator

View File

@@ -73,9 +73,9 @@ result = crew.kickoff()
If you're using the hierarchical process and don't want to set a custom manager agent, you can specify the language model for the manager:
```python Code
from crewai import LLM
from langchain_openai import ChatOpenAI
manager_llm = LLM(model="gpt-4o")
manager_llm = ChatOpenAI(model_name="gpt-4")
crew = Crew(
agents=[researcher, writer],

View File

@@ -23,7 +23,6 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
@@ -33,7 +32,6 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Cloudflare Workers AI
- DeepInfra
- Groq
- SambaNova
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more!

View File

@@ -1,14 +1,14 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: video
icon: image
---
## Using Multimodal Agents
# Using Multimodal Agents
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
### Enabling Multimodal Capabilities
## Enabling Multimodal Capabilities
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
@@ -25,7 +25,7 @@ agent = Agent(
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
### Working with Images
## Working with Images
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
@@ -108,7 +108,7 @@ The multimodal agent will automatically handle the image processing through its
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
### Best Practices
## Best Practices
When working with multimodal agents, keep these best practices in mind:

View File

@@ -1,202 +0,0 @@
---
title: Portkey Observability and Guardrails
description: How to use Portkey with CrewAI
icon: key
---
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing & cost, performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
<Steps>
<Step title="Install CrewAI and Portkey">
```bash
pip install -qU crewai portkey-ai
```
</Step>
<Step title="Configure the LLM Client">
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
gpt_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy", # We are using Virtual key
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
)
)
```
</Step>
<Step title="Create and Run Your First Agent">
```python
from crewai import Agent, Task, Crew
# Define your agents with roles and goals
coder = Agent(
role='Software developer',
goal='Write clear, concise code on demand',
backstory='An expert coder with a keen eye for software trends.',
llm=gpt_llm
)
# Create tasks for your agents
task1 = Task(
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
expected_output="A clear and concise HTML code",
agent=coder
)
# Instantiate your crew
crew = Crew(
agents=[coder],
tasks=[task1],
)
result = crew.kickoff()
print(result)
```
</Step>
</Steps>
## Key Features
| Feature | Description |
|:--------|:------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features mentioned below are through Portkey's Config system. Portkey's Config system allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
model="claude-3-5-sonnet-latest",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="anthropic_agent"
)
)
# Azure OpenAI Configuration
azure_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="azure_agent"
)
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
"cache": {
"mode": "semantic", # or "simple" for exact matching
}
}
```
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limit and rate limts per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)

View File

@@ -15,48 +15,10 @@ icon: wrench
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
# Setting Up Your Environment
Before installing CrewAI, it's recommended to set up a virtual environment. This helps isolate your project dependencies and avoid conflicts.
<Steps>
<Step title="Create a Virtual Environment">
Choose your preferred method to create a virtual environment:
**Using venv (Python's built-in tool):**
```shell Terminal
python3 -m venv .venv
```
**Using conda:**
```shell Terminal
conda create -n crewai-env python=3.12
```
</Step>
<Step title="Activate the Virtual Environment">
Activate your virtual environment based on your platform:
**On macOS/Linux (venv):**
```shell Terminal
source .venv/bin/activate
```
**On Windows (venv):**
```shell Terminal
.venv\Scripts\activate
```
**Using conda (all platforms):**
```shell Terminal
conda activate crewai-env
```
</Step>
</Steps>
# Installing CrewAI
Now let's get you set up! 🚀
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get you set up! 🚀
<Steps>
<Step title="Install CrewAI">
@@ -110,9 +72,9 @@ Now let's get you set up! 🚀
# Creating a New Project
<Tip>
<Info>
We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
</Tip>
</Info>
<Steps>
<Step title="Generate Project Structure">
@@ -142,18 +104,7 @@ Now let's get you set up! 🚀
└── tasks.yaml
```
</Frame>
</Step>
<Step title="Install Additional Tools">
You can install additional tools using UV:
```shell Terminal
uv add <tool-name>
```
<Tip>
UV is our preferred package manager as it's significantly faster than pip and provides better dependency resolution.
</Tip>
</Step>
</Step>
<Step title="Customize Your Project">
Your project will contain these essential files:

View File

@@ -91,7 +91,6 @@
"how-to/custom-manager-agent",
"how-to/llm-connections",
"how-to/customizing-agents",
"how-to/multimodal-agents",
"how-to/coding-agents",
"how-to/force-tool-output-as-result",
"how-to/human-input-on-execution",
@@ -101,8 +100,7 @@
"how-to/conditional-tasks",
"how-to/agentops-observability",
"how-to/langtrace-observability",
"how-to/openlit-observability",
"how-to/portkey-observability"
"how-to/openlit-observability"
]
},
{

View File

@@ -278,7 +278,7 @@ email_summarizer:
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: openai/gpt-4o
llm: mixtal_llm
```
<Tip>
@@ -301,166 +301,38 @@ Use the annotations to properly reference the agent and task in the `crew.py` fi
### Annotations include:
Here are examples of how to use each annotation in your CrewAI project, and when you should use them:
* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@before_kickoff`
* `@after_kickoff`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`
#### @agent
Used to define an agent in your crew. Use this when:
- You need to create a specialized AI agent with a specific role
- You want the agent to be automatically collected and managed by the crew
- You need to reuse the same agent configuration across multiple tasks
```python
```python crew.py
# ...
@agent
def research_agent(self) -> Agent:
def email_summarizer(self) -> Agent:
return Agent(
role="Research Analyst",
goal="Conduct thorough research on given topics",
backstory="Expert researcher with years of experience in data analysis",
tools=[SerperDevTool()],
verbose=True
config=self.agents_config["email_summarizer"],
)
```
#### @task
Used to define a task that can be executed by agents. Use this when:
- You need to define a specific piece of work for an agent
- You want tasks to be automatically sequenced and managed
- You need to establish dependencies between different tasks
```python
@task
def research_task(self) -> Task:
def email_summarizer_task(self) -> Task:
return Task(
description="Research the latest developments in AI technology",
expected_output="A comprehensive report on AI advancements",
agent=self.research_agent(),
output_file="output/research.md"
config=self.tasks_config["email_summarizer_task"],
)
# ...
```
#### @crew
Used to define your crew configuration. Use this when:
- You want to automatically collect all @agent and @task definitions
- You need to specify how tasks should be processed (sequential or hierarchical)
- You want to set up crew-wide configurations
```python
@crew
def research_crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected from @agent methods
tasks=self.tasks, # Automatically collected from @task methods
process=Process.sequential,
verbose=True
)
```
#### @tool
Used to create custom tools for your agents. Use this when:
- You need to give agents specific capabilities (like web search, data analysis)
- You want to encapsulate external API calls or complex operations
- You need to share functionality across multiple agents
```python
@tool
def web_search_tool(query: str, max_results: int = 5) -> list[str]:
"""
Search the web for information.
Args:
query: The search query
max_results: Maximum number of results to return
Returns:
List of search results
"""
# Implement your search logic here
return [f"Result {i} for: {query}" for i in range(max_results)]
```
#### @before_kickoff
Used to execute logic before the crew starts. Use this when:
- You need to validate or preprocess input data
- You want to set up resources or configurations before execution
- You need to perform any initialization logic
```python
@before_kickoff
def validate_inputs(self, inputs: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
"""Validate and preprocess inputs before the crew starts."""
if inputs is None:
return None
if 'topic' not in inputs:
raise ValueError("Topic is required")
# Add additional context
inputs['timestamp'] = datetime.now().isoformat()
inputs['topic'] = inputs['topic'].strip().lower()
return inputs
```
#### @after_kickoff
Used to process results after the crew completes. Use this when:
- You need to format or transform the final output
- You want to perform cleanup operations
- You need to save or log the results in a specific way
```python
@after_kickoff
def process_results(self, result: CrewOutput) -> CrewOutput:
"""Process and format the results after the crew completes."""
result.raw = result.raw.strip()
result.raw = f"""
# Research Results
Generated on: {datetime.now().isoformat()}
{result.raw}
"""
return result
```
#### @callback
Used to handle events during crew execution. Use this when:
- You need to monitor task progress
- You want to log intermediate results
- You need to implement custom progress tracking or metrics
```python
@callback
def log_task_completion(self, task: Task, output: str):
"""Log task completion details for monitoring."""
print(f"Task '{task.description}' completed")
print(f"Output length: {len(output)} characters")
print(f"Agent used: {task.agent.role}")
print("-" * 50)
```
#### @cache_handler
Used to implement custom caching for task results. Use this when:
- You want to avoid redundant expensive operations
- You need to implement custom cache storage or expiration logic
- You want to persist results between runs
```python
@cache_handler
def custom_cache(self, key: str) -> Optional[str]:
"""Custom cache implementation for storing task results."""
cache_file = f"cache/{key}.json"
if os.path.exists(cache_file):
with open(cache_file, 'r') as f:
data = json.load(f)
# Check if cache is still valid (e.g., not expired)
if datetime.fromisoformat(data['timestamp']) > datetime.now() - timedelta(days=1):
return data['result']
return None
```
<Note>
These decorators are part of the CrewAI framework and help organize your crew's structure by automatically collecting agents, tasks, and handling various lifecycle events.
They should be used within a class decorated with `@CrewBase`.
</Note>
<Tip>
In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
You can learn more about the core concepts [here](/concepts).
</Tip>
### Replay Tasks from Latest Crew Kickoff

View File

@@ -1,118 +1,78 @@
---
title: Composio Tool
description: Composio provides 250+ production-ready tools for AI agents with flexible authentication management.
description: The `ComposioTool` is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
icon: gear-code
---
# `ComposioToolSet`
# `ComposioTool`
## Description
Composio is an integration platform that allows you to connect your AI agents to 250+ tools. Key features include:
- **Enterprise-Grade Authentication**: Built-in support for OAuth, API Keys, JWT with automatic token refresh
- **Full Observability**: Detailed tool usage logs, execution timestamps, and more
This tools is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
## Installation
To incorporate Composio tools into your project, follow the instructions below:
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install composio-crewai
pip install crewai
pip install composio-core
pip install 'crewai[tools]'
```
After the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`. Get your Composio API key from [here](https://app.composio.dev)
after the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`.
## Example
The following example demonstrates how to initialize the tool and execute a github action:
1. Initialize Composio toolset
1. Initialize Composio tools
```python Code
from composio_crewai import ComposioToolSet, App, Action
from crewai import Agent, Task, Crew
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task
toolset = ComposioToolSet()
tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER)]
```
2. Connect your GitHub account
<CodeGroup>
```shell CLI
composio add github
```
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
```python Code
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
```
</CodeGroup>
3. Get Tools
or use `use_case` to search relevant actions
- Retrieving all the tools from an app (not recommended for production):
```python Code
tools = toolset.get_tools(apps=[App.GITHUB])
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
```
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we will use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define agent
2. Define agent
```python Code
crewai_agent = Agent(
role="GitHub Agent",
goal="You take action on GitHub using GitHub APIs",
backstory="You are AI agent that is responsible for taking actions on GitHub on behalf of users using GitHub APIs",
role="Github Agent",
goal="You take action on Github using Github APIs",
backstory=(
"You are AI agent that is responsible for taking actions on Github "
"on users behalf. You need to take action on Github using Github APIs"
),
verbose=True,
tools=tools,
llm= # pass an llm
)
```
5. Execute task
3. Execute task
```python Code
task = Task(
description="Star a repo composiohq/composio on GitHub",
description="Star a repo ComposioHQ/composio on GitHub",
agent=crewai_agent,
expected_output="Status of the operation",
expected_output="if the star happened",
)
crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
task.execute()
```
* More detailed list of tools can be found [here](https://app.composio.dev)
* More detailed list of tools can be found [here](https://app.composio.dev)

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.98.0"
version = "0.86.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -11,22 +11,27 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.57.4",
"litellm>=1.56.4",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
"regex>=2024.9.11",
# Telemetry and Monitoring
"opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
# Data Handling
"chromadb>=0.5.23",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
"auth0-python>=4.7.1",
"python-dotenv>=1.0.0",
# Configuration and Utils
"click>=8.1.7",
"appdirs>=1.4.4",
@@ -36,7 +41,6 @@ dependencies = [
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"blinker>=1.9.0",
"json5>=0.10.0",
]
[project.urls]
@@ -45,7 +49,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.32.1"]
tools = ["crewai-tools>=0.17.0"]
embeddings = [
"tiktoken~=0.7.0"
]

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.98.0"
__version__ = "0.86.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -1,3 +1,6 @@
from __future__ import annotations
import os
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Union
@@ -7,6 +10,7 @@ from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.cli.constants import ENV_VARS, LITELLM_PARAMS
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
@@ -19,7 +23,9 @@ from crewai.tools.base_tool import Tool
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.logger import Logger
from crewai.utilities.rpm_controller import RPMController
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
@@ -44,24 +50,113 @@ class Agent(BaseAgent):
Each agent has a role, a goal, a backstory, and an optional language model (llm).
The agent can also have memory, can operate in verbose mode, and can delegate tasks to other agents.
Attributes:
agent_executor: An instance of the CrewAgentExecutor class.
role: The role of the agent.
goal: The objective of the agent.
backstory: The backstory of the agent.
knowledge: The knowledge base of the agent.
config: Dict representation of agent configuration.
llm: The language model that will run the agent.
function_calling_llm: The language model that will handle the tool calling for this agent, it overrides the crew function_calling_llm.
max_iter: Maximum number of iterations for an agent to execute a task.
memory: Whether the agent should have memory or not.
max_rpm: Maximum number of requests per minute for the agent execution to be respected.
verbose: Whether the agent execution should be in verbose mode.
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution.
knowledge_sources: Knowledge sources for the agent.
Args:
role (Optional[str]): The role of the agent
goal (Optional[str]): The objective of the agent
backstory (Optional[str]): The backstory of the agent
allow_delegation (bool): Whether the agent can delegate tasks
config (Optional[Dict[str, Any]]): Configuration for the agent
verbose (bool): Whether to enable verbose output
max_rpm (Optional[int]): Maximum requests per minute
tools (Optional[List[Any]]): Tools available to the agent
llm (Optional[Union[str, Any]]): Language model to use
function_calling_llm (Optional[Any]): Language model for tool calling
max_iter (Optional[int]): Maximum iterations for task execution
memory (bool): Whether the agent should have memory
step_callback (Optional[Any]): Callback after each execution step
knowledge_sources (Optional[List[BaseKnowledgeSource]]): Knowledge sources
"""
model_config = {
"arbitrary_types_allowed": True,
"extra": "allow",
}
def __init__(
self,
role: Optional[str] = None,
goal: Optional[str] = None,
backstory: Optional[str] = None,
allow_delegation: bool = False,
config: Optional[Dict[str, Any]] = None,
verbose: bool = False,
max_rpm: Optional[int] = None,
tools: Optional[List[Any]] = None,
llm: Optional[Union[str, LLM, Any]] = None,
function_calling_llm: Optional[Any] = None,
max_iter: Optional[int] = None,
memory: bool = True,
step_callback: Optional[Any] = None,
knowledge_sources: Optional[List[BaseKnowledgeSource]] = None,
**kwargs
) -> None:
"""Initialize an Agent with the given parameters."""
# Process tools before passing to parent
processed_tools = []
if tools:
from crewai.tools import BaseTool
for tool in tools:
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif callable(tool):
# Convert function to BaseTool
processed_tools.append(Tool.from_function(tool))
else:
raise ValueError(f"Tool {tool} must be either a BaseTool instance or a callable")
# Process LLM before passing to parent
processed_llm = None
if isinstance(llm, str):
processed_llm = LLM(model=llm)
elif isinstance(llm, LLM):
processed_llm = llm
elif llm is not None and hasattr(llm, 'model') and hasattr(llm, 'temperature'):
# Handle ChatOpenAI and similar objects
model_name = getattr(llm, 'model', None)
if model_name is not None:
if not isinstance(model_name, str):
model_name = str(model_name)
processed_llm = LLM(
model=model_name,
temperature=getattr(llm, 'temperature', None),
api_key=getattr(llm, 'api_key', None),
base_url=getattr(llm, 'base_url', None)
)
# If no valid LLM configuration found, leave as None for post_init_setup
# Initialize all fields in a dict
init_dict = {
"role": role,
"goal": goal,
"backstory": backstory,
"allow_delegation": allow_delegation,
"config": config,
"verbose": verbose,
"max_rpm": max_rpm,
"tools": processed_tools,
"max_iter": max_iter if max_iter is not None else 25,
"function_calling_llm": function_calling_llm,
"step_callback": step_callback,
"knowledge_sources": knowledge_sources,
**kwargs
}
# Initialize base model with all fields
super().__init__(**init_dict)
# Store original values for interpolation
self._original_role = role
self._original_goal = goal
self._original_backstory = backstory
# Set LLM after base initialization to ensure proper model handling
self.llm = processed_llm
# Initialize private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm:
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
self._token_process = TokenProcess()
_times_executed: int = PrivateAttr(default=0)
max_execution_time: Optional[int] = Field(
@@ -84,7 +179,7 @@ class Agent(BaseAgent):
llm: Union[str, InstanceOf[LLM], Any] = Field(
description="Language model that will run the agent.", default=None
)
function_calling_llm: Optional[Union[str, InstanceOf[LLM], Any]] = Field(
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
system_template: Optional[str] = Field(
@@ -137,11 +232,146 @@ class Agent(BaseAgent):
@model_validator(mode="after")
def post_init_setup(self):
self._set_knowledge()
self.agent_ops_agent_name = self.role
self.agent_ops_agent_name = self.role or "agent"
unaccepted_attributes = [
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME",
]
self.llm = create_llm(self.llm)
if self.function_calling_llm and not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = create_llm(self.function_calling_llm)
# Handle LLM initialization if not already done
if self.llm is None:
# Determine the model name from environment variables or use default
model_name = (
os.environ.get("OPENAI_MODEL_NAME")
or os.environ.get("MODEL")
or "gpt-4o-mini"
)
llm_params = {"model": model_name}
api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
"OPENAI_BASE_URL"
)
if api_base:
llm_params["base_url"] = api_base
set_provider = model_name.split("/")[0] if "/" in model_name else "openai"
# Iterate over all environment variables to find matching API keys or use defaults
for provider, env_vars in ENV_VARS.items():
if provider == set_provider:
for env_var in env_vars:
# Check if the environment variable is set
key_name = env_var.get("key_name")
if key_name and key_name not in unaccepted_attributes:
env_value = os.environ.get(key_name)
if env_value:
key_name = key_name.lower()
for pattern in LITELLM_PARAMS:
if pattern in key_name:
key_name = pattern
break
llm_params[key_name] = env_value
# Check for default values if the environment variable is not set
elif env_var.get("default", False):
for key, value in env_var.items():
if key not in ["prompt", "key_name", "default"]:
# Only add default if the key is already set in os.environ
if key in os.environ:
try:
# Create a new dictionary for properly typed parameters
typed_params = {}
# Convert and validate values based on parameter type
if key in ['temperature', 'top_p', 'presence_penalty', 'frequency_penalty']:
if value is not None:
try:
typed_params[key] = float(value)
except (ValueError, TypeError):
pass
elif key in ['n', 'max_tokens', 'max_completion_tokens', 'seed']:
if value is not None:
try:
typed_params[key] = int(value)
except (ValueError, TypeError):
pass
elif key == 'logit_bias' and isinstance(value, str):
try:
bias_dict = {}
for pair in value.split(','):
token_id, bias = pair.split(':')
bias_dict[int(token_id.strip())] = float(bias.strip())
typed_params[key] = bias_dict
except (ValueError, AttributeError):
pass
elif key == 'response_format' and isinstance(value, str):
try:
import json
typed_params[key] = json.loads(value)
except json.JSONDecodeError:
pass
elif key == 'logprobs':
if value is not None:
typed_params[key] = bool(value.lower() == 'true') if isinstance(value, str) else bool(value)
elif key == 'callbacks':
typed_params[key] = [] if value is None else [value] if isinstance(value, str) else value
elif key == 'stop':
typed_params[key] = [value] if isinstance(value, str) else value
elif key in ['model', 'base_url', 'api_version', 'api_key']:
typed_params[key] = value
# Update llm_params with properly typed values
if typed_params:
llm_params.update(typed_params)
except (ValueError, AttributeError, json.JSONDecodeError):
continue
# Create LLM instance with properly typed parameters
valid_params = {
'model', 'timeout', 'temperature', 'top_p', 'n', 'stop',
'max_completion_tokens', 'max_tokens', 'presence_penalty',
'frequency_penalty', 'logit_bias', 'response_format',
'seed', 'logprobs', 'top_logprobs', 'base_url',
'api_version', 'api_key', 'callbacks'
}
# Filter out None values and invalid parameters
filtered_params = {}
for k, v in llm_params.items():
if k in valid_params and v is not None:
filtered_params[k] = v
# Create LLM instance with properly typed parameters
self.llm = LLM(**filtered_params)
else:
# For any other type, attempt to extract relevant attributes
llm_params = {
"model": getattr(self.llm, "model_name", None)
or getattr(self.llm, "deployment_name", None)
or str(self.llm),
"temperature": getattr(self.llm, "temperature", None),
"max_tokens": getattr(self.llm, "max_tokens", None),
"logprobs": getattr(self.llm, "logprobs", None),
"timeout": getattr(self.llm, "timeout", None),
"max_retries": getattr(self.llm, "max_retries", None),
"api_key": getattr(self.llm, "api_key", None),
"base_url": getattr(self.llm, "base_url", None),
"organization": getattr(self.llm, "organization", None),
}
# Remove None values to avoid passing unnecessary parameters
llm_params = {k: v for k, v in llm_params.items() if v is not None}
self.llm = LLM(**llm_params)
# Similar handling for function_calling_llm
if self.function_calling_llm:
if isinstance(self.function_calling_llm, str):
self.function_calling_llm = LLM(model=self.function_calling_llm)
elif not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = LLM(
model=getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or str(self.function_calling_llm)
)
if not self.agent_executor:
self._setup_agent_executor()
@@ -159,7 +389,7 @@ class Agent(BaseAgent):
def _set_knowledge(self):
try:
if self.knowledge_sources:
knowledge_agent_name = f"{self.role.replace(' ', '_')}"
knowledge_agent_name = f"{(self.role or 'agent').replace(' ', '_')}"
if isinstance(self.knowledge_sources, list) and all(
isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
):
@@ -259,9 +489,6 @@ class Agent(BaseAgent):
}
)["output"]
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
self._times_executed += 1
if self._times_executed > self.max_retry_limit:
raise e
@@ -307,6 +534,32 @@ class Agent(BaseAgent):
self.response_template.split("{{ .Response }}")[1].strip()
)
# Ensure LLM is initialized with proper error handling
try:
if not self.llm:
self.llm = LLM(model="gpt-4")
if hasattr(self, '_logger'):
self._logger.debug("Initialized default LLM with gpt-4 model")
except Exception as e:
if hasattr(self, '_logger'):
self._logger.error(f"Failed to initialize LLM: {str(e)}")
raise
# Create token callback with proper error handling
try:
token_callback = None
if hasattr(self, '_token_process'):
token_callback = TokenCalcHandler(self._token_process)
except Exception as e:
if hasattr(self, '_logger'):
self._logger.warning(f"Failed to create token callback: {str(e)}")
token_callback = None
# Initialize callbacks list
executor_callbacks = []
if token_callback:
executor_callbacks.append(token_callback)
self.agent_executor = CrewAgentExecutor(
llm=self.llm,
task=task,
@@ -324,9 +577,9 @@ class Agent(BaseAgent):
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=(
self._rpm_controller.check_or_wait if self._rpm_controller else None
self._rpm_controller.check_or_wait if (hasattr(self, '_rpm_controller') and self._rpm_controller is not None) else None
),
callbacks=[TokenCalcHandler(self._token_process)],
callbacks=executor_callbacks,
)
def get_delegation_tools(self, agents: List[BaseAgent]):
@@ -336,7 +589,6 @@ class Agent(BaseAgent):
def get_multimodal_tools(self) -> List[Tool]:
from crewai.tools.agent_tools.add_image_tool import AddImageTool
return [AddImageTool()]
def get_code_execution_tools(self):

View File

@@ -18,6 +18,7 @@ from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.tools import BaseTool
from crewai.tools.base_tool import Tool
from crewai.utilities import I18N, Logger, RPMController
@@ -87,9 +88,9 @@ class BaseAgent(ABC, BaseModel):
formatting_errors: int = Field(
default=0, description="Number of formatting errors."
)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
role: Optional[str] = Field(default=None, description="Role of the agent")
goal: Optional[str] = Field(default=None, description="Objective of the agent")
backstory: Optional[str] = Field(default=None, description="Backstory of the agent")
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent", default=None, exclude=True
)
@@ -130,26 +131,47 @@ class BaseAgent(ABC, BaseModel):
max_tokens: Optional[int] = Field(
default=None, description="Maximum number of tokens for the agent's execution."
)
function_calling_llm: Optional[Any] = Field(
default=None, description="Language model for function calling."
)
step_callback: Optional[Any] = Field(
default=None, description="Callback for execution steps."
)
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
default=None, description="Knowledge sources for the agent."
)
model_config = {
"arbitrary_types_allowed": True,
"extra": "allow", # Allow extra fields in constructor
}
@model_validator(mode="before")
@classmethod
def process_model_config(cls, values):
"""Process configuration values before model initialization."""
return process_config(values, cls)
@field_validator("tools")
@classmethod
def validate_tools(cls, tools: List[Any]) -> List[BaseTool]:
def validate_tools(cls, tools: Optional[List[Any]]) -> List[BaseTool]:
"""Validate and process the tools provided to the agent.
This method ensures that each tool is either an instance of BaseTool
or an object with 'name', 'func', and 'description' attributes. If the
tool meets these criteria, it is processed and added to the list of
tools. Otherwise, a ValueError is raised.
This method ensures that each tool is either an instance of BaseTool,
a function decorated with @tool, or an object with 'name', 'func',
and 'description' attributes. If the tool meets these criteria, it is
processed and added to the list of tools. Otherwise, a ValueError is raised.
"""
if not tools:
return []
processed_tools = []
for tool in tools:
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif callable(tool) and hasattr(tool, "_is_tool") and tool._is_tool:
# Handle @tool decorated functions
processed_tools.append(Tool.from_function(tool))
elif (
hasattr(tool, "name")
and hasattr(tool, "func")
@@ -157,31 +179,54 @@ class BaseAgent(ABC, BaseModel):
):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
else:
raise ValueError(
f"Invalid tool type: {type(tool)}. "
"Tool must be an instance of BaseTool or "
"an object with 'name', 'func', and 'description' attributes."
"Tool must be an instance of BaseTool, a @tool decorated function, "
"or an object with 'name', 'func', and 'description' attributes."
)
return processed_tools
@model_validator(mode="after")
def validate_and_set_attributes(self):
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
"""Validate and set attributes for the agent.
This method ensures that attributes are properly set and initialized,
either from direct parameters or configuration.
"""
# Store original values for interpolation
self._original_role = self.role
self._original_goal = self.goal
self._original_backstory = self.backstory
# Set private attributes
# Process config if provided
if self.config:
config_data = self.config
if isinstance(config_data, str):
import json
try:
config_data = json.loads(config_data)
except json.JSONDecodeError:
raise ValueError("Invalid JSON in config")
# Update fields from config if they're None
for field in ["role", "goal", "backstory"]:
if field in config_data and getattr(self, field) is None:
setattr(self, field, config_data[field])
# Set default values for required fields if they're still None
self.role = self.role or "Assistant"
self.goal = self.goal or "Help the user accomplish their tasks"
self.backstory = self.backstory or "I am an AI assistant ready to help"
# Initialize tools handler if not set
if not hasattr(self, 'tools_handler') or self.tools_handler is None:
self.tools_handler = ToolsHandler()
# Initialize logger and rpm controller
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
if not self._token_process:
self._token_process = TokenProcess()
if self.max_rpm:
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
return self
@@ -208,9 +253,9 @@ class BaseAgent(ABC, BaseModel):
@property
def key(self):
source = [
self._original_role or self.role,
self._original_goal or self.goal,
self._original_backstory or self.backstory,
str(self._original_role or self.role or ""),
str(self._original_goal or self.goal or ""),
str(self._original_backstory or self.backstory or ""),
]
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
@@ -256,29 +301,45 @@ class BaseAgent(ABC, BaseModel):
"tools_handler",
"cache_handler",
"llm",
"function_calling_llm",
}
# Copy llm and clear callbacks
existing_llm = shallow_copy(self.llm)
# Copy LLMs and clear callbacks
existing_llm = shallow_copy(self.llm) if self.llm else None
existing_function_calling_llm = shallow_copy(self.function_calling_llm) if self.function_calling_llm else None
# Create base data
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
# Create new instance with copied data
copied_agent = type(self)(
**copied_data,
llm=existing_llm,
function_calling_llm=existing_function_calling_llm,
tools=self.tools
)
# Copy private attributes
copied_agent._original_role = self._original_role
copied_agent._original_goal = self._original_goal
copied_agent._original_backstory = self._original_backstory
return copied_agent
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
self._original_role = self.role or ""
if self._original_goal is None:
self._original_goal = self.goal
self._original_goal = self.goal or ""
if self._original_backstory is None:
self._original_backstory = self.backstory
self._original_backstory = self.backstory or ""
if inputs:
self.role = self._original_role.format(**inputs)
self.goal = self._original_goal.format(**inputs)
self.backstory = self._original_backstory.format(**inputs)
self.role = self._original_role.format(**inputs) if self._original_role else None
self.goal = self._original_goal.format(**inputs) if self._original_goal else None
self.backstory = self._original_backstory.format(**inputs) if self._original_backstory else None
def set_cache_handler(self, cache_handler: CacheHandler) -> None:
"""Set the cache handler for the agent.

View File

@@ -19,10 +19,15 @@ class CrewAgentExecutorMixin:
agent: Optional["BaseAgent"]
task: Optional["Task"]
iterations: int
have_forced_answer: bool
max_iter: int
_i18n: I18N
_printer: Printer = Printer()
def _should_force_answer(self) -> bool:
"""Determine if a forced answer is required based on iteration count."""
return (self.iterations >= self.max_iter) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met."""
if (
@@ -77,16 +82,17 @@ class CrewAgentExecutorMixin:
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join(
[f"- {r}" for r in entity.relationships]
),
)
self.crew._entity_memory.save(entity_memory)
if hasattr(evaluation, 'entities') and evaluation.entities:
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join(
[f"- {r}" for r in entity.relationships]
),
)
self.crew._entity_memory.save(entity_memory)
except AttributeError as e:
print(f"Missing attributes for long term memory: {e}")
pass

View File

@@ -25,7 +25,7 @@ class OutputConverter(BaseModel, ABC):
llm: Any = Field(description="The language model to be used to convert the text.")
model: Any = Field(description="The model to be used to convert the text.")
instructions: str = Field(description="Conversion instructions to the LLM.")
max_attempts: int = Field(
max_attempts: Optional[int] = Field(
description="Max number of attempts to try to get the output formatted.",
default=3,
)

View File

@@ -2,26 +2,25 @@ from crewai.types.usage_metrics import UsageMetrics
class TokenProcess:
def __init__(self) -> None:
self.total_tokens: int = 0
self.prompt_tokens: int = 0
self.cached_prompt_tokens: int = 0
self.completion_tokens: int = 0
self.successful_requests: int = 0
total_tokens: int = 0
prompt_tokens: int = 0
cached_prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
def sum_prompt_tokens(self, tokens: int) -> None:
self.prompt_tokens += tokens
self.total_tokens += tokens
def sum_prompt_tokens(self, tokens: int):
self.prompt_tokens = self.prompt_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_completion_tokens(self, tokens: int) -> None:
self.completion_tokens += tokens
self.total_tokens += tokens
def sum_completion_tokens(self, tokens: int):
self.completion_tokens = self.completion_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_cached_prompt_tokens(self, tokens: int) -> None:
self.cached_prompt_tokens += tokens
def sum_cached_prompt_tokens(self, tokens: int):
self.cached_prompt_tokens = self.cached_prompt_tokens + tokens
def sum_successful_requests(self, requests: int) -> None:
self.successful_requests += requests
def sum_successful_requests(self, requests: int):
self.successful_requests = self.successful_requests + requests
def get_summary(self) -> UsageMetrics:
return UsageMetrics(

View File

@@ -1,7 +1,7 @@
import json
import re
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional, Union
from typing import Any, Dict, List, Union
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
@@ -13,7 +13,6 @@ from crewai.agents.parser import (
OutputParserException,
)
from crewai.agents.tools_handler import ToolsHandler
from crewai.llm import LLM
from crewai.tools.base_tool import BaseTool
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N, Printer
@@ -51,11 +50,11 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
original_tools: List[Any] = [],
function_calling_llm: Any = None,
respect_context_window: bool = False,
request_within_rpm_limit: Optional[Callable[[], bool]] = None,
request_within_rpm_limit: Any = None,
callbacks: List[Any] = [],
):
self._i18n: I18N = I18N()
self.llm: LLM = llm
self.llm = llm
self.task = task
self.agent = agent
self.crew = crew
@@ -69,7 +68,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.tools_handler = tools_handler
self.original_tools = original_tools
self.step_callback = step_callback
self.use_stop_words = self.llm.supports_stop_words()
self.use_stop_words = self.llm.supports_stop_words() if self.llm else False
self.tools_description = tools_description
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
@@ -78,11 +77,14 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.messages: List[Dict[str, str]] = []
self.iterations = 0
self.log_error_after = 3
self.have_forced_answer = False
self.tool_name_to_tool_map: Dict[str, BaseTool] = {
tool.name: tool for tool in self.tools
}
self.stop = stop_words
self.llm.stop = list(set(self.llm.stop + self.stop))
if self.llm.stop:
self.llm.stop = list(set(self.llm.stop + self.stop))
else:
self.llm.stop = self.stop
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
if "system" in self.prompt:
@@ -97,16 +99,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
try:
formatted_answer = self._invoke_loop()
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
else:
self._handle_unknown_error(e)
raise e
formatted_answer = self._invoke_loop()
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
@@ -115,186 +108,121 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._create_long_term_memory(formatted_answer)
return {"output": formatted_answer.output}
def _invoke_loop(self):
"""
Main loop to invoke the agent's thought process until it reaches a conclusion
or the maximum number of iterations is reached.
"""
formatted_answer = None
while not isinstance(formatted_answer, AgentFinish):
try:
if self._has_reached_max_iterations():
formatted_answer = self._handle_max_iterations_exceeded(
formatted_answer
)
break
self._enforce_rpm_limit()
answer = self._get_llm_response()
formatted_answer = self._process_llm_response(answer)
if isinstance(formatted_answer, AgentAction):
tool_result = self._execute_tool_and_check_finality(
formatted_answer
)
formatted_answer = self._handle_agent_action(
formatted_answer, tool_result
)
self._invoke_step_callback(formatted_answer)
self._append_message(formatted_answer.text, role="assistant")
except OutputParserException as e:
formatted_answer = self._handle_output_parser_exception(e)
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if self._is_context_length_exceeded(e):
self._handle_context_length()
continue
else:
self._handle_unknown_error(e)
raise e
finally:
self.iterations += 1
self._show_logs(formatted_answer)
return formatted_answer
def _handle_unknown_error(self, exception: Exception) -> None:
"""Handle unknown errors by informing the user."""
self._printer.print(
content="An unknown error occurred. Please check the details below.",
color="red",
)
self._printer.print(
content=f"Error details: {exception}",
color="red",
)
def _has_reached_max_iterations(self) -> bool:
"""Check if the maximum number of iterations has been reached."""
return self.iterations >= self.max_iter
def _enforce_rpm_limit(self) -> None:
"""Enforce the requests per minute (RPM) limit if applicable."""
if self.request_within_rpm_limit:
self.request_within_rpm_limit()
def _get_llm_response(self) -> str:
"""Call the LLM and return the response, handling any invalid responses."""
def _invoke_loop(self, formatted_answer=None):
try:
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
while not isinstance(formatted_answer, AgentFinish):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
if answer is None or answer == "":
self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError(
"Invalid response from LLM call - None or empty."
)
if not self.use_stop_words:
try:
self._format_answer(answer)
except OutputParserException as e:
if (
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE
in e.error
):
answer = answer.split("Observation:")[0].strip()
self.iterations += 1
formatted_answer = self._format_answer(answer)
if isinstance(formatted_answer, AgentAction):
tool_result = self._execute_tool_and_check_finality(
formatted_answer
)
# Directly append the result to the messages if the
# tool is "Add image to content" in case of multimodal
# agents
add_image_tool_name = self._i18n.tools("add_image")
if add_image_tool_name and formatted_answer.tool == add_image_tool_name:
self.messages.append(tool_result.result)
continue
else:
if self.step_callback:
self.step_callback(tool_result)
formatted_answer.text += f"\nObservation: {tool_result.result}"
formatted_answer.result = tool_result.result
if tool_result.result_as_answer:
return AgentFinish(
thought="",
output=tool_result.result,
text=formatted_answer.text,
)
self._show_logs(formatted_answer)
if self.step_callback:
self.step_callback(formatted_answer)
if self._should_force_answer():
if self.have_forced_answer:
return AgentFinish(
thought="",
output=self._i18n.errors(
"force_final_answer_error"
).format(formatted_answer.text),
text=formatted_answer.text,
)
else:
formatted_answer.text += (
f'\n{self._i18n.errors("force_final_answer")}'
)
self.have_forced_answer = True
self.messages.append(
self._format_msg(formatted_answer.text, role="assistant")
)
except OutputParserException as e:
self.messages.append({"role": "user", "content": e.error})
if self.iterations > self.log_error_after:
self._printer.print(
content=f"Error parsing LLM output, agent will retry: {e.error}",
color="red",
)
return self._invoke_loop(formatted_answer)
except Exception as e:
self._printer.print(
content=f"Error during LLM call: {e}",
color="red",
)
raise e
if not answer:
self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
return answer
def _process_llm_response(self, answer: str) -> Union[AgentAction, AgentFinish]:
"""Process the LLM response and format it into an AgentAction or AgentFinish."""
if not self.use_stop_words:
try:
# Preliminary parsing to check for errors.
self._format_answer(answer)
except OutputParserException as e:
if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error:
answer = answer.split("Observation:")[0].strip()
return self._format_answer(answer)
def _handle_agent_action(
self, formatted_answer: AgentAction, tool_result: ToolResult
) -> Union[AgentAction, AgentFinish]:
"""Handle the AgentAction, execute tools, and process the results."""
add_image_tool = self._i18n.tools("add_image")
if (
isinstance(add_image_tool, dict)
and formatted_answer.tool.casefold().strip()
== add_image_tool.get("name", "").casefold().strip()
):
self.messages.append(tool_result.result)
return formatted_answer # Continue the loop
if self.step_callback:
self.step_callback(tool_result)
formatted_answer.text += f"\nObservation: {tool_result.result}"
formatted_answer.result = tool_result.result
if tool_result.result_as_answer:
return AgentFinish(
thought="",
output=tool_result.result,
text=formatted_answer.text,
)
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
self._handle_context_length()
return self._invoke_loop(formatted_answer)
else:
raise e
self._show_logs(formatted_answer)
return formatted_answer
def _invoke_step_callback(self, formatted_answer) -> None:
"""Invoke the step callback if it exists."""
if self.step_callback:
self.step_callback(formatted_answer)
def _append_message(self, text: str, role: str = "assistant") -> None:
"""Append a message to the message list with the given role."""
self.messages.append(self._format_msg(text, role=role))
def _handle_output_parser_exception(self, e: OutputParserException) -> AgentAction:
"""Handle OutputParserException by updating messages and formatted_answer."""
self.messages.append({"role": "user", "content": e.error})
formatted_answer = AgentAction(
text=e.error,
tool="",
tool_input="",
thought="",
)
if self.iterations > self.log_error_after:
self._printer.print(
content=f"Error parsing LLM output, agent will retry: {e.error}",
color="red",
)
return formatted_answer
def _is_context_length_exceeded(self, exception: Exception) -> bool:
"""Check if the exception is due to context length exceeding."""
return LLMContextLengthExceededException(
str(exception)
)._is_context_limit_error(str(exception))
def _show_start_logs(self):
if self.agent is None:
raise ValueError("Agent cannot be None")
if self.agent.verbose or (
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
):
agent_role = self.agent.role.split("\n")[0]
agent_role = self.agent.role.split("\n")[0] if self.agent and self.agent.role else ""
self._printer.print(
content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
)
self._printer.print(
content=f"\033[95m## Task:\033[00m \033[92m{self.task.description}\033[00m"
)
if self.task and self.task.description:
self._printer.print(
content=f"\033[95m## Task:\033[00m \033[92m{self.task.description}\033[00m"
)
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
if self.agent is None:
@@ -302,7 +230,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.agent.verbose or (
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
):
agent_role = self.agent.role.split("\n")[0]
agent_role = self.agent.role.split("\n")[0] if self.agent and self.agent.role else ""
if isinstance(formatted_answer, AgentAction):
thought = re.sub(r"\n+", "\n", formatted_answer.thought)
formatted_json = json.dumps(
@@ -346,7 +274,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
agent=self.agent,
action=agent_action,
)
tool_calling = tool_usage.parse_tool_calling(agent_action.text)
tool_calling = tool_usage.parse(agent_action.text)
if isinstance(tool_calling, ToolUsageErrorException):
tool_result = tool_calling.message
@@ -561,45 +489,3 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.ask_for_human_input = False
return formatted_answer
def _handle_max_iterations_exceeded(self, formatted_answer):
"""
Handles the case when the maximum number of iterations is exceeded.
Performs one more LLM call to get the final answer.
Parameters:
formatted_answer: The last formatted answer from the agent.
Returns:
The final formatted answer after exceeding max iterations.
"""
self._printer.print(
content="Maximum iterations reached. Requesting final answer.",
color="yellow",
)
if formatted_answer and hasattr(formatted_answer, "text"):
assistant_message = (
formatted_answer.text + f'\n{self._i18n.errors("force_final_answer")}'
)
else:
assistant_message = self._i18n.errors("force_final_answer")
self.messages.append(self._format_msg(assistant_message, role="assistant"))
# Perform one more LLM call to get the final answer
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
if answer is None or answer == "":
self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
formatted_answer = self._format_answer(answer)
# Return the formatted answer, regardless of its type
return formatted_answer

View File

@@ -1,13 +1,11 @@
import os
from importlib.metadata import version as get_version
from typing import Optional, Tuple
from typing import Optional
import click
from crewai.cli.add_crew_to_flow import add_crew_to_flow
from crewai.cli.create_crew import create_crew
from crewai.cli.create_flow import create_flow
from crewai.cli.crew_chat import run_chat
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)
@@ -344,18 +342,5 @@ def flow_add_crew(crew_name):
add_crew_to_flow(crew_name)
@crewai.command()
def chat():
"""
Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses.
"""
click.secho(
"\nStarting a conversation with the Crew\n" "Type 'exit' or Ctrl+C to quit.\n",
)
run_chat()
if __name__ == "__main__":
crewai()

View File

@@ -17,12 +17,6 @@ ENV_VARS = {
"key_name": "GEMINI_API_KEY",
}
],
"nvidia_nim": [
{
"prompt": "Enter your NVIDIA API key (press Enter to skip)",
"key_name": "NVIDIA_NIM_API_KEY",
}
],
"groq": [
{
"prompt": "Enter your GROQ API key (press Enter to skip)",
@@ -91,12 +85,6 @@ ENV_VARS = {
"key_name": "CEREBRAS_API_KEY",
},
],
"sambanova": [
{
"prompt": "Enter your SambaNovaCloud API key (press Enter to skip)",
"key_name": "SAMBANOVA_API_KEY",
}
],
}
@@ -104,14 +92,12 @@ PROVIDERS = [
"openai",
"anthropic",
"gemini",
"nvidia_nim",
"groq",
"ollama",
"watson",
"bedrock",
"azure",
"cerebras",
"sambanova",
]
MODELS = {
@@ -128,75 +114,6 @@ MODELS = {
"gemini/gemini-gemma-2-9b-it",
"gemini/gemini-gemma-2-27b-it",
],
"nvidia_nim": [
"nvidia_nim/nvidia/mistral-nemo-minitron-8b-8k-instruct",
"nvidia_nim/nvidia/nemotron-4-mini-hindi-4b-instruct",
"nvidia_nim/nvidia/llama-3.1-nemotron-70b-instruct",
"nvidia_nim/nvidia/llama3-chatqa-1.5-8b",
"nvidia_nim/nvidia/llama3-chatqa-1.5-70b",
"nvidia_nim/nvidia/vila",
"nvidia_nim/nvidia/neva-22",
"nvidia_nim/nvidia/nemotron-mini-4b-instruct",
"nvidia_nim/nvidia/usdcode-llama3-70b-instruct",
"nvidia_nim/nvidia/nemotron-4-340b-instruct",
"nvidia_nim/meta/codellama-70b",
"nvidia_nim/meta/llama2-70b",
"nvidia_nim/meta/llama3-8b-instruct",
"nvidia_nim/meta/llama3-70b-instruct",
"nvidia_nim/meta/llama-3.1-8b-instruct",
"nvidia_nim/meta/llama-3.1-70b-instruct",
"nvidia_nim/meta/llama-3.1-405b-instruct",
"nvidia_nim/meta/llama-3.2-1b-instruct",
"nvidia_nim/meta/llama-3.2-3b-instruct",
"nvidia_nim/meta/llama-3.2-11b-vision-instruct",
"nvidia_nim/meta/llama-3.2-90b-vision-instruct",
"nvidia_nim/meta/llama-3.1-70b-instruct",
"nvidia_nim/google/gemma-7b",
"nvidia_nim/google/gemma-2b",
"nvidia_nim/google/codegemma-7b",
"nvidia_nim/google/codegemma-1.1-7b",
"nvidia_nim/google/recurrentgemma-2b",
"nvidia_nim/google/gemma-2-9b-it",
"nvidia_nim/google/gemma-2-27b-it",
"nvidia_nim/google/gemma-2-2b-it",
"nvidia_nim/google/deplot",
"nvidia_nim/google/paligemma",
"nvidia_nim/mistralai/mistral-7b-instruct-v0.2",
"nvidia_nim/mistralai/mixtral-8x7b-instruct-v0.1",
"nvidia_nim/mistralai/mistral-large",
"nvidia_nim/mistralai/mixtral-8x22b-instruct-v0.1",
"nvidia_nim/mistralai/mistral-7b-instruct-v0.3",
"nvidia_nim/nv-mistralai/mistral-nemo-12b-instruct",
"nvidia_nim/mistralai/mamba-codestral-7b-v0.1",
"nvidia_nim/microsoft/phi-3-mini-128k-instruct",
"nvidia_nim/microsoft/phi-3-mini-4k-instruct",
"nvidia_nim/microsoft/phi-3-small-8k-instruct",
"nvidia_nim/microsoft/phi-3-small-128k-instruct",
"nvidia_nim/microsoft/phi-3-medium-4k-instruct",
"nvidia_nim/microsoft/phi-3-medium-128k-instruct",
"nvidia_nim/microsoft/phi-3.5-mini-instruct",
"nvidia_nim/microsoft/phi-3.5-moe-instruct",
"nvidia_nim/microsoft/kosmos-2",
"nvidia_nim/microsoft/phi-3-vision-128k-instruct",
"nvidia_nim/microsoft/phi-3.5-vision-instruct",
"nvidia_nim/databricks/dbrx-instruct",
"nvidia_nim/snowflake/arctic",
"nvidia_nim/aisingapore/sea-lion-7b-instruct",
"nvidia_nim/ibm/granite-8b-code-instruct",
"nvidia_nim/ibm/granite-34b-code-instruct",
"nvidia_nim/ibm/granite-3.0-8b-instruct",
"nvidia_nim/ibm/granite-3.0-3b-a800m-instruct",
"nvidia_nim/mediatek/breeze-7b-instruct",
"nvidia_nim/upstage/solar-10.7b-instruct",
"nvidia_nim/writer/palmyra-med-70b-32k",
"nvidia_nim/writer/palmyra-med-70b",
"nvidia_nim/writer/palmyra-fin-70b-32k",
"nvidia_nim/01-ai/yi-large",
"nvidia_nim/deepseek-ai/deepseek-coder-6.7b-instruct",
"nvidia_nim/rakuten/rakutenai-7b-instruct",
"nvidia_nim/rakuten/rakutenai-7b-chat",
"nvidia_nim/baichuan-inc/baichuan2-13b-chat",
],
"groq": [
"groq/llama-3.1-8b-instant",
"groq/llama-3.1-70b-versatile",
@@ -239,23 +156,8 @@ MODELS = {
"bedrock/mistral.mistral-7b-instruct-v0:2",
"bedrock/mistral.mixtral-8x7b-instruct-v0:1",
],
"sambanova": [
"sambanova/Meta-Llama-3.3-70B-Instruct",
"sambanova/QwQ-32B-Preview",
"sambanova/Qwen2.5-72B-Instruct",
"sambanova/Qwen2.5-Coder-32B-Instruct",
"sambanova/Meta-Llama-3.1-405B-Instruct",
"sambanova/Meta-Llama-3.1-70B-Instruct",
"sambanova/Meta-Llama-3.1-8B-Instruct",
"sambanova/Llama-3.2-90B-Vision-Instruct",
"sambanova/Llama-3.2-11B-Vision-Instruct",
"sambanova/Meta-Llama-3.2-3B-Instruct",
"sambanova/Meta-Llama-3.2-1B-Instruct",
],
}
DEFAULT_LLM_MODEL = "gpt-4o-mini"
JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"

View File

@@ -1,536 +0,0 @@
import json
import platform
import re
import sys
import threading
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple
import click
import tomli
from packaging import version
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
MIN_REQUIRED_VERSION = "0.98.0"
def check_conversational_crews_version(
crewai_version: str, pyproject_data: dict
) -> bool:
"""
Check if the installed crewAI version supports conversational crews.
Args:
crewai_version: The current version of crewAI.
pyproject_data: Dictionary containing pyproject.toml data.
Returns:
bool: True if version check passes, False otherwise.
"""
try:
if version.parse(crewai_version) < version.parse(MIN_REQUIRED_VERSION):
click.secho(
"You are using an older version of crewAI that doesn't support conversational crews. "
"Run 'uv upgrade crewai' to get the latest version.",
fg="red",
)
return False
except version.InvalidVersion:
click.secho("Invalid crewAI version format detected.", fg="red")
return False
return True
def run_chat():
"""
Runs an interactive chat loop using the Crew's chat LLM with function calling.
Incorporates crew_name, crew_description, and input fields to build a tool schema.
Exits if crew_name or crew_description are missing.
"""
crewai_version = get_crewai_version()
pyproject_data = read_toml()
if not check_conversational_crews_version(crewai_version, pyproject_data):
return
crew, crew_name = load_crew_and_name()
chat_llm = initialize_chat_llm(crew)
if not chat_llm:
return
# Indicate that the crew is being analyzed
click.secho(
"\nAnalyzing crew and required inputs - this may take 3 to 30 seconds "
"depending on the complexity of your crew.",
fg="white",
)
# Start loading indicator
loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
loading_thread.start()
try:
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
)
finally:
# Stop loading indicator
loading_complete.set()
loading_thread.join()
# Indicate that the analysis is complete
click.secho("\nFinished analyzing crew.\n", fg="white")
click.secho(f"Assistant: {introductory_message}\n", fg="green")
messages = [
{"role": "system", "content": system_message},
{"role": "assistant", "content": introductory_message},
]
available_functions = {
crew_chat_inputs.crew_name: create_tool_function(crew, messages),
}
chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def show_loading(event: threading.Event):
"""Display animated loading dots while processing."""
while not event.is_set():
print(".", end="", flush=True)
time.sleep(1)
print()
def initialize_chat_llm(crew: Crew) -> Optional[LLM]:
"""Initializes the chat LLM and handles exceptions."""
try:
return create_llm(crew.chat_llm)
except Exception as e:
click.secho(
f"Unable to find a Chat LLM. Please make sure you set chat_llm on the crew: {e}",
fg="red",
)
return None
def build_system_message(crew_chat_inputs: ChatInputs) -> str:
"""Builds the initial system message for the chat."""
required_fields_str = (
", ".join(
f"{field.name} (desc: {field.description or 'n/a'})"
for field in crew_chat_inputs.inputs
)
or "(No required fields detected)"
)
return (
"You are a helpful AI assistant for the CrewAI platform. "
"Your primary purpose is to assist users with the crew's specific tasks. "
"You can answer general questions, but should guide users back to the crew's purpose afterward. "
"For example, after answering a general question, remind the user of your main purpose, such as generating a research report, and prompt them to specify a topic or task related to the crew's purpose. "
"You have a function (tool) you can call by name if you have all required inputs. "
f"Those required inputs are: {required_fields_str}. "
"Once you have them, call the function. "
"Please keep your responses concise and friendly. "
"If a user asks a question outside the crew's scope, provide a brief answer and remind them of the crew's purpose. "
"After calling the tool, be prepared to take user feedback and make adjustments as needed. "
"If you are ever unsure about a user's request or need clarification, ask the user for more information. "
"Before doing anything else, introduce yourself with a friendly message like: 'Hey! I'm here to help you with [crew's purpose]. Could you please provide me with [inputs] so we can get started?' "
"For example: 'Hey! I'm here to help you with uncovering and reporting cutting-edge developments through thorough research and detailed analysis. Could you please provide me with a topic you're interested in? This will help us generate a comprehensive research report and detailed analysis.'"
f"\nCrew Name: {crew_chat_inputs.crew_name}"
f"\nCrew Description: {crew_chat_inputs.crew_description}"
)
def create_tool_function(crew: Crew, messages: List[Dict[str, str]]) -> Any:
"""Creates a wrapper function for running the crew tool with messages."""
def run_crew_tool_with_messages(**kwargs):
return run_crew_tool(crew, messages, **kwargs)
return run_crew_tool_with_messages
def flush_input():
"""Flush any pending input from the user."""
if platform.system() == "Windows":
# Windows platform
import msvcrt
while msvcrt.kbhit():
msvcrt.getch()
else:
# Unix-like platforms (Linux, macOS)
import termios
termios.tcflush(sys.stdin, termios.TCIFLUSH)
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
"""Main chat loop for interacting with the user."""
while True:
try:
# Flush any pending input before accepting new input
flush_input()
user_input = get_user_input()
handle_user_input(
user_input, chat_llm, messages, crew_tool_schema, available_functions
)
except KeyboardInterrupt:
click.echo("\nExiting chat. Goodbye!")
break
except Exception as e:
click.secho(f"An error occurred: {e}", fg="red")
break
def get_user_input() -> str:
"""Collect multi-line user input with exit handling."""
click.secho(
"\nYou (type your message below. Press 'Enter' twice when you're done):",
fg="blue",
)
user_input_lines = []
while True:
line = input()
if line.strip().lower() == "exit":
return "exit"
if line == "":
break
user_input_lines.append(line)
return "\n".join(user_input_lines)
def handle_user_input(
user_input: str,
chat_llm: LLM,
messages: List[Dict[str, str]],
crew_tool_schema: Dict[str, Any],
available_functions: Dict[str, Any],
) -> None:
if user_input.strip().lower() == "exit":
click.echo("Exiting chat. Goodbye!")
return
if not user_input.strip():
click.echo("Empty message. Please provide input or type 'exit' to quit.")
return
messages.append({"role": "user", "content": user_input})
# Indicate that assistant is processing
click.echo()
click.secho("Assistant is processing your input. Please wait...", fg="green")
# Process assistant's response
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
"""
Dynamically build a Littellm 'function' schema for the given crew.
crew_name: The name of the crew (used for the function 'name').
crew_inputs: A ChatInputs object containing crew_description
and a list of input fields (each with a name & description).
"""
properties = {}
for field in crew_inputs.inputs:
properties[field.name] = {
"type": "string",
"description": field.description or "No description provided",
}
required_fields = [field.name for field in crew_inputs.inputs]
return {
"type": "function",
"function": {
"name": crew_inputs.crew_name,
"description": crew_inputs.crew_description or "No crew description",
"parameters": {
"type": "object",
"properties": properties,
"required": required_fields,
},
},
}
def run_crew_tool(crew: Crew, messages: List[Dict[str, str]], **kwargs):
"""
Runs the crew using crew.kickoff(inputs=kwargs) and returns the output.
Args:
crew (Crew): The crew instance to run.
messages (List[Dict[str, str]]): The chat messages up to this point.
**kwargs: The inputs collected from the user.
Returns:
str: The output from the crew's execution.
Raises:
SystemExit: Exits the chat if an error occurs during crew execution.
"""
try:
# Serialize 'messages' to JSON string before adding to kwargs
kwargs["crew_chat_messages"] = json.dumps(messages)
# Run the crew with the provided inputs
crew_output = crew.kickoff(inputs=kwargs)
# Convert CrewOutput to a string to send back to the user
result = str(crew_output)
return result
except Exception as e:
# Exit the chat and show the error message
click.secho("An error occurred while running the crew:", fg="red")
click.secho(str(e), fg="red")
sys.exit(1)
def load_crew_and_name() -> Tuple[Crew, str]:
"""
Loads the crew by importing the crew class from the user's project.
Returns:
Tuple[Crew, str]: A tuple containing the Crew instance and the name of the crew.
"""
# Get the current working directory
cwd = Path.cwd()
# Path to the pyproject.toml file
pyproject_path = cwd / "pyproject.toml"
if not pyproject_path.exists():
raise FileNotFoundError("pyproject.toml not found in the current directory.")
# Load the pyproject.toml file using 'tomli'
with pyproject_path.open("rb") as f:
pyproject_data = tomli.load(f)
# Get the project name from the 'project' section
project_name = pyproject_data["project"]["name"]
folder_name = project_name
# Derive the crew class name from the project name
# E.g., if project_name is 'my_project', crew_class_name is 'MyProject'
crew_class_name = project_name.replace("_", " ").title().replace(" ", "")
# Add the 'src' directory to sys.path
src_path = cwd / "src"
if str(src_path) not in sys.path:
sys.path.insert(0, str(src_path))
# Import the crew module
crew_module_name = f"{folder_name}.crew"
try:
crew_module = __import__(crew_module_name, fromlist=[crew_class_name])
except ImportError as e:
raise ImportError(f"Failed to import crew module {crew_module_name}: {e}")
# Get the crew class from the module
try:
crew_class = getattr(crew_module, crew_class_name)
except AttributeError:
raise AttributeError(
f"Crew class {crew_class_name} not found in module {crew_module_name}"
)
# Instantiate the crew
crew_instance = crew_class().crew()
return crew_instance, crew_class_name
def generate_crew_chat_inputs(crew: Crew, crew_name: str, chat_llm) -> ChatInputs:
"""
Generates the ChatInputs required for the crew by analyzing the tasks and agents.
Args:
crew (Crew): The crew object containing tasks and agents.
crew_name (str): The name of the crew.
chat_llm: The chat language model to use for AI calls.
Returns:
ChatInputs: An object containing the crew's name, description, and input fields.
"""
# Extract placeholders from tasks and agents
required_inputs = fetch_required_inputs(crew)
# Generate descriptions for each input using AI
input_fields = []
for input_name in required_inputs:
description = generate_input_description_with_ai(input_name, crew, chat_llm)
input_fields.append(ChatInputField(name=input_name, description=description))
# Generate crew description using AI
crew_description = generate_crew_description_with_ai(crew, chat_llm)
return ChatInputs(
crew_name=crew_name, crew_description=crew_description, inputs=input_fields
)
def fetch_required_inputs(crew: Crew) -> Set[str]:
"""
Extracts placeholders from the crew's tasks and agents.
Args:
crew (Crew): The crew object.
Returns:
Set[str]: A set of placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)\}")
required_inputs: Set[str] = set()
# Scan tasks
for task in crew.tasks:
text = f"{task.description or ''} {task.expected_output or ''}"
required_inputs.update(placeholder_pattern.findall(text))
# Scan agents
for agent in crew.agents:
text = f"{agent.role or ''} {agent.goal or ''} {agent.backstory or ''}"
required_inputs.update(placeholder_pattern.findall(text))
return required_inputs
def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) -> str:
"""
Generates an input description using AI based on the context of the crew.
Args:
input_name (str): The name of the input placeholder.
crew (Crew): The crew object.
chat_llm: The chat language model to use for AI calls.
Returns:
str: A concise description of the input.
"""
# Gather context from tasks and agents where the input is used
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)\}")
for task in crew.tasks:
if (
f"{{{input_name}}}" in task.description
or f"{{{input_name}}}" in task.expected_output
):
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
if (
f"{{{input_name}}}" in agent.role
or f"{{{input_name}}}" in agent.goal
or f"{{{input_name}}}" in agent.backstory
):
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")
context = "\n".join(context_texts)
if not context:
# If no context is found for the input, raise an exception as per instruction
raise ValueError(f"No context found for input '{input_name}'.")
prompt = (
f"Based on the following context, write a concise description (15 words or less) of the input '{input_name}'.\n"
"Provide only the description, without any extra text or labels. Do not include placeholders like '{topic}' in the description.\n"
"Context:\n"
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
description = response.strip()
return description
def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
"""
Generates a brief description of the crew using AI.
Args:
crew (Crew): The crew object.
chat_llm: The chat language model to use for AI calls.
Returns:
str: A concise description of the crew's purpose (15 words or less).
"""
# Gather context from tasks and agents
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)\}")
for task in crew.tasks:
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")
context = "\n".join(context_texts)
if not context:
raise ValueError("No context found for generating crew description.")
prompt = (
"Based on the following context, write a concise, action-oriented description (15 words or less) of the crew's purpose.\n"
"Provide only the description, without any extra text or labels. Do not include placeholders like '{topic}' in the description.\n"
"Context:\n"
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
crew_description = response.strip()
return crew_description

View File

@@ -5,6 +5,7 @@ from pathlib import Path
import click
import requests
from typing import Any
from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS
@@ -192,7 +193,7 @@ def download_data(response):
data_chunks = []
with click.progressbar(
length=total_size, label="Downloading", show_pos=True
) as progress_bar:
) as progress_bar: # type: Any
for chunk in response.iter_content(block_size):
if chunk:
data_chunks.append(chunk)

View File

@@ -1,3 +1,2 @@
.env
__pycache__/
.DS_Store

View File

@@ -2,7 +2,7 @@ research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is {current_year}.
the current year is 2024.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher

View File

@@ -2,8 +2,6 @@
import sys
import warnings
from datetime import datetime
from {{folder_name}}.crew import {{crew_name}}
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -18,14 +16,9 @@ def run():
Run the crew.
"""
inputs = {
'topic': 'AI LLMs',
'current_year': str(datetime.now().year)
'topic': 'AI LLMs'
}
try:
{{crew_name}}().crew().kickoff(inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while running the crew: {e}")
{{crew_name}}().crew().kickoff(inputs=inputs)
def train():
@@ -62,4 +55,4 @@ def test():
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while testing the crew: {e}")
raise Exception(f"An error occurred while replaying the crew: {e}")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.98.0,<1.0.0"
"crewai[tools]>=0.86.0,<1.0.0"
]
[project.scripts]

View File

@@ -1,4 +1,3 @@
.env
__pycache__/
lib/
.DS_Store

View File

@@ -3,7 +3,7 @@ from random import randint
from pydantic import BaseModel
from crewai.flow import Flow, listen, start
from crewai.flow.flow import Flow, listen, start
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.98.0,<1.0.0",
"crewai[tools]>=0.86.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.98.0"
"crewai[tools]>=0.86.0"
]
[tool.crewai]

View File

@@ -1,11 +1,12 @@
import asyncio
import json
import re
import uuid
import warnings
from concurrent.futures import Future
from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from crewai.tools.base_tool import BaseTool
from pydantic import (
UUID4,
@@ -46,7 +47,6 @@ from crewai.utilities.formatter import (
aggregate_raw_outputs_from_task_outputs,
aggregate_raw_outputs_from_tasks,
)
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler
@@ -83,7 +83,6 @@ class Crew(BaseModel):
step_callback: Callback to be executed after each step for every agents execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
planning: Plan the crew execution and add the plan to the crew.
chat_llm: The language model used for orchestrating chat interactions with the crew.
"""
__hash__ = object.__hash__ # type: ignore
@@ -150,7 +149,7 @@ class Crew(BaseModel):
manager_agent: Optional[BaseAgent] = Field(
description="Custom agent that will be used as manager.", default=None
)
function_calling_llm: Optional[Union[str, InstanceOf[LLM], Any]] = Field(
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
config: Optional[Union[Json, Dict[str, Any]]] = Field(default=None)
@@ -206,10 +205,6 @@ class Crew(BaseModel):
default=None,
description="Knowledge sources for the crew. Add knowledge sources to the knowledge object.",
)
chat_llm: Optional[Any] = Field(
default=None,
description="LLM used to handle chatting with the crew.",
)
_knowledge: Optional[Knowledge] = PrivateAttr(
default=None,
)
@@ -246,9 +241,15 @@ class Crew(BaseModel):
if self.output_log_file:
self._file_handler = FileHandler(self.output_log_file)
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
if self.function_calling_llm and not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = create_llm(self.function_calling_llm)
if self.function_calling_llm:
if isinstance(self.function_calling_llm, str):
self.function_calling_llm = LLM(model=self.function_calling_llm)
elif not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = LLM(
model=getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or str(self.function_calling_llm)
)
self._telemetry = Telemetry()
self._telemetry.set_tracer()
return self
@@ -513,8 +514,6 @@ class Crew(BaseModel):
inputs: Optional[Dict[str, Any]] = None,
) -> CrewOutput:
for before_callback in self.before_kickoff_callbacks:
if inputs is None:
inputs = {}
inputs = before_callback(inputs)
"""Starts the crew to work on its assigned tasks."""
@@ -676,7 +675,6 @@ class Crew(BaseModel):
else:
self.manager_llm = (
getattr(self.manager_llm, "model_name", None)
or getattr(self.manager_llm, "model", None)
or getattr(self.manager_llm, "deployment_name", None)
or self.manager_llm
)
@@ -732,7 +730,7 @@ class Crew(BaseModel):
tools_for_task = task.tools or agent_to_use.tools or []
tools_for_task = self._prepare_tools(agent_to_use, task, tools_for_task)
self._log_task_start(task, agent_to_use.role)
self._log_task_start(task, agent_to_use.role if agent_to_use and agent_to_use.role else "")
if isinstance(task, ConditionalTask):
skipped_task_output = self._handle_conditional_task(
@@ -798,8 +796,8 @@ class Crew(BaseModel):
return None
def _prepare_tools(
self, agent: BaseAgent, task: Task, tools: List[Tool]
) -> List[Tool]:
self, agent: BaseAgent, task: Task, tools: List[Union[Tool, BaseTool]]
) -> List[Union[Tool, BaseTool]]:
# Add delegation tools if agent allows delegation
if agent.allow_delegation:
if self.process == Process.hierarchical:
@@ -828,8 +826,8 @@ class Crew(BaseModel):
return task.agent
def _merge_tools(
self, existing_tools: List[Tool], new_tools: List[Tool]
) -> List[Tool]:
self, existing_tools: List[Union[Tool, BaseTool]], new_tools: List[Union[Tool, BaseTool]]
) -> List[Union[Tool, BaseTool]]:
"""Merge new tools into existing tools list, avoiding duplicates by tool name."""
if not new_tools:
return existing_tools
@@ -995,31 +993,6 @@ class Crew(BaseModel):
return self._knowledge.query(query)
return None
def fetch_inputs(self) -> Set[str]:
"""
Gathers placeholders (e.g., {something}) referenced in tasks or agents.
Scans each task's 'description' + 'expected_output', and each agent's
'role', 'goal', and 'backstory'.
Returns a set of all discovered placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)\}")
required_inputs: Set[str] = set()
# Scan tasks for inputs
for task in self.tasks:
# description and expected_output might contain e.g. {topic}, {user_name}, etc.
text = f"{task.description or ''} {task.expected_output or ''}"
required_inputs.update(placeholder_pattern.findall(text))
# Scan agents for inputs
for agent in self.agents:
# role, goal, backstory might have placeholders like {role_detail}, etc.
text = f"{agent.role or ''} {agent.goal or ''} {agent.backstory or ''}"
required_inputs.update(placeholder_pattern.findall(text))
return required_inputs
def copy(self):
"""Create a deep copy of the Crew."""
@@ -1075,7 +1048,7 @@ class Crew(BaseModel):
def _interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolates the inputs in the tasks and agents."""
[
task.interpolate_inputs_and_add_conversation_history(
task.interpolate_inputs(
# type: ignore # "interpolate_inputs" of "Task" does not return a value (it only ever returns None)
inputs
)

View File

@@ -1,5 +1,5 @@
import json
from typing import Any, Dict, Optional
from typing import Any, Callable, Dict, Optional, Union
from pydantic import BaseModel, Field
@@ -23,14 +23,25 @@ class CrewOutput(BaseModel):
)
token_usage: UsageMetrics = Field(description="Processed token summary", default={})
@property
def json(self) -> Optional[str]:
def json(
self,
*,
include: Union[set[str], None] = None,
exclude: Union[set[str], None] = None,
by_alias: bool = False,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
encoder: Optional[Callable[[Any], Any]] = None,
models_as_dict: bool = True,
**dumps_kwargs: Any,
) -> str:
if self.tasks_output[-1].output_format != OutputFormat.JSON:
raise ValueError(
"No JSON output found in the final task. Please make sure to set the output_json property in the final task in your crew."
)
return json.dumps(self.json_dict)
return json.dumps(self.json_dict, default=encoder, **dumps_kwargs)
def to_dict(self) -> Dict[str, Any]:
"""Convert json_output and pydantic_output to a dictionary."""

View File

@@ -1,5 +1,3 @@
from crewai.flow.flow import Flow, start, listen, or_, and_, router
from crewai.flow.persistence import persist
__all__ = ["Flow", "start", "listen", "or_", "and_", "router", "persist"]
from crewai.flow.flow import Flow
__all__ = ["Flow"]

View File

@@ -1,6 +1,5 @@
import asyncio
import inspect
import logging
from typing import (
Any,
Callable,
@@ -14,10 +13,9 @@ from typing import (
Union,
cast,
)
from uuid import uuid4
from blinker import Signal
from pydantic import BaseModel, Field, ValidationError
from pydantic import BaseModel, ValidationError
from crewai.flow.flow_events import (
FlowFinishedEvent,
@@ -26,70 +24,10 @@ from crewai.flow.flow_events import (
MethodExecutionStartedEvent,
)
from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry
from crewai.utilities.printer import Printer
logger = logging.getLogger(__name__)
class FlowState(BaseModel):
"""Base model for all flow states, ensuring each state has a unique ID."""
id: str = Field(
default_factory=lambda: str(uuid4()),
description="Unique identifier for the flow state",
)
# Type variables with explicit bounds
T = TypeVar(
"T", bound=Union[Dict[str, Any], BaseModel]
) # Generic flow state type parameter
StateT = TypeVar(
"StateT", bound=Union[Dict[str, Any], BaseModel]
) # State validation type parameter
def ensure_state_type(state: Any, expected_type: Type[StateT]) -> StateT:
"""Ensure state matches expected type with proper validation.
Args:
state: State instance to validate
expected_type: Expected type for the state
Returns:
Validated state instance
Raises:
TypeError: If state doesn't match expected type
ValueError: If state validation fails
"""
"""Ensure state matches expected type with proper validation.
Args:
state: State instance to validate
expected_type: Expected type for the state
Returns:
Validated state instance
Raises:
TypeError: If state doesn't match expected type
ValueError: If state validation fails
"""
if expected_type is dict:
if not isinstance(state, dict):
raise TypeError(f"Expected dict, got {type(state).__name__}")
return cast(StateT, state)
if isinstance(expected_type, type) and issubclass(expected_type, BaseModel):
if not isinstance(state, expected_type):
raise TypeError(
f"Expected {expected_type.__name__}, got {type(state).__name__}"
)
return cast(StateT, state)
raise TypeError(f"Invalid expected_type: {expected_type}")
T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])
def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
@@ -133,7 +71,6 @@ def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
>>> def complex_start(self):
... pass
"""
def decorator(func):
func.__is_start_method__ = True
if condition is not None:
@@ -158,7 +95,6 @@ def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
return decorator
def listen(condition: Union[str, dict, Callable]) -> Callable:
"""
Creates a listener that executes when specified conditions are met.
@@ -195,7 +131,6 @@ def listen(condition: Union[str, dict, Callable]) -> Callable:
>>> def handle_completion(self):
... pass
"""
def decorator(func):
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
@@ -260,7 +195,6 @@ def router(condition: Union[str, dict, Callable]) -> Callable:
... return CONTINUE
... return STOP
"""
def decorator(func):
func.__is_router__ = True
if isinstance(condition, str):
@@ -284,7 +218,6 @@ def router(condition: Union[str, dict, Callable]) -> Callable:
return decorator
def or_(*conditions: Union[str, dict, Callable]) -> dict:
"""
Combines multiple conditions with OR logic for flow control.
@@ -387,32 +320,21 @@ class FlowMeta(type):
routers = set()
for attr_name, attr_value in dct.items():
# Check for any flow-related attributes
if (
hasattr(attr_value, "__is_flow_method__")
or hasattr(attr_value, "__is_start_method__")
or hasattr(attr_value, "__trigger_methods__")
or hasattr(attr_value, "__is_router__")
):
# Register start methods
if hasattr(attr_value, "__is_start_method__"):
start_methods.append(attr_name)
# Register listeners and routers
if hasattr(attr_value, "__is_start_method__"):
start_methods.append(attr_name)
if hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
if (
hasattr(attr_value, "__is_router__")
and attr_value.__is_router__
):
routers.add(attr_name)
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
elif hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
if hasattr(attr_value, "__is_router__") and attr_value.__is_router__:
routers.add(attr_name)
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
setattr(cls, "_start_methods", start_methods)
setattr(cls, "_listeners", listeners)
@@ -423,12 +345,7 @@ class FlowMeta(type):
class Flow(Generic[T], metaclass=FlowMeta):
"""Base class for all flows.
Type parameter T must be either Dict[str, Any] or a subclass of BaseModel."""
_telemetry = Telemetry()
_printer = Printer()
_start_methods: List[str] = []
_listeners: Dict[str, tuple[str, List[str]]] = {}
@@ -444,130 +361,30 @@ class Flow(Generic[T], metaclass=FlowMeta):
_FlowGeneric.__name__ = f"{cls.__name__}[{item.__name__}]"
return _FlowGeneric
def __init__(
self,
persistence: Optional[FlowPersistence] = None,
**kwargs: Any,
) -> None:
"""Initialize a new Flow instance.
Args:
persistence: Optional persistence backend for storing flow states
**kwargs: Additional state values to initialize or override
"""
# Initialize basic instance attributes
def __init__(self) -> None:
self._methods: Dict[str, Callable] = {}
self._state: T = self._create_initial_state()
self._method_execution_counts: Dict[str, int] = {}
self._pending_and_listeners: Dict[str, Set[str]] = {}
self._method_outputs: List[Any] = [] # List to store all method outputs
self._persistence: Optional[FlowPersistence] = persistence
# Initialize state with initial values
self._state = self._create_initial_state()
# Apply any additional kwargs
if kwargs:
self._initialize_state(kwargs)
self._telemetry.flow_creation_span(self.__class__.__name__)
# Register all flow-related methods
for method_name in dir(self):
if not method_name.startswith("_"):
method = getattr(self, method_name)
# Check for any flow-related attributes
if (
hasattr(method, "__is_flow_method__")
or hasattr(method, "__is_start_method__")
or hasattr(method, "__trigger_methods__")
or hasattr(method, "__is_router__")
):
# Ensure method is bound to this instance
if not hasattr(method, "__self__"):
method = method.__get__(self, self.__class__)
self._methods[method_name] = method
if callable(getattr(self, method_name)) and not method_name.startswith(
"__"
):
self._methods[method_name] = getattr(self, method_name)
def _create_initial_state(self) -> T:
"""Create and initialize flow state with UUID and default values.
Returns:
New state instance with UUID and default values initialized
Raises:
ValueError: If structured state model lacks 'id' field
TypeError: If state is neither BaseModel nor dictionary
"""
# Handle case where initial_state is None but we have a type parameter
if self.initial_state is None and hasattr(self, "_initial_state_T"):
state_type = getattr(self, "_initial_state_T")
if isinstance(state_type, type):
if issubclass(state_type, FlowState):
# Create instance without id, then set it
instance = state_type()
if not hasattr(instance, "id"):
setattr(instance, "id", str(uuid4()))
return cast(T, instance)
elif issubclass(state_type, BaseModel):
# Create a new type that includes the ID field
class StateWithId(state_type, FlowState): # type: ignore
pass
instance = StateWithId()
if not hasattr(instance, "id"):
setattr(instance, "id", str(uuid4()))
return cast(T, instance)
elif state_type is dict:
return cast(T, {"id": str(uuid4())})
# Handle case where no initial state is provided
return self._initial_state_T() # type: ignore
if self.initial_state is None:
return cast(T, {"id": str(uuid4())})
# Handle case where initial_state is a type (class)
if isinstance(self.initial_state, type):
if issubclass(self.initial_state, FlowState):
return cast(T, self.initial_state()) # Uses model defaults
elif issubclass(self.initial_state, BaseModel):
# Validate that the model has an id field
model_fields = getattr(self.initial_state, "model_fields", None)
if not model_fields or "id" not in model_fields:
raise ValueError("Flow state model must have an 'id' field")
return cast(T, self.initial_state()) # Uses model defaults
elif self.initial_state is dict:
return cast(T, {"id": str(uuid4())})
# Handle dictionary instance case
if isinstance(self.initial_state, dict):
new_state = dict(self.initial_state) # Copy to avoid mutations
if "id" not in new_state:
new_state["id"] = str(uuid4())
return cast(T, new_state)
# Handle BaseModel instance case
if isinstance(self.initial_state, BaseModel):
model = cast(BaseModel, self.initial_state)
if not hasattr(model, "id"):
raise ValueError("Flow state model must have an 'id' field")
# Create new instance with same values to avoid mutations
if hasattr(model, "model_dump"):
# Pydantic v2
state_dict = model.model_dump()
elif hasattr(model, "dict"):
# Pydantic v1
state_dict = model.dict()
else:
# Fallback for other BaseModel implementations
state_dict = {
k: v for k, v in model.__dict__.items() if not k.startswith("_")
}
# Create new instance of the same class
model_class = type(model)
return cast(T, model_class(**state_dict))
raise TypeError(
f"Initial state must be dict or BaseModel, got {type(self.initial_state)}"
)
return {} # type: ignore
elif isinstance(self.initial_state, type):
return self.initial_state()
else:
return self.initial_state
@property
def state(self) -> T:
@@ -578,158 +395,34 @@ class Flow(Generic[T], metaclass=FlowMeta):
"""Returns the list of all outputs from executed methods."""
return self._method_outputs
@property
def flow_id(self) -> str:
"""Returns the unique identifier of this flow instance.
This property provides a consistent way to access the flow's unique identifier
regardless of the underlying state implementation (dict or BaseModel).
Returns:
str: The flow's unique identifier, or an empty string if not found
Note:
This property safely handles both dictionary and BaseModel state types,
returning an empty string if the ID cannot be retrieved rather than raising
an exception.
Example:
```python
flow = MyFlow()
print(f"Current flow ID: {flow.flow_id}") # Safely get flow ID
```
"""
try:
if not hasattr(self, '_state'):
return ""
if isinstance(self._state, dict):
return str(self._state.get("id", ""))
elif isinstance(self._state, BaseModel):
return str(getattr(self._state, "id", ""))
return ""
except (AttributeError, TypeError):
return "" # Safely handle any unexpected attribute access issues
def _initialize_state(self, inputs: Dict[str, Any]) -> None:
"""Initialize or update flow state with new inputs.
Args:
inputs: Dictionary of state values to set/update
Raises:
ValueError: If validation fails for structured state
TypeError: If state is neither BaseModel nor dictionary
"""
if isinstance(self._state, dict):
# For dict states, preserve existing fields unless overridden
current_id = self._state.get("id")
# Only update specified fields
for k, v in inputs.items():
self._state[k] = v
# Ensure ID is preserved or generated
if current_id:
self._state["id"] = current_id
elif "id" not in self._state:
self._state["id"] = str(uuid4())
elif isinstance(self._state, BaseModel):
# For BaseModel states, preserve existing fields unless overridden
if isinstance(self._state, BaseModel):
# Structured state
try:
model = cast(BaseModel, self._state)
# Get current state as dict
if hasattr(model, "model_dump"):
current_state = model.model_dump()
elif hasattr(model, "dict"):
current_state = model.dict()
else:
current_state = {
k: v for k, v in model.__dict__.items() if not k.startswith("_")
}
# Create new state with preserved fields and updates
new_state = {**current_state, **inputs}
def create_model_with_extra_forbid(
base_model: Type[BaseModel],
) -> Type[BaseModel]:
class ModelWithExtraForbid(base_model): # type: ignore
model_config = base_model.model_config.copy()
model_config["extra"] = "forbid"
# Create new instance with merged state
model_class = type(model)
if hasattr(model_class, "model_validate"):
# Pydantic v2
self._state = cast(T, model_class.model_validate(new_state))
elif hasattr(model_class, "parse_obj"):
# Pydantic v1
self._state = cast(T, model_class.parse_obj(new_state))
else:
# Fallback for other BaseModel implementations
self._state = cast(T, model_class(**new_state))
return ModelWithExtraForbid
ModelWithExtraForbid = create_model_with_extra_forbid(
self._state.__class__
)
self._state = cast(
T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs})
)
except ValidationError as e:
raise ValueError(f"Invalid inputs for structured state: {e}") from e
elif isinstance(self._state, dict):
self._state.update(inputs)
else:
raise TypeError("State must be a BaseModel instance or a dictionary.")
def _restore_state(self, stored_state: Dict[str, Any]) -> None:
"""Restore flow state from persistence.
Args:
stored_state: Previously stored state to restore
Raises:
ValueError: If validation fails for structured state
TypeError: If state is neither BaseModel nor dictionary
"""
# When restoring from persistence, use the stored ID
stored_id = stored_state.get("id")
if not stored_id:
raise ValueError("Stored state must have an 'id' field")
if isinstance(self._state, dict):
# For dict states, update all fields from stored state
self._state.clear()
self._state.update(stored_state)
elif isinstance(self._state, BaseModel):
# For BaseModel states, create new instance with stored values
model = cast(BaseModel, self._state)
if hasattr(model, "model_validate"):
# Pydantic v2
self._state = cast(T, type(model).model_validate(stored_state))
elif hasattr(model, "parse_obj"):
# Pydantic v1
self._state = cast(T, type(model).parse_obj(stored_state))
else:
# Fallback for other BaseModel implementations
self._state = cast(T, type(model)(**stored_state))
else:
raise TypeError(f"State must be dict or BaseModel, got {type(self._state)}")
def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
"""Start the flow execution.
Args:
inputs: Optional dictionary containing input values and potentially a state ID to restore
"""
# Handle state restoration if ID is provided in inputs
if inputs and 'id' in inputs and self._persistence is not None:
restore_uuid = inputs['id']
stored_state = self._persistence.load_state(restore_uuid)
# Override the id in the state if it exists in inputs
if 'id' in inputs:
if isinstance(self._state, dict):
self._state['id'] = inputs['id']
elif isinstance(self._state, BaseModel):
setattr(self._state, 'id', inputs['id'])
if stored_state:
self._log_flow_event(f"Loading flow state from memory for UUID: {restore_uuid}", color="yellow")
# Restore the state
self._restore_state(stored_state)
else:
self._log_flow_event(f"No flow state found for UUID: {restore_uuid}", color="red")
# Apply any additional inputs after restoration
filtered_inputs = {k: v for k, v in inputs.items() if k != 'id'}
if filtered_inputs:
self._initialize_state(filtered_inputs)
# Start flow execution
self.event_emitter.send(
self,
event=FlowStartedEvent(
@@ -737,11 +430,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
flow_name=self.__class__.__name__,
),
)
self._log_flow_event(f"Flow started with ID: {self.flow_id}", color="bold_magenta")
if inputs is not None and 'id' not in inputs:
if inputs is not None:
self._initialize_state(inputs)
return asyncio.run(self.kickoff_async())
async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
@@ -984,30 +675,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
traceback.print_exc()
def _log_flow_event(self, message: str, color: str = "yellow", level: str = "info") -> None:
"""Centralized logging method for flow events.
This method provides a consistent interface for logging flow-related events,
combining both console output with colors and proper logging levels.
Args:
message: The message to log
color: Color to use for console output (default: yellow)
Available colors: purple, red, bold_green, bold_purple,
bold_blue, yellow, yellow
level: Log level to use (default: info)
Supported levels: info, warning
Note:
This method uses the Printer utility for colored console output
and the standard logging module for log level support.
"""
self._printer.print(message, color=color)
if level == "info":
logger.info(message)
elif level == "warning":
logger.warning(message)
def plot(self, filename: str = "crewai_flow") -> None:
self._telemetry.flow_plotting_span(
self.__class__.__name__, list(self._methods.keys())

View File

@@ -106,7 +106,12 @@ class FlowPlot:
# Add nodes to the network
try:
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
add_nodes_to_network(
net,
flow=self.flow,
pos=node_positions,
node_styles=self.node_styles
)
except Exception as e:
raise RuntimeError(f"Failed to add nodes to network: {str(e)}")

View File

@@ -1,18 +0,0 @@
"""
CrewAI Flow Persistence.
This module provides interfaces and implementations for persisting flow states.
"""
from typing import Any, Dict, TypeVar, Union
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.persistence.decorators import persist
from crewai.flow.persistence.sqlite import SQLiteFlowPersistence
__all__ = ["FlowPersistence", "persist", "SQLiteFlowPersistence"]
StateType = TypeVar('StateType', bound=Union[Dict[str, Any], BaseModel])
DictStateType = Dict[str, Any]

View File

@@ -1,53 +0,0 @@
"""Base class for flow state persistence."""
import abc
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel
class FlowPersistence(abc.ABC):
"""Abstract base class for flow state persistence.
This class defines the interface that all persistence implementations must follow.
It supports both structured (Pydantic BaseModel) and unstructured (dict) states.
"""
@abc.abstractmethod
def init_db(self) -> None:
"""Initialize the persistence backend.
This method should handle any necessary setup, such as:
- Creating tables
- Establishing connections
- Setting up indexes
"""
pass
@abc.abstractmethod
def save_state(
self,
flow_uuid: str,
method_name: str,
state_data: Union[Dict[str, Any], BaseModel]
) -> None:
"""Persist the flow state after method completion.
Args:
flow_uuid: Unique identifier for the flow instance
method_name: Name of the method that just completed
state_data: Current state data (either dict or Pydantic model)
"""
pass
@abc.abstractmethod
def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
"""Load the most recent state for a given flow UUID.
Args:
flow_uuid: Unique identifier for the flow instance
Returns:
The most recent state as a dictionary, or None if no state exists
"""
pass

View File

@@ -1,252 +0,0 @@
"""
Decorators for flow state persistence.
Example:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist, SQLiteFlowPersistence
class MyFlow(Flow):
@start()
@persist(SQLiteFlowPersistence())
def sync_method(self):
# Synchronous method implementation
pass
@start()
@persist(SQLiteFlowPersistence())
async def async_method(self):
# Asynchronous method implementation
await some_async_operation()
```
"""
import asyncio
import functools
import logging
from typing import (
Any,
Callable,
Optional,
Type,
TypeVar,
Union,
cast,
)
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.persistence.sqlite import SQLiteFlowPersistence
from crewai.utilities.printer import Printer
logger = logging.getLogger(__name__)
T = TypeVar("T")
# Constants for log messages
LOG_MESSAGES = {
"save_state": "Saving flow state to memory for ID: {}",
"save_error": "Failed to persist state for method {}: {}",
"state_missing": "Flow instance has no state",
"id_missing": "Flow state must have an 'id' field for persistence"
}
class PersistenceDecorator:
"""Class to handle flow state persistence with consistent logging."""
_printer = Printer() # Class-level printer instance
@classmethod
def persist_state(cls, flow_instance: Any, method_name: str, persistence_instance: FlowPersistence) -> None:
"""Persist flow state with proper error handling and logging.
This method handles the persistence of flow state data, including proper
error handling and colored console output for status updates.
Args:
flow_instance: The flow instance whose state to persist
method_name: Name of the method that triggered persistence
persistence_instance: The persistence backend to use
Raises:
ValueError: If flow has no state or state lacks an ID
RuntimeError: If state persistence fails
AttributeError: If flow instance lacks required state attributes
"""
try:
state = getattr(flow_instance, 'state', None)
if state is None:
raise ValueError("Flow instance has no state")
flow_uuid: Optional[str] = None
if isinstance(state, dict):
flow_uuid = state.get('id')
elif isinstance(state, BaseModel):
flow_uuid = getattr(state, 'id', None)
if not flow_uuid:
raise ValueError("Flow state must have an 'id' field for persistence")
# Log state saving with consistent message
cls._printer.print(LOG_MESSAGES["save_state"].format(flow_uuid), color="cyan")
logger.info(LOG_MESSAGES["save_state"].format(flow_uuid))
try:
persistence_instance.save_state(
flow_uuid=flow_uuid,
method_name=method_name,
state_data=state,
)
except Exception as e:
error_msg = LOG_MESSAGES["save_error"].format(method_name, str(e))
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise RuntimeError(f"State persistence failed: {str(e)}") from e
except AttributeError:
error_msg = LOG_MESSAGES["state_missing"]
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise ValueError(error_msg)
except (TypeError, ValueError) as e:
error_msg = LOG_MESSAGES["id_missing"]
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise ValueError(error_msg) from e
def persist(persistence: Optional[FlowPersistence] = None):
"""Decorator to persist flow state.
This decorator can be applied at either the class level or method level.
When applied at the class level, it automatically persists all flow method
states. When applied at the method level, it persists only that method's
state.
Args:
persistence: Optional FlowPersistence implementation to use.
If not provided, uses SQLiteFlowPersistence.
Returns:
A decorator that can be applied to either a class or method
Raises:
ValueError: If the flow state doesn't have an 'id' field
RuntimeError: If state persistence fails
Example:
@persist # Class-level persistence with default SQLite
class MyFlow(Flow[MyState]):
@start()
def begin(self):
pass
"""
def decorator(target: Union[Type, Callable[..., T]]) -> Union[Type, Callable[..., T]]:
"""Decorator that handles both class and method decoration."""
actual_persistence = persistence or SQLiteFlowPersistence()
if isinstance(target, type):
# Class decoration
original_init = getattr(target, "__init__")
@functools.wraps(original_init)
def new_init(self: Any, *args: Any, **kwargs: Any) -> None:
if 'persistence' not in kwargs:
kwargs['persistence'] = actual_persistence
original_init(self, *args, **kwargs)
setattr(target, "__init__", new_init)
# Store original methods to preserve their decorators
original_methods = {}
for name, method in target.__dict__.items():
if callable(method) and (
hasattr(method, "__is_start_method__") or
hasattr(method, "__trigger_methods__") or
hasattr(method, "__condition_type__") or
hasattr(method, "__is_flow_method__") or
hasattr(method, "__is_router__")
):
original_methods[name] = method
# Create wrapped versions of the methods that include persistence
for name, method in original_methods.items():
if asyncio.iscoroutinefunction(method):
# Create a closure to capture the current name and method
def create_async_wrapper(method_name: str, original_method: Callable):
@functools.wraps(original_method)
async def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = await original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(self, method_name, actual_persistence)
return result
return method_wrapper
wrapped = create_async_wrapper(name, method)
# Preserve all original decorators and attributes
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(wrapped, attr, getattr(method, attr))
setattr(wrapped, "__is_flow_method__", True)
# Update the class with the wrapped method
setattr(target, name, wrapped)
else:
# Create a closure to capture the current name and method
def create_sync_wrapper(method_name: str, original_method: Callable):
@functools.wraps(original_method)
def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(self, method_name, actual_persistence)
return result
return method_wrapper
wrapped = create_sync_wrapper(name, method)
# Preserve all original decorators and attributes
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(wrapped, attr, getattr(method, attr))
setattr(wrapped, "__is_flow_method__", True)
# Update the class with the wrapped method
setattr(target, name, wrapped)
return target
else:
# Method decoration
method = target
setattr(method, "__is_flow_method__", True)
if asyncio.iscoroutinefunction(method):
@functools.wraps(method)
async def method_async_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
method_coro = method(flow_instance, *args, **kwargs)
if asyncio.iscoroutine(method_coro):
result = await method_coro
else:
result = method_coro
PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence)
return result
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(method_async_wrapper, attr, getattr(method, attr))
setattr(method_async_wrapper, "__is_flow_method__", True)
return cast(Callable[..., T], method_async_wrapper)
else:
@functools.wraps(method)
def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
result = method(flow_instance, *args, **kwargs)
PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence)
return result
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(method_sync_wrapper, attr, getattr(method, attr))
setattr(method_sync_wrapper, "__is_flow_method__", True)
return cast(Callable[..., T], method_sync_wrapper)
return decorator

View File

@@ -1,123 +0,0 @@
"""
SQLite-based implementation of flow state persistence.
"""
import json
import sqlite3
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
class SQLiteFlowPersistence(FlowPersistence):
"""SQLite-based implementation of flow state persistence.
This class provides a simple, file-based persistence implementation using SQLite.
It's suitable for development and testing, or for production use cases with
moderate performance requirements.
"""
db_path: str # Type annotation for instance variable
def __init__(self, db_path: Optional[str] = None):
"""Initialize SQLite persistence.
Args:
db_path: Path to the SQLite database file. If not provided, uses
db_storage_path() from utilities.paths.
Raises:
ValueError: If db_path is invalid
"""
from crewai.utilities.paths import db_storage_path
# Get path from argument or default location
path = db_path or str(Path(db_storage_path()) / "flow_states.db")
if not path:
raise ValueError("Database path must be provided")
self.db_path = path # Now mypy knows this is str
self.init_db()
def init_db(self) -> None:
"""Create the necessary tables if they don't exist."""
with sqlite3.connect(self.db_path) as conn:
conn.execute("""
CREATE TABLE IF NOT EXISTS flow_states (
id INTEGER PRIMARY KEY AUTOINCREMENT,
flow_uuid TEXT NOT NULL,
method_name TEXT NOT NULL,
timestamp DATETIME NOT NULL,
state_json TEXT NOT NULL
)
""")
# Add index for faster UUID lookups
conn.execute("""
CREATE INDEX IF NOT EXISTS idx_flow_states_uuid
ON flow_states(flow_uuid)
""")
def save_state(
self,
flow_uuid: str,
method_name: str,
state_data: Union[Dict[str, Any], BaseModel],
) -> None:
"""Save the current flow state to SQLite.
Args:
flow_uuid: Unique identifier for the flow instance
method_name: Name of the method that just completed
state_data: Current state data (either dict or Pydantic model)
"""
# Convert state_data to dict, handling both Pydantic and dict cases
if isinstance(state_data, BaseModel):
state_dict = dict(state_data) # Use dict() for better type compatibility
elif isinstance(state_data, dict):
state_dict = state_data
else:
raise ValueError(
f"state_data must be either a Pydantic BaseModel or dict, got {type(state_data)}"
)
with sqlite3.connect(self.db_path) as conn:
conn.execute("""
INSERT INTO flow_states (
flow_uuid,
method_name,
timestamp,
state_json
) VALUES (?, ?, ?, ?)
""", (
flow_uuid,
method_name,
datetime.utcnow().isoformat(),
json.dumps(state_dict),
))
def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
"""Load the most recent state for a given flow UUID.
Args:
flow_uuid: Unique identifier for the flow instance
Returns:
The most recent state as a dictionary, or None if no state exists
"""
with sqlite3.connect(self.db_path) as conn:
cursor = conn.execute("""
SELECT state_json
FROM flow_states
WHERE flow_uuid = ?
ORDER BY id DESC
LIMIT 1
""", (flow_uuid,))
row = cursor.fetchone()
if row:
return json.loads(row[0])
return None

View File

@@ -2,17 +2,11 @@ from pathlib import Path
from typing import Iterator, List, Optional, Union
from urllib.parse import urlparse
try:
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
DOCLING_AVAILABLE = True
except ImportError:
DOCLING_AVAILABLE = False
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
from pydantic import Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
@@ -25,22 +19,14 @@ class CrewDoclingSource(BaseKnowledgeSource):
This will auto support PDF, DOCX, and TXT, XLSX, Images, and HTML files without any additional dependencies and follows the docling package as the source of truth.
"""
def __init__(self, *args, **kwargs):
if not DOCLING_AVAILABLE:
raise ImportError(
"The docling package is required to use CrewDoclingSource. "
"Please install it using: uv add docling"
)
super().__init__(*args, **kwargs)
_logger: Logger = Logger(verbose=True)
file_path: Optional[List[Union[Path, str]]] = Field(default=None)
file_paths: List[Union[Path, str]] = Field(default_factory=list)
chunks: List[str] = Field(default_factory=list)
safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
content: List["DoclingDocument"] = Field(default_factory=list)
document_converter: "DocumentConverter" = Field(
content: List[DoclingDocument] = Field(default_factory=list)
document_converter: DocumentConverter = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
InputFormat.MD,
@@ -66,7 +52,7 @@ class CrewDoclingSource(BaseKnowledgeSource):
self.safe_file_paths = self.validate_content()
self.content = self._load_content()
def _load_content(self) -> List["DoclingDocument"]:
def _load_content(self) -> List[DoclingDocument]:
try:
return self._convert_source_to_docling_documents()
except ConversionError as e:
@@ -88,11 +74,11 @@ class CrewDoclingSource(BaseKnowledgeSource):
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def _convert_source_to_docling_documents(self) -> List["DoclingDocument"]:
def _convert_source_to_docling_documents(self) -> List[DoclingDocument]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
return [result.document for result in conv_results_iter]
def _chunk_doc(self, doc: "DoclingDocument") -> Iterator[str]:
def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
chunker = HierarchicalChunker()
for chunk in chunker.chunk(doc):
yield chunk.text

View File

@@ -1,27 +1,22 @@
import json
import logging
import os
import sys
import threading
import warnings
from contextlib import contextmanager
from typing import Any, Dict, List, Optional, Union, cast
from typing import Any, Dict, List, Optional, Union
from dotenv import load_dotenv
from pydantic import BaseModel, Field
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
import litellm
from litellm import Choices, get_supported_openai_params
from litellm.types.utils import ModelResponse
from litellm import get_supported_openai_params
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
load_dotenv()
class FilteredStream:
def __init__(self, original_stream):
@@ -30,7 +25,6 @@ class FilteredStream:
def write(self, s) -> int:
with self._lock:
# Filter out extraneous messages from LiteLLM
if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s
@@ -76,18 +70,6 @@ LLM_CONTEXT_WINDOW_SIZES = {
"mixtral-8x7b-32768": 32768,
"llama-3.3-70b-versatile": 128000,
"llama-3.3-70b-instruct": 128000,
# sambanova
"Meta-Llama-3.3-70B-Instruct": 131072,
"QwQ-32B-Preview": 8192,
"Qwen2.5-72B-Instruct": 8192,
"Qwen2.5-Coder-32B-Instruct": 8192,
"Meta-Llama-3.1-405B-Instruct": 8192,
"Meta-Llama-3.1-70B-Instruct": 131072,
"Meta-Llama-3.1-8B-Instruct": 131072,
"Llama-3.2-90B-Vision-Instruct": 16384,
"Llama-3.2-11B-Vision-Instruct": 16384,
"Meta-Llama-3.2-3B-Instruct": 4096,
"Meta-Llama-3.2-1B-Instruct": 16384,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
@@ -98,26 +80,48 @@ CONTEXT_WINDOW_USAGE_RATIO = 0.75
def suppress_warnings():
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
warnings.filterwarnings(
"ignore", message="open_text is deprecated*", category=DeprecationWarning
)
# Redirect stdout and stderr
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = FilteredStream(old_stdout)
sys.stderr = FilteredStream(old_stderr)
try:
yield
finally:
# Restore stdout and stderr
sys.stdout = old_stdout
sys.stderr = old_stderr
class LLM:
class LLM(BaseModel):
model: str = "gpt-4" # Set default model
timeout: Optional[Union[float, int]] = None
temperature: Optional[float] = None
top_p: Optional[float] = None
n: Optional[int] = None
stop: Optional[Union[str, List[str]]] = None
max_completion_tokens: Optional[int] = None
max_tokens: Optional[int] = None
presence_penalty: Optional[float] = None
frequency_penalty: Optional[float] = None
logit_bias: Optional[Dict[int, float]] = None
response_format: Optional[Dict[str, Any]] = None
seed: Optional[int] = None
logprobs: Optional[bool] = None
top_logprobs: Optional[int] = None
base_url: Optional[str] = None
api_version: Optional[str] = None
api_key: Optional[str] = None
callbacks: Optional[List[Any]] = None
context_window_size: Optional[int] = None
kwargs: Dict[str, Any] = Field(default_factory=dict)
logger: Optional[logging.Logger] = Field(default_factory=lambda: logging.getLogger(__name__))
def __init__(
self,
model: str,
model: Optional[Union[str, 'LLM']] = "gpt-4",
timeout: Optional[Union[float, int]] = None,
temperature: Optional[float] = None,
top_p: Optional[float] = None,
@@ -130,217 +134,422 @@ class LLM:
logit_bias: Optional[Dict[int, float]] = None,
response_format: Optional[Dict[str, Any]] = None,
seed: Optional[int] = None,
logprobs: Optional[int] = None,
logprobs: Optional[bool] = None,
top_logprobs: Optional[int] = None,
base_url: Optional[str] = None,
api_version: Optional[str] = None,
api_key: Optional[str] = None,
callbacks: List[Any] = [],
):
self.model = model
self.timeout = timeout
self.temperature = temperature
self.top_p = top_p
self.n = n
self.max_completion_tokens = max_completion_tokens
self.max_tokens = max_tokens
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.logit_bias = logit_bias
self.response_format = response_format
self.seed = seed
self.logprobs = logprobs
self.top_logprobs = top_logprobs
self.base_url = base_url
self.api_version = api_version
self.api_key = api_key
self.callbacks = callbacks
self.context_window_size = 0
callbacks: Optional[List[Any]] = None,
context_window_size: Optional[int] = None,
**kwargs: Any,
) -> None:
# Initialize with default values
init_dict = {
"model": model if isinstance(model, str) else "gpt-4",
"timeout": timeout,
"temperature": temperature,
"top_p": top_p,
"n": n,
"stop": stop,
"max_completion_tokens": max_completion_tokens,
"max_tokens": max_tokens,
"presence_penalty": presence_penalty,
"frequency_penalty": frequency_penalty,
"logit_bias": logit_bias,
"response_format": response_format,
"seed": seed,
"logprobs": logprobs,
"top_logprobs": top_logprobs,
"base_url": base_url,
"api_version": api_version,
"api_key": api_key,
"callbacks": callbacks,
"context_window_size": context_window_size,
"kwargs": kwargs,
}
super().__init__(**init_dict)
# Initialize model with default value
self.model = "gpt-4" # Default fallback
# Extract and validate model name
if isinstance(model, LLM):
# Extract and validate model name from LLM instance
if hasattr(model, 'model'):
if isinstance(model.model, str):
self.model = model.model
else:
# Try to extract string model name from nested LLM
if isinstance(model.model, LLM):
self.model = str(model.model.model) if hasattr(model.model, 'model') else "gpt-4"
else:
self.model = "gpt-4"
if self.logger:
self.logger.warning("Nested LLM model is not a string, using default: gpt-4")
else:
self.model = "gpt-4"
if self.logger:
self.logger.warning("LLM instance has no model attribute, using default: gpt-4")
else:
# Extract and validate model name for non-LLM instances
if not isinstance(model, str):
if self.logger:
self.logger.debug(f"Model is not a string, attempting to extract name. Type: {type(model)}")
if model is not None:
if hasattr(model, 'model_name'):
model_name = getattr(model, 'model_name', None)
self.model = str(model_name) if model_name is not None else "gpt-4"
elif hasattr(model, 'model'):
model_attr = getattr(model, 'model', None)
self.model = str(model_attr) if model_attr is not None else "gpt-4"
elif hasattr(model, '_model_name'):
model_name = getattr(model, '_model_name', None)
self.model = str(model_name) if model_name is not None else "gpt-4"
else:
self.model = "gpt-4" # Default fallback
if self.logger:
self.logger.warning(f"Could not extract model name from {type(model)}, using default: {self.model}")
else:
self.model = "gpt-4" # Default fallback for None
if self.logger:
self.logger.warning("Model is None, using default: gpt-4")
else:
self.model = str(model) # Ensure it's a string
# If model is an LLM instance, copy its configuration
if isinstance(model, LLM):
# Extract and validate model name first
if hasattr(model, 'model'):
if isinstance(model.model, str):
self.model = model.model
else:
# Try to extract string model name from nested LLM
if isinstance(model.model, LLM):
self.model = str(model.model.model) if hasattr(model.model, 'model') else "gpt-4"
else:
self.model = "gpt-4"
if self.logger:
self.logger.warning("Nested LLM model is not a string, using default: gpt-4")
else:
self.model = "gpt-4"
if self.logger:
self.logger.warning("LLM instance has no model attribute, using default: gpt-4")
# Copy other configuration
self.timeout = model.timeout
self.temperature = model.temperature
self.top_p = model.top_p
self.n = model.n
self.stop = model.stop
self.max_completion_tokens = model.max_completion_tokens
self.max_tokens = model.max_tokens
self.presence_penalty = model.presence_penalty
self.frequency_penalty = model.frequency_penalty
self.logit_bias = model.logit_bias
self.response_format = model.response_format
self.seed = model.seed
self.logprobs = model.logprobs
self.top_logprobs = model.top_logprobs
self.base_url = model.base_url
self.api_version = model.api_version
self.api_key = model.api_key
self.callbacks = model.callbacks
self.context_window_size = model.context_window_size
self.kwargs = model.kwargs
# Final validation of model name
if not isinstance(self.model, str):
self.model = "gpt-4"
if self.logger:
self.logger.warning("Model name is still not a string after LLM copy, using default: gpt-4")
else:
# Extract and validate model name for non-LLM instances
if not isinstance(model, str):
if self.logger:
self.logger.debug(f"Model is not a string, attempting to extract name. Type: {type(model)}")
if model is not None:
if hasattr(model, 'model_name'):
model_name = getattr(model, 'model_name', None)
self.model = str(model_name) if model_name is not None else "gpt-4"
elif hasattr(model, 'model'):
model_attr = getattr(model, 'model', None)
self.model = str(model_attr) if model_attr is not None else "gpt-4"
elif hasattr(model, '_model_name'):
model_name = getattr(model, '_model_name', None)
self.model = str(model_name) if model_name is not None else "gpt-4"
else:
self.model = "gpt-4" # Default fallback
if self.logger:
self.logger.warning(f"Could not extract model name from {type(model)}, using default: {self.model}")
else:
self.model = "gpt-4" # Default fallback for None
if self.logger:
self.logger.warning("Model is None, using default: gpt-4")
else:
self.model = str(model) # Ensure it's a string
# Final validation
if not isinstance(self.model, str):
self.model = "gpt-4"
if self.logger:
self.logger.warning("Model name is still not a string after extraction, using default: gpt-4")
self.timeout = timeout
self.temperature = temperature
self.top_p = top_p
self.n = n
self.stop = stop
self.max_completion_tokens = max_completion_tokens
self.max_tokens = max_tokens
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.logit_bias = logit_bias
self.response_format = response_format
self.seed = seed
self.logprobs = logprobs
self.top_logprobs = top_logprobs
self.base_url = base_url
self.api_version = api_version
self.api_key = api_key
self.callbacks = callbacks
self.context_window_size = 0
self.kwargs = kwargs
# Ensure model is a string after initialization
if not isinstance(self.model, str):
self.model = "gpt-4"
self.logger.warning(f"Model is still not a string after initialization, using default: {self.model}")
litellm.drop_params = True
# Normalize self.stop to always be a List[str]
if stop is None:
self.stop: List[str] = []
elif isinstance(stop, str):
self.stop = [stop]
else:
self.stop = stop
self.set_callbacks(callbacks)
self.set_env_callbacks()
def call(
self,
messages: Union[str, List[Dict[str, str]]],
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
messages: List[Dict[str, str]],
callbacks: Optional[List[Any]] = None
) -> str:
"""
High-level llm call method that:
1) Accepts either a string or a list of messages
2) Converts string input to the required message format
3) Calls litellm.completion
4) Handles function/tool calls if any
5) Returns the final text response or tool result
Parameters:
- messages (Union[str, List[Dict[str, str]]]): The input messages for the LLM.
- If a string is provided, it will be converted into a message list with a single entry.
- If a list of dictionaries is provided, each dictionary should have 'role' and 'content' keys.
- tools (Optional[List[dict]]): A list of tool schemas for function calling.
- callbacks (Optional[List[Any]]): A list of callback functions to be executed.
- available_functions (Optional[Dict[str, Any]]): A dictionary mapping function names to actual Python functions.
Returns:
- str: The final text response from the LLM or the result of a tool function call.
Examples:
---------
# Example 1: Using a string input
response = llm.call("Return the name of a random city in the world.")
print(response)
# Example 2: Using a list of messages
messages = [{"role": "user", "content": "What is the capital of France?"}]
response = llm.call(messages)
print(response)
"""
if isinstance(messages, str):
messages = [{"role": "user", "content": messages}]
with suppress_warnings():
if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks)
# Store original model to restore later
original_model = self.model
try:
# --- 1) Prepare the parameters for the completion call
# Ensure model is a string before making the call
if not isinstance(self.model, str):
if self.logger:
self.logger.warning(f"Model is not a string in call method: {type(self.model)}. Attempting to convert...")
if isinstance(self.model, LLM):
self.model = self.model.model if isinstance(self.model.model, str) else "gpt-4"
elif hasattr(self.model, 'model_name'):
self.model = str(self.model.model_name)
elif hasattr(self.model, 'model'):
if isinstance(self.model.model, str):
self.model = str(self.model.model)
elif hasattr(self.model.model, 'model_name'):
self.model = str(self.model.model.model_name)
else:
self.model = "gpt-4"
if self.logger:
self.logger.warning("Could not extract model name from nested model object, using default: gpt-4")
else:
self.model = "gpt-4"
if self.logger:
self.logger.warning("Could not extract model name, using default: gpt-4")
if self.logger:
self.logger.debug(f"Using model: {self.model} (type: {type(self.model)}) for LiteLLM call")
# Create base params with validated model name
# Extract model name string
model_name = None
if isinstance(self.model, str):
model_name = self.model
elif hasattr(self.model, 'model_name'):
model_name = str(self.model.model_name)
elif hasattr(self.model, 'model'):
if isinstance(self.model.model, str):
model_name = str(self.model.model)
elif hasattr(self.model.model, 'model_name'):
model_name = str(self.model.model.model_name)
if not model_name:
model_name = "gpt-4"
if self.logger:
self.logger.warning("Could not extract model name, using default: gpt-4")
params = {
"model": self.model,
"model": model_name,
"messages": messages,
"timeout": self.timeout,
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
"logit_bias": self.logit_bias,
"response_format": self.response_format,
"seed": self.seed,
"logprobs": self.logprobs,
"top_logprobs": self.top_logprobs,
"stream": False,
"api_key": self.api_key or os.getenv("OPENAI_API_KEY"),
"api_base": self.base_url,
"api_version": self.api_version,
"api_key": self.api_key,
"stream": False,
"tools": tools,
}
if self.logger:
self.logger.debug(f"Using model parameters: {params}")
# Remove None values from params
# Add API configuration if available
api_key = self.api_key or os.getenv("OPENAI_API_KEY")
if api_key:
params["api_key"] = api_key
# Try to get supported parameters for the model
try:
supported_params = get_supported_openai_params(self.model)
optional_params = {}
if supported_params:
param_mapping = {
"timeout": self.timeout,
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
"logit_bias": self.logit_bias,
"response_format": self.response_format,
"seed": self.seed,
"logprobs": self.logprobs,
"top_logprobs": self.top_logprobs
}
# Only add parameters that are supported and not None
optional_params = {
k: v for k, v in param_mapping.items()
if k in supported_params and v is not None
}
if "logprobs" in supported_params and self.logprobs is not None:
optional_params["logprobs"] = self.logprobs
if "top_logprobs" in supported_params and self.top_logprobs is not None:
optional_params["top_logprobs"] = self.top_logprobs
except Exception as e:
if self.logger:
self.logger.error(f"Failed to get supported params for model {self.model}: {str(e)}")
# If we can't get supported params, just add non-None parameters
param_mapping = {
"timeout": self.timeout,
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
"logit_bias": self.logit_bias,
"response_format": self.response_format,
"seed": self.seed,
"logprobs": self.logprobs,
"top_logprobs": self.top_logprobs
}
optional_params = {k: v for k, v in param_mapping.items() if v is not None}
# Update params with optional parameters
params.update(optional_params)
# Add API endpoint configuration if available
if self.base_url:
params["api_base"] = self.base_url
if self.api_version:
params["api_version"] = self.api_version
# Final validation of model parameter
if not isinstance(params["model"], str):
if self.logger:
self.logger.error(f"Model is still not a string after all conversions: {type(params['model'])}")
params["model"] = "gpt-4"
# Update params with non-None optional parameters
params.update({k: v for k, v in optional_params.items() if v is not None})
# Add any additional kwargs
if self.kwargs:
params.update(self.kwargs)
# Remove None values to avoid passing unnecessary parameters
params = {k: v for k, v in params.items() if v is not None}
# --- 2) Make the completion call
response = litellm.completion(**params)
response_message = cast(Choices, cast(ModelResponse, response).choices)[
0
].message
text_response = response_message.content or ""
tool_calls = getattr(response_message, "tool_calls", [])
# --- 3) Handle callbacks with usage info
if callbacks and len(callbacks) > 0:
content = response["choices"][0]["message"]["content"]
# Extract usage metrics
usage = response.get("usage", {})
if callbacks:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if usage_info:
callback.log_success_event(
kwargs=params,
response_obj={"usage": usage_info},
start_time=0,
end_time=0,
)
# --- 4) If no tool calls, return the text response
if not tool_calls or not available_functions:
return text_response
# --- 5) Handle the tool call
tool_call = tool_calls[0]
function_name = tool_call.function.name
if function_name in available_functions:
try:
function_args = json.loads(tool_call.function.arguments)
except json.JSONDecodeError as e:
logging.warning(f"Failed to parse function arguments: {e}")
return text_response
fn = available_functions[function_name]
try:
# Call the actual tool function
result = fn(**function_args)
return result
except Exception as e:
logging.error(
f"Error executing function '{function_name}': {e}"
)
return text_response
else:
logging.warning(
f"Tool call requested unknown function '{function_name}'"
)
return text_response
if hasattr(callback, "update_token_usage"):
callback.update_token_usage(usage)
return content
except Exception as e:
if not LLMContextLengthExceededException(
str(e)
)._is_context_limit_error(str(e)):
logging.error(f"LiteLLM call failed: {str(e)}")
raise
raise # Re-raise the exception after logging
finally:
# Always restore the original model object
self.model = original_model
def supports_function_calling(self) -> bool:
"""Check if the LLM supports function calling.
Returns:
bool: True if the model supports function calling, False otherwise
"""
try:
params = get_supported_openai_params(model=self.model)
return "response_format" in params
except Exception as e:
logging.error(f"Failed to get supported params: {str(e)}")
if self.logger:
self.logger.error(f"Failed to get supported params: {str(e)}")
return False
def supports_stop_words(self) -> bool:
"""Check if the LLM supports stop words.
Returns False if the LLM is not properly initialized."""
if not hasattr(self, 'model') or self.model is None:
return False
try:
params = get_supported_openai_params(model=self.model)
return "stop" in params
except Exception as e:
logging.error(f"Failed to get supported params: {str(e)}")
if self.logger:
self.logger.error(f"Failed to get supported params: {str(e)}")
return False
def get_context_window_size(self) -> int:
"""Get the context window size for the current model.
Returns:
int: The context window size in tokens
"""
Returns the context window size, using 75% of the maximum to avoid
cutting off messages mid-thread.
"""
if self.context_window_size != 0:
return self.context_window_size
# Only using 75% of the context window size to avoid cutting the message in the middle
if self.context_window_size is not None and self.context_window_size != 0:
return int(self.context_window_size)
self.context_window_size = int(
DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
)
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
if self.model.startswith(key):
self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
window_size = DEFAULT_CONTEXT_WINDOW_SIZE
if isinstance(self.model, str):
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
if self.model.startswith(key):
window_size = value
break
self.context_window_size = int(window_size * CONTEXT_WINDOW_USAGE_RATIO)
return self.context_window_size
def set_callbacks(self, callbacks: List[Any]):
def set_callbacks(self, callbacks: Optional[List[Any]] = None) -> None:
"""Set callbacks for the LLM.
Args:
callbacks: Optional list of callback functions. If None, no callbacks will be set.
"""
Attempt to keep a single set of callbacks in litellm by removing old
duplicates and adding new ones.
"""
with suppress_warnings():
if callbacks is not None:
callback_types = [type(callback) for callback in callbacks]
for callback in litellm.success_callback[:]:
if type(callback) in callback_types:
@@ -371,20 +580,19 @@ class LLM:
This will set `litellm.success_callback` to ["langfuse", "langsmith"] and
`litellm.failure_callback` to ["langfuse"].
"""
with suppress_warnings():
success_callbacks_str = os.environ.get("LITELLM_SUCCESS_CALLBACKS", "")
success_callbacks = []
if success_callbacks_str:
success_callbacks = [
cb.strip() for cb in success_callbacks_str.split(",") if cb.strip()
]
success_callbacks_str = os.environ.get("LITELLM_SUCCESS_CALLBACKS", "")
success_callbacks = []
if success_callbacks_str:
success_callbacks = [
callback.strip() for callback in success_callbacks_str.split(",")
]
failure_callbacks_str = os.environ.get("LITELLM_FAILURE_CALLBACKS", "")
failure_callbacks = []
if failure_callbacks_str:
failure_callbacks = [
cb.strip() for cb in failure_callbacks_str.split(",") if cb.strip()
]
failure_callbacks_str = os.environ.get("LITELLM_FAILURE_CALLBACKS", "")
failure_callbacks = []
if failure_callbacks_str:
failure_callbacks = [
callback.strip() for callback in failure_callbacks_str.split(",")
]
litellm.success_callback = success_callbacks
litellm.failure_callback = failure_callbacks
litellm.success_callback = success_callbacks
litellm.failure_callback = failure_callbacks

View File

@@ -1,17 +1,12 @@
import json
import logging
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional
from crewai.task import Task
from crewai.utilities import Printer
from crewai.utilities.crew_json_encoder import CrewJSONEncoder
from crewai.utilities.errors import DatabaseError, DatabaseOperationError
from crewai.utilities.paths import db_storage_path
logger = logging.getLogger(__name__)
class KickoffTaskOutputsSQLiteStorage:
"""
@@ -19,24 +14,15 @@ class KickoffTaskOutputsSQLiteStorage:
"""
def __init__(
self, db_path: Optional[str] = None
self, db_path: str = f"{db_storage_path()}/latest_kickoff_task_outputs.db"
) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "latest_kickoff_task_outputs.db")
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
def _initialize_db(self) -> None:
"""Initialize the SQLite database and create the latest_kickoff_task_outputs table.
This method sets up the database schema for storing task outputs. It creates
a table with columns for task_id, expected_output, output (as JSON),
task_index, inputs (as JSON), was_replayed flag, and timestamp.
Raises:
DatabaseOperationError: If database initialization fails due to SQLite errors.
def _initialize_db(self):
"""
Initializes the SQLite database and creates LTM table
"""
try:
with sqlite3.connect(self.db_path) as conn:
@@ -57,9 +43,10 @@ class KickoffTaskOutputsSQLiteStorage:
conn.commit()
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.INIT_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
self._printer.print(
content=f"SAVING KICKOFF TASK OUTPUTS ERROR: An error occurred during database initialization: {e}",
color="red",
)
def add(
self,
@@ -68,22 +55,9 @@ class KickoffTaskOutputsSQLiteStorage:
task_index: int,
was_replayed: bool = False,
inputs: Dict[str, Any] = {},
) -> None:
"""Add a new task output record to the database.
Args:
task: The Task object containing task details.
output: Dictionary containing the task's output data.
task_index: Integer index of the task in the sequence.
was_replayed: Boolean indicating if this was a replay execution.
inputs: Dictionary of input parameters used for the task.
Raises:
DatabaseOperationError: If saving the task output fails due to SQLite errors.
"""
):
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute(
"""
@@ -102,31 +76,21 @@ class KickoffTaskOutputsSQLiteStorage:
)
conn.commit()
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.SAVE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
self._printer.print(
content=f"SAVING KICKOFF TASK OUTPUTS ERROR: An error occurred during database initialization: {e}",
color="red",
)
def update(
self,
task_index: int,
**kwargs: Any,
) -> None:
"""Update an existing task output record in the database.
Updates fields of a task output record identified by task_index. The fields
to update are provided as keyword arguments.
Args:
task_index: Integer index of the task to update.
**kwargs: Arbitrary keyword arguments representing fields to update.
Values that are dictionaries will be JSON encoded.
Raises:
DatabaseOperationError: If updating the task output fails due to SQLite errors.
**kwargs,
):
"""
Updates an existing row in the latest_kickoff_task_outputs table based on task_index.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
fields = []
@@ -146,23 +110,14 @@ class KickoffTaskOutputsSQLiteStorage:
conn.commit()
if cursor.rowcount == 0:
logger.warning(f"No row found with task_index {task_index}. No update performed.")
self._printer.print(
f"No row found with task_index {task_index}. No update performed.",
color="red",
)
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.UPDATE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
self._printer.print(f"UPDATE KICKOFF TASK OUTPUTS ERROR: {e}", color="red")
def load(self) -> List[Dict[str, Any]]:
"""Load all task output records from the database.
Returns:
List of dictionaries containing task output records, ordered by task_index.
Each dictionary contains: task_id, expected_output, output, task_index,
inputs, was_replayed, and timestamp.
Raises:
DatabaseOperationError: If loading task outputs fails due to SQLite errors.
"""
def load(self) -> Optional[List[Dict[str, Any]]]:
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
@@ -189,26 +144,23 @@ class KickoffTaskOutputsSQLiteStorage:
return results
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.LOAD_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
self._printer.print(
content=f"LOADING KICKOFF TASK OUTPUTS ERROR: An error occurred while querying kickoff task outputs: {e}",
color="red",
)
return None
def delete_all(self) -> None:
"""Delete all task output records from the database.
This method removes all records from the latest_kickoff_task_outputs table.
Use with caution as this operation cannot be undone.
Raises:
DatabaseOperationError: If deleting task outputs fails due to SQLite errors.
def delete_all(self):
"""
Deletes all rows from the latest_kickoff_task_outputs table.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute("DELETE FROM latest_kickoff_task_outputs")
conn.commit()
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.DELETE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
self._printer.print(
content=f"ERROR: Failed to delete all kickoff task outputs: {e}",
color="red",
)

View File

@@ -1,6 +1,5 @@
import json
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from crewai.utilities import Printer
@@ -13,15 +12,10 @@ class LTMSQLiteStorage:
"""
def __init__(
self, db_path: Optional[str] = None
self, db_path: str = f"{db_storage_path()}/long_term_memory_storage.db"
) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "long_term_memory_storage.db")
self.db_path = db_path
self._printer: Printer = Printer()
# Ensure parent directory exists
Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
self._initialize_db()
def _initialize_db(self):

View File

@@ -27,18 +27,10 @@ class Mem0Storage(Storage):
raise ValueError("User ID is required for user memory type")
# API key in memory config overrides the environment variable
config = self.memory_config.get("config", {})
mem0_api_key = config.get("api_key") or os.getenv("MEM0_API_KEY")
mem0_org_id = config.get("org_id")
mem0_project_id = config.get("project_id")
# Initialize MemoryClient with available parameters
if mem0_org_id and mem0_project_id:
self.memory = MemoryClient(
api_key=mem0_api_key, org_id=mem0_org_id, project_id=mem0_project_id
)
else:
self.memory = MemoryClient(api_key=mem0_api_key)
mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
"MEM0_API_KEY"
)
self.memory = MemoryClient(api_key=mem0_api_key)
def _sanitize_role(self, role: str) -> str:
"""
@@ -65,7 +57,7 @@ class Mem0Storage(Storage):
metadata={"type": "long_term", **metadata},
)
elif self.memory_type == "entities":
entity_name = self._get_agent_name()
entity_name = None
self.memory.add(
value, user_id=entity_name, metadata={"type": "entity", **metadata}
)

View File

@@ -4,23 +4,18 @@ from typing import Callable
from crewai import Crew
from crewai.project.utils import memoize
"""Decorators for defining crew components and their behaviors."""
def before_kickoff(func):
"""Marks a method to execute before crew kickoff."""
func.is_before_kickoff = True
return func
def after_kickoff(func):
"""Marks a method to execute after crew kickoff."""
func.is_after_kickoff = True
return func
def task(func):
"""Marks a method as a crew task."""
func.is_task = True
@wraps(func)
@@ -34,51 +29,43 @@ def task(func):
def agent(func):
"""Marks a method as a crew agent."""
func.is_agent = True
func = memoize(func)
return func
def llm(func):
"""Marks a method as an LLM provider."""
func.is_llm = True
func = memoize(func)
return func
def output_json(cls):
"""Marks a class as JSON output format."""
cls.is_output_json = True
return cls
def output_pydantic(cls):
"""Marks a class as Pydantic output format."""
cls.is_output_pydantic = True
return cls
def tool(func):
"""Marks a method as a crew tool."""
func.is_tool = True
return memoize(func)
def callback(func):
"""Marks a method as a crew callback."""
func.is_callback = True
return memoize(func)
def cache_handler(func):
"""Marks a method as a cache handler."""
func.is_cache_handler = True
return memoize(func)
def crew(func) -> Callable[..., Crew]:
"""Marks a method as the main crew execution point."""
@wraps(func)
def wrapper(self, *args, **kwargs) -> Crew:

View File

@@ -1,5 +1,4 @@
import inspect
import logging
from pathlib import Path
from typing import Any, Callable, Dict, TypeVar, cast
@@ -8,16 +7,10 @@ from dotenv import load_dotenv
load_dotenv()
logging.basicConfig(level=logging.WARNING)
T = TypeVar("T", bound=type)
"""Base decorator for creating crew classes with configuration and function management."""
def CrewBase(cls: T) -> T:
"""Wraps a class with crew functionality and configuration management."""
class WrappedClass(cls): # type: ignore
is_crew_class: bool = True # type: ignore
@@ -31,9 +24,16 @@ def CrewBase(cls: T) -> T:
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.load_configurations()
agents_config_path = self.base_directory / self.original_agents_config_path
tasks_config_path = self.base_directory / self.original_tasks_config_path
self.agents_config = self.load_yaml(agents_config_path)
self.tasks_config = self.load_yaml(tasks_config_path)
self.map_all_agent_variables()
self.map_all_task_variables()
# Preserve all decorated functions
self._original_functions = {
name: method
@@ -49,6 +49,7 @@ def CrewBase(cls: T) -> T:
]
)
}
# Store specific function types
self._original_tasks = self._filter_functions(
self._original_functions, "is_task"
@@ -66,44 +67,6 @@ def CrewBase(cls: T) -> T:
self._original_functions, "is_kickoff"
)
def load_configurations(self):
"""Load agent and task configurations from YAML files."""
if isinstance(self.original_agents_config_path, str):
agents_config_path = (
self.base_directory / self.original_agents_config_path
)
try:
self.agents_config = self.load_yaml(agents_config_path)
except FileNotFoundError:
logging.warning(
f"Agent config file not found at {agents_config_path}. "
"Proceeding with empty agent configurations."
)
self.agents_config = {}
else:
logging.warning(
"No agent configuration path provided. Proceeding with empty agent configurations."
)
self.agents_config = {}
if isinstance(self.original_tasks_config_path, str):
tasks_config_path = (
self.base_directory / self.original_tasks_config_path
)
try:
self.tasks_config = self.load_yaml(tasks_config_path)
except FileNotFoundError:
logging.warning(
f"Task config file not found at {tasks_config_path}. "
"Proceeding with empty task configurations."
)
self.tasks_config = {}
else:
logging.warning(
"No task configuration path provided. Proceeding with empty task configurations."
)
self.tasks_config = {}
@staticmethod
def load_yaml(config_path: Path):
try:
@@ -253,5 +216,5 @@ def CrewBase(cls: T) -> T:
# Include base class (qual)name in the wrapper class (qual)name.
WrappedClass.__name__ = CrewBase.__name__ + "(" + cls.__name__ + ")"
WrappedClass.__qualname__ = CrewBase.__qualname__ + "(" + cls.__name__ + ")"
return cast(T, WrappedClass)

View File

@@ -41,7 +41,6 @@ from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
class Task(BaseModel):
@@ -128,40 +127,38 @@ class Task(BaseModel):
processed_by_agents: Set[str] = Field(default_factory=set)
guardrail: Optional[Callable[[TaskOutput], Tuple[bool, Any]]] = Field(
default=None,
description="Function to validate task output before proceeding to next task",
description="Function to validate task output before proceeding to next task"
)
max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
default=3,
description="Maximum number of retries when guardrail fails"
)
retry_count: int = Field(default=0, description="Current number of retries")
start_time: Optional[datetime.datetime] = Field(
default=None, description="Start time of the task execution"
)
end_time: Optional[datetime.datetime] = Field(
default=None, description="End time of the task execution"
retry_count: int = Field(
default=0,
description="Current number of retries"
)
@field_validator("guardrail")
@classmethod
def validate_guardrail_function(cls, v: Optional[Callable]) -> Optional[Callable]:
"""Validate that the guardrail function has the correct signature and behavior.
While type hints provide static checking, this validator ensures runtime safety by:
1. Verifying the function accepts exactly one parameter (the TaskOutput)
2. Checking return type annotations match Tuple[bool, Any] if present
3. Providing clear, immediate error messages for debugging
This runtime validation is crucial because:
- Type hints are optional and can be ignored at runtime
- Function signatures need immediate validation before task execution
- Clear error messages help users debug guardrail implementation issues
Args:
v: The guardrail function to validate
Returns:
The validated guardrail function
Raises:
ValueError: If the function signature is invalid or return annotation
doesn't match Tuple[bool, Any]
@@ -174,13 +171,8 @@ class Task(BaseModel):
# Check return annotation if present, but don't require it
return_annotation = sig.return_annotation
if return_annotation != inspect.Signature.empty:
if not (
return_annotation == Tuple[bool, Any]
or str(return_annotation) == "Tuple[bool, Any]"
):
raise ValueError(
"If return type is annotated, it must be Tuple[bool, Any]"
)
if not (return_annotation == Tuple[bool, Any] or str(return_annotation) == 'Tuple[bool, Any]'):
raise ValueError("If return type is annotated, it must be Tuple[bool, Any]")
return v
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
@@ -189,6 +181,7 @@ class Task(BaseModel):
_original_expected_output: Optional[str] = PrivateAttr(default=None)
_original_output_file: Optional[str] = PrivateAttr(default=None)
_thread: Optional[threading.Thread] = PrivateAttr(default=None)
_execution_time: Optional[float] = PrivateAttr(default=None)
@model_validator(mode="before")
@classmethod
@@ -213,19 +206,25 @@ class Task(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {}
)
def _set_start_execution_time(self) -> float:
return datetime.datetime.now().timestamp()
def _set_end_execution_time(self, start_time: float) -> None:
self._execution_time = datetime.datetime.now().timestamp() - start_time
@field_validator("output_file")
@classmethod
def output_file_validation(cls, value: Optional[str]) -> Optional[str]:
"""Validate the output file path.
Args:
value: The output file path to validate. Can be None or a string.
If the path contains template variables (e.g. {var}), leading slashes are preserved.
For regular paths, leading slashes are stripped.
Returns:
The validated and potentially modified path, or None if no path was provided.
Raises:
ValueError: If the path contains invalid characters, path traversal attempts,
or other security concerns.
@@ -235,24 +234,18 @@ class Task(BaseModel):
# Basic security checks
if ".." in value:
raise ValueError(
"Path traversal attempts are not allowed in output_file paths"
)
raise ValueError("Path traversal attempts are not allowed in output_file paths")
# Check for shell expansion first
if value.startswith("~") or value.startswith("$"):
raise ValueError(
"Shell expansion characters are not allowed in output_file paths"
)
if value.startswith('~') or value.startswith('$'):
raise ValueError("Shell expansion characters are not allowed in output_file paths")
# Then check other shell special characters
if any(char in value for char in ["|", ">", "<", "&", ";"]):
raise ValueError(
"Shell special characters are not allowed in output_file paths"
)
if any(char in value for char in ['|', '>', '<', '&', ';']):
raise ValueError("Shell special characters are not allowed in output_file paths")
# Don't strip leading slash if it's a template path with variables
if "{" in value or "}" in value:
if "{" in value or "}" in value:
# Validate template variable format
template_vars = [part.split("}")[0] for part in value.split("{")[1:]]
for var in template_vars:
@@ -276,7 +269,9 @@ class Task(BaseModel):
@model_validator(mode="after")
def check_tools(self):
"""Check if the tools are set."""
if not self.tools and self.agent and self.agent.tools:
if self.agent and self.agent.tools:
if self.tools is None:
self.tools = []
self.tools.extend(self.agent.tools)
return self
@@ -309,12 +304,6 @@ class Task(BaseModel):
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
@property
def execution_duration(self) -> float | None:
if not self.start_time or not self.end_time:
return None
return (self.end_time - self.start_time).total_seconds()
def execute_async(
self,
agent: BaseAgent | None = None,
@@ -355,13 +344,14 @@ class Task(BaseModel):
f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical."
)
self.start_time = datetime.datetime.now()
start_time = self._set_start_execution_time()
self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self)
self.prompt_context = context
tools = tools or self.tools or []
self.processed_by_agents.add(agent.role)
if agent and agent.role:
self.processed_by_agents.add(agent.role)
result = agent.execute_task(
task=self,
@@ -391,14 +381,10 @@ class Task(BaseModel):
)
self.retry_count += 1
context = self.i18n.errors("validation_error").format(
guardrail_result_error=guardrail_result.error,
task_output=task_output.raw,
)
printer = Printer()
printer.print(
content=f"Guardrail blocked, retrying, due to: {guardrail_result.error}\n",
color="yellow",
context = (
f"### Previous attempt failed validation: {guardrail_result.error}\n\n\n"
f"### Previous result:\n{task_output.raw}\n\n\n"
"Try again, making sure to address the validation error."
)
return self._execute_core(agent, context, tools)
@@ -409,17 +395,15 @@ class Task(BaseModel):
if isinstance(guardrail_result.result, str):
task_output.raw = guardrail_result.result
pydantic_output, json_output = self._export_output(
guardrail_result.result
)
pydantic_output, json_output = self._export_output(guardrail_result.result)
task_output.pydantic = pydantic_output
task_output.json_dict = json_output
elif isinstance(guardrail_result.result, TaskOutput):
task_output = guardrail_result.result
self.output = task_output
self.end_time = datetime.datetime.now()
self._set_end_execution_time(start_time)
if self.callback:
self.callback(self.output)
@@ -451,16 +435,13 @@ class Task(BaseModel):
tasks_slices = [self.description, output]
return "\n".join(tasks_slices)
def interpolate_inputs_and_add_conversation_history(
self, inputs: Dict[str, Union[str, int, float]]
) -> None:
def interpolate_inputs(self, inputs: Dict[str, Union[str, int, float]]) -> None:
"""Interpolate inputs into the task description, expected output, and output file path.
Add conversation history if present.
Args:
inputs: Dictionary mapping template variables to their values.
Supported value types are strings, integers, and floats.
Raises:
ValueError: If a required template variable is missing from inputs.
"""
@@ -477,9 +458,7 @@ class Task(BaseModel):
try:
self.description = self._original_description.format(**inputs)
except KeyError as e:
raise ValueError(
f"Missing required template variable '{e.args[0]}' in description"
) from e
raise ValueError(f"Missing required template variable '{e.args[0]}' in description") from e
except ValueError as e:
raise ValueError(f"Error interpolating description: {str(e)}") from e
@@ -496,49 +475,22 @@ class Task(BaseModel):
input_string=self._original_output_file, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(
f"Error interpolating output_file path: {str(e)}"
) from e
raise ValueError(f"Error interpolating output_file path: {str(e)}") from e
if "crew_chat_messages" in inputs and inputs["crew_chat_messages"]:
conversation_instruction = self.i18n.slice(
"conversation_history_instruction"
)
crew_chat_messages_json = str(inputs["crew_chat_messages"])
try:
crew_chat_messages = json.loads(crew_chat_messages_json)
except json.JSONDecodeError as e:
print("An error occurred while parsing crew chat messages:", e)
raise
conversation_history = "\n".join(
f"{msg['role'].capitalize()}: {msg['content']}"
for msg in crew_chat_messages
if isinstance(msg, dict) and "role" in msg and "content" in msg
)
self.description += (
f"\n\n{conversation_instruction}\n\n{conversation_history}"
)
def interpolate_only(
self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]
) -> str:
def interpolate_only(self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]) -> str:
"""Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched.
Args:
input_string: The string containing template variables to interpolate.
Can be None or empty, in which case an empty string is returned.
inputs: Dictionary mapping template variables to their values.
Supported value types are strings, integers, and floats.
If input_string is empty or has no placeholders, inputs can be empty.
Returns:
The interpolated string with all template variables replaced with their values.
Empty string if input_string is None or empty.
Raises:
ValueError: If a required template variable is missing from inputs.
KeyError: If a template variable is not found in the inputs dictionary.
@@ -548,17 +500,13 @@ class Task(BaseModel):
if "{" not in input_string and "}" not in input_string:
return input_string
if not inputs:
raise ValueError(
"Inputs dictionary cannot be empty when interpolating variables"
)
raise ValueError("Inputs dictionary cannot be empty when interpolating variables")
try:
# Validate input types
for key, value in inputs.items():
if not isinstance(value, (str, int, float)):
raise ValueError(
f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}"
)
raise ValueError(f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}")
escaped_string = input_string.replace("{", "{{").replace("}", "}}")
@@ -567,9 +515,7 @@ class Task(BaseModel):
return escaped_string.format(**inputs)
except KeyError as e:
raise KeyError(
f"Template variable '{e.args[0]}' not found in inputs dictionary"
) from e
raise KeyError(f"Template variable '{e.args[0]}' not found in inputs dictionary") from e
except ValueError as e:
raise ValueError(f"Error during string interpolation: {str(e)}") from e
@@ -654,10 +600,10 @@ class Task(BaseModel):
def _save_file(self, result: Any) -> None:
"""Save task output to a file.
Args:
result: The result to save to the file. Can be a dict or any stringifiable object.
Raises:
ValueError: If output_file is not set
RuntimeError: If there is an error writing to the file
@@ -675,7 +621,6 @@ class Task(BaseModel):
with resolved_path.open("w", encoding="utf-8") as file:
if isinstance(result, dict):
import json
json.dump(result, file, ensure_ascii=False, indent=2)
else:
file.write(str(result))

View File

@@ -1,5 +1,5 @@
import json
from typing import Any, Dict, Optional
from typing import Any, Callable, Dict, Optional, Union
from pydantic import BaseModel, Field, model_validator
@@ -34,8 +34,19 @@ class TaskOutput(BaseModel):
self.summary = f"{excerpt}..."
return self
@property
def json(self) -> Optional[str]:
def json(
self,
*,
include: Union[set[str], None] = None,
exclude: Union[set[str], None] = None,
by_alias: bool = False,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
encoder: Optional[Callable[[Any], Any]] = None,
models_as_dict: bool = True,
**dumps_kwargs: Any,
) -> str:
if self.output_format != OutputFormat.JSON:
raise ValueError(
"""
@@ -45,7 +56,7 @@ class TaskOutput(BaseModel):
"""
)
return json.dumps(self.json_dict)
return json.dumps(self.json_dict, default=encoder, **dumps_kwargs)
def to_dict(self) -> Dict[str, Any]:
"""Convert json_output and pydantic_output to a dictionary."""

View File

@@ -1,5 +1,5 @@
import logging
from typing import Optional
from typing import Optional, Union
from pydantic import Field
@@ -19,13 +19,13 @@ class BaseAgentTool(BaseTool):
default_factory=I18N, description="Internationalization settings"
)
def sanitize_agent_name(self, name: str) -> str:
def sanitize_agent_name(self, name: Optional[str]) -> str:
"""
Sanitize agent role name by normalizing whitespace and setting to lowercase.
Converts all whitespace (including newlines) to single spaces and removes quotes.
Args:
name (str): The agent role name to sanitize
name (Optional[str]): The agent role name to sanitize
Returns:
str: The sanitized agent role name, with whitespace normalized,
@@ -54,12 +54,12 @@ class BaseAgentTool(BaseTool):
) -> str:
"""
Execute delegation to an agent with case-insensitive and whitespace-tolerant matching.
Args:
agent_name: Name/role of the agent to delegate to (case-insensitive)
task: The specific question or task to delegate
context: Optional additional context for the task execution
Returns:
str: The execution result from the delegated agent or an error message
if the agent cannot be found

View File

@@ -1,23 +1,12 @@
import warnings
from abc import ABC, abstractmethod
from inspect import signature
from typing import Any, Callable, Type, get_args, get_origin
from pydantic import (
BaseModel,
ConfigDict,
Field,
PydanticDeprecatedSince20,
create_model,
validator,
)
from pydantic import BaseModel, ConfigDict, Field, create_model, validator
from pydantic import BaseModel as PydanticBaseModel
from crewai.tools.structured_tool import CrewStructuredTool
# Ignore all "PydanticDeprecatedSince20" warnings globally
warnings.filterwarnings("ignore", category=PydanticDeprecatedSince20)
class BaseTool(BaseModel, ABC):
class _ArgsSchemaPlaceholder(PydanticBaseModel):

View File

@@ -142,7 +142,12 @@ class CrewStructuredTool:
# Create model
schema_name = f"{name.title()}Schema"
return create_model(schema_name, **fields)
return create_model(
schema_name,
__base__=BaseModel,
__config__=None,
**{k: v for k, v in fields.items()}
)
def _validate_function_signature(self) -> None:
"""Validate that the function signature matches the args schema."""
@@ -170,7 +175,7 @@ class CrewStructuredTool:
f"not found in args_schema"
)
def _parse_args(self, raw_args: Union[str, dict]) -> dict:
def _parse_args(self, raw_args: Union[str, dict[str, Any]]) -> dict[str, Any]:
"""Parse and validate the input arguments against the schema.
Args:
@@ -178,6 +183,9 @@ class CrewStructuredTool:
Returns:
The validated arguments as a dictionary
Raises:
ValueError: If the arguments cannot be parsed or fail validation
"""
if isinstance(raw_args, str):
try:
@@ -195,8 +203,8 @@ class CrewStructuredTool:
async def ainvoke(
self,
input: Union[str, dict],
config: Optional[dict] = None,
input: Union[str, dict[str, Any]],
config: Optional[dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Asynchronously invoke the tool.
@@ -229,7 +237,10 @@ class CrewStructuredTool:
return self.invoke(input_dict)
def invoke(
self, input: Union[str, dict], config: Optional[dict] = None, **kwargs: Any
self,
input: Union[str, dict[str, Any]],
config: Optional[dict[str, Any]] = None,
**kwargs: Any
) -> Any:
"""Main method for tool execution."""
parsed_args = self._parse_args(input)

View File

@@ -1,14 +1,9 @@
import ast
import datetime
import json
import time
from difflib import SequenceMatcher
from json import JSONDecodeError
from textwrap import dedent
from typing import Any, Dict, List, Optional, Union
import json5
from json_repair import repair_json
from typing import Any, List, Union
import crewai.utilities.events as events
from crewai.agents.tools_handler import ToolsHandler
@@ -24,15 +19,7 @@ try:
import agentops # type: ignore
except ImportError:
agentops = None
OPENAI_BIGGER_MODELS = [
"gpt-4",
"gpt-4o",
"o1-preview",
"o1-mini",
"o1",
"o3",
"o3-mini",
]
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini", "o1", "o3", "o3-mini"]
class ToolUsageErrorException(Exception):
@@ -93,7 +80,7 @@ class ToolUsage:
self._max_parsing_attempts = 2
self._remember_format_after_usages = 4
def parse_tool_calling(self, tool_string: str):
def parse(self, tool_string: str):
"""Parse the tool string and return the tool calling."""
return self._tool_calling(tool_string)
@@ -107,6 +94,7 @@ class ToolUsage:
self.task.increment_tools_errors()
return error
# BUG? The code below seems to be unreachable
try:
tool = self._select_tool(calling.tool_name)
except Exception as e:
@@ -128,7 +116,7 @@ class ToolUsage:
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}"
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
def _use(
self,
@@ -181,7 +169,7 @@ class ToolUsage:
if calling.arguments:
try:
acceptable_args = tool.args_schema.model_json_schema()["properties"].keys() # type: ignore
acceptable_args = tool.args_schema.schema()["properties"].keys() # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "schema"
arguments = {
k: v
for k, v in calling.arguments.items()
@@ -361,13 +349,13 @@ class ToolUsage:
tool_name = self.action.tool
tool = self._select_tool(tool_name)
try:
arguments = self._validate_tool_input(self.action.tool_input)
tool_input = self._validate_tool_input(self.action.tool_input)
arguments = ast.literal_eval(tool_input)
except Exception:
if raise_error:
raise
else:
return ToolUsageErrorException(
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
@@ -375,14 +363,14 @@ class ToolUsage:
if raise_error:
raise
else:
return ToolUsageErrorException(
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
return ToolCalling(
tool_name=tool.name,
arguments=arguments,
log=tool_string,
log=tool_string, # type: ignore
)
def _tool_calling(
@@ -408,55 +396,57 @@ class ToolUsage:
)
return self._tool_calling(tool_string)
def _validate_tool_input(self, tool_input: Optional[str]) -> Dict[str, Any]:
if tool_input is None:
return {}
if not isinstance(tool_input, str) or not tool_input.strip():
raise Exception(
"Tool input must be a valid dictionary in JSON or Python literal format"
)
# Attempt 1: Parse as JSON
def _validate_tool_input(self, tool_input: str) -> str:
try:
arguments = json.loads(tool_input)
if isinstance(arguments, dict):
return arguments
except (JSONDecodeError, TypeError):
pass # Continue to the next parsing attempt
ast.literal_eval(tool_input)
return tool_input
except Exception:
# Clean and ensure the string is properly enclosed in braces
tool_input = tool_input.strip()
if not tool_input.startswith("{"):
tool_input = "{" + tool_input
if not tool_input.endswith("}"):
tool_input += "}"
# Attempt 2: Parse as Python literal
try:
arguments = ast.literal_eval(tool_input)
if isinstance(arguments, dict):
return arguments
except (ValueError, SyntaxError):
pass # Continue to the next parsing attempt
# Manually split the input into key-value pairs
entries = tool_input.strip("{} ").split(",")
formatted_entries = []
# Attempt 3: Parse as JSON5
try:
arguments = json5.loads(tool_input)
if isinstance(arguments, dict):
return arguments
except (JSONDecodeError, ValueError, TypeError):
pass # Continue to the next parsing attempt
for entry in entries:
if ":" not in entry:
continue # Skip malformed entries
key, value = entry.split(":", 1)
# Attempt 4: Repair JSON
try:
repaired_input = repair_json(tool_input)
self._printer.print(
content=f"Repaired JSON: {repaired_input}", color="blue"
)
arguments = json.loads(repaired_input)
if isinstance(arguments, dict):
return arguments
except Exception as e:
self._printer.print(content=f"Failed to repair JSON: {e}", color="red")
# Remove extraneous white spaces and quotes, replace single quotes
key = key.strip().strip('"').replace("'", '"')
value = value.strip()
# If all parsing attempts fail, raise an error
raise Exception(
"Tool input must be a valid dictionary in JSON or Python literal format"
)
# Handle replacement of single quotes at the start and end of the value string
if value.startswith("'") and value.endswith("'"):
value = value[1:-1] # Remove single quotes
value = (
'"' + value.replace('"', '\\"') + '"'
) # Re-encapsulate with double quotes
elif value.isdigit(): # Check if value is a digit, hence integer
value = value
elif value.lower() in [
"true",
"false",
]: # Check for boolean and null values
value = value.lower().capitalize()
elif value.lower() == "null":
value = "None"
else:
# Assume the value is a string and needs quotes
value = '"' + value.replace('"', '\\"') + '"'
# Rebuild the entry with proper quoting
formatted_entry = f'"{key}": {value}'
formatted_entries.append(formatted_entry)
# Reconstruct the JSON string
new_json_string = "{" + ", ".join(formatted_entries) + "}"
return new_json_string
def on_tool_error(self, tool: Any, tool_calling: ToolCalling, e: Exception) -> None:
event_data = self._prepare_event_data(tool, tool_calling)

View File

@@ -9,11 +9,11 @@
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
"no_tools": "\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. When responding, I must use the following format:\n\n```\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\n```\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n```",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nHere is the expected format I must follow:\n\n```\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n This Thought/Action/Action Input/Result process can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
@@ -23,11 +23,10 @@
"summary": "This is a summary of our conversation so far:\n{merged_summary}",
"manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.",
"formatted_task_instructions": "Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.",
"human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\"",
"conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals."
"human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\""
},
"errors": {
"force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}",
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
@@ -35,15 +34,14 @@
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}",
"agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}",
"validation_error": "### Previous attempt failed validation: {guardrail_result_error}\n\n\n### Previous result:\n{task_output}\n\n\nTry again, making sure to address the validation error."
"agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}"
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.",
"add_image": {
"name": "Add image to content",
"description": "See image to understand its content, you can optionally ask a question about the image",
"description": "See image to understand it's content, you can optionally ask a question about the image",
"default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
}
}

View File

@@ -1,40 +0,0 @@
from typing import List
from pydantic import BaseModel, Field
class ChatInputField(BaseModel):
"""
Represents a single required input for the crew, with a name and short description.
Example:
{
"name": "topic",
"description": "The topic to focus on for the conversation"
}
"""
name: str = Field(..., description="The name of the input field")
description: str = Field(..., description="A short description of the input field")
class ChatInputs(BaseModel):
"""
Holds a high-level crew_description plus a list of ChatInputFields.
Example:
{
"crew_name": "topic-based-qa",
"crew_description": "Use this crew for topic-based Q&A",
"inputs": [
{"name": "topic", "description": "The topic to focus on"},
{"name": "username", "description": "Name of the user"},
]
}
"""
crew_name: str = Field(..., description="The name of the crew")
crew_description: str = Field(
..., description="A description of the crew's purpose"
)
inputs: List[ChatInputField] = Field(
default_factory=list, description="A list of input fields for the crew"
)

View File

@@ -26,24 +26,17 @@ class Converter(OutputConverter):
if self.llm.supports_function_calling():
return self._create_instructor().to_pydantic()
else:
response = self.llm.call(
return self.llm.call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
return self.model.model_validate_json(response)
except ValidationError as e:
if current_attempt < self.max_attempts:
return self.to_pydantic(current_attempt + 1)
raise ConverterError(
f"Failed to convert text into a Pydantic model due to the following validation error: {e}"
)
except Exception as e:
if current_attempt < self.max_attempts:
return self.to_pydantic(current_attempt + 1)
raise ConverterError(
f"Failed to convert text into a Pydantic model due to the following error: {e}"
return ConverterError(
f"Failed to convert text into a pydantic model due to the following error: {e}"
)
def to_json(self, current_attempt=1):
@@ -73,6 +66,7 @@ class Converter(OutputConverter):
llm=self.llm,
model=self.model,
content=self.text,
instructions=self.instructions,
)
return inst
@@ -193,15 +187,10 @@ def convert_with_instructions(
def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
instructions = "Please convert the following text into valid JSON."
instructions = "I'm gonna convert this raw text into valid JSON."
if llm.supports_function_calling():
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions += (
f"\n\nThe JSON should follow this schema:\n```json\n{model_schema}\n```"
)
else:
model_description = generate_model_description(model)
instructions += f"\n\nThe JSON should follow this format:\n{model_description}"
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
return instructions
@@ -241,13 +230,9 @@ def generate_model_description(model: Type[BaseModel]) -> str:
origin = get_origin(field_type)
args = get_args(field_type)
if origin is Union or (origin is None and len(args) > 0):
# Handle both Union and the new '|' syntax
if origin is Union and type(None) in args:
non_none_args = [arg for arg in args if arg is not type(None)]
if len(non_none_args) == 1:
return f"Optional[{describe_field(non_none_args[0])}]"
else:
return f"Optional[Union[{', '.join(describe_field(arg) for arg in non_none_args)}]]"
return f"Optional[{describe_field(non_none_args[0])}]"
elif origin is list:
return f"List[{describe_field(args[0])}]"
elif origin is dict:
@@ -256,10 +241,8 @@ def generate_model_description(model: Type[BaseModel]) -> str:
return f"Dict[{key_type}, {value_type}]"
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
return generate_model_description(field_type)
elif hasattr(field_type, "__name__"):
return field_type.__name__
else:
return str(field_type)
return field_type.__name__
fields = model.__annotations__
field_descriptions = [

View File

@@ -1,5 +1,3 @@
"""JSON encoder for handling CrewAI specific types."""
import json
from datetime import date, datetime
from decimal import Decimal
@@ -10,7 +8,6 @@ from pydantic import BaseModel
class CrewJSONEncoder(json.JSONEncoder):
"""Custom JSON encoder for CrewAI objects and special types."""
def default(self, obj):
if isinstance(obj, BaseModel):
return self._handle_pydantic_model(obj)

View File

@@ -6,10 +6,9 @@ from pydantic import BaseModel, ValidationError
from crewai.agents.parser import OutputParserException
"""Parser for converting text outputs into Pydantic models."""
class CrewPydanticOutputParser:
"""Parses text outputs into specified Pydantic models."""
"""Parses the text into pydantic models"""
pydantic_object: Type[BaseModel]

View File

@@ -14,7 +14,6 @@ class EmbeddingConfigurator:
"vertexai": self._configure_vertexai,
"google": self._configure_google,
"cohere": self._configure_cohere,
"voyageai": self._configure_voyageai,
"bedrock": self._configure_bedrock,
"huggingface": self._configure_huggingface,
"watson": self._configure_watson,
@@ -125,17 +124,6 @@ class EmbeddingConfigurator:
api_key=config.get("api_key"),
)
@staticmethod
def _configure_voyageai(config, model_name):
from chromadb.utils.embedding_functions.voyageai_embedding_function import (
VoyageAIEmbeddingFunction,
)
return VoyageAIEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
@staticmethod
def _configure_bedrock(config, model_name):
from chromadb.utils.embedding_functions.amazon_bedrock_embedding_function import (

View File

@@ -1,39 +0,0 @@
"""Error message definitions for CrewAI database operations."""
from typing import Optional
class DatabaseOperationError(Exception):
"""Base exception class for database operation errors."""
def __init__(self, message: str, original_error: Optional[Exception] = None):
"""Initialize the database operation error.
Args:
message: The error message to display
original_error: The original exception that caused this error, if any
"""
super().__init__(message)
self.original_error = original_error
class DatabaseError:
"""Standardized error message templates for database operations."""
INIT_ERROR: str = "Database initialization error: {}"
SAVE_ERROR: str = "Error saving task outputs: {}"
UPDATE_ERROR: str = "Error updating task outputs: {}"
LOAD_ERROR: str = "Error loading task outputs: {}"
DELETE_ERROR: str = "Error deleting task outputs: {}"
@classmethod
def format_error(cls, template: str, error: Exception) -> str:
"""Format an error message with the given template and error.
Args:
template: The error message template to use
error: The exception to format into the template
Returns:
The formatted error message
"""
return template.format(str(error))

View File

@@ -180,12 +180,12 @@ class CrewEvaluator:
self._test_result_span = self._telemetry.individual_test_result_span(
self.crew,
evaluation_result.pydantic.quality,
current_task.execution_duration,
current_task._execution_time,
self.openai_model_name,
)
self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality)
self.run_execution_times[self.iteration].append(
current_task.execution_duration
current_task._execution_time
)
else:
raise ValueError("Evaluation result is not in the expected format")

View File

@@ -4,10 +4,8 @@ from typing import Dict, Optional, Union
from pydantic import BaseModel, Field, PrivateAttr, model_validator
"""Internationalization support for CrewAI prompts and messages."""
class I18N(BaseModel):
"""Handles loading and retrieving internationalized prompts."""
_prompts: Dict[str, Dict[str, str]] = PrivateAttr()
prompt_file: Optional[str] = Field(
default=None,

View File

@@ -11,10 +11,12 @@ class InternalInstructor:
model: Type,
agent: Optional[Any] = None,
llm: Optional[str] = None,
instructions: Optional[str] = None,
):
self.content = content
self.agent = agent
self.llm = llm
self.instructions = instructions
self.model = model
self._client = None
self.set_instructor()
@@ -29,7 +31,10 @@ class InternalInstructor:
import instructor
from litellm import completion
self._client = instructor.from_litellm(completion)
self._client = instructor.from_litellm(
completion,
mode=instructor.Mode.TOOLS,
)
def to_json(self):
model = self.to_pydantic()
@@ -37,6 +42,8 @@ class InternalInstructor:
def to_pydantic(self):
messages = [{"role": "user", "content": self.content}]
if self.instructions:
messages.append({"role": "system", "content": self.instructions})
model = self._client.chat.completions.create(
model=self.llm.model, response_model=self.model, messages=messages
)

View File

@@ -1,181 +0,0 @@
import os
from typing import Any, Dict, List, Optional, Union
from crewai.cli.constants import DEFAULT_LLM_MODEL, ENV_VARS, LITELLM_PARAMS
from crewai.llm import LLM
def create_llm(
llm_value: Union[str, LLM, Any, None] = None,
) -> Optional[LLM]:
"""
Creates or returns an LLM instance based on the given llm_value.
Args:
llm_value (str | LLM | Any | None):
- str: The model name (e.g., "gpt-4").
- LLM: Already instantiated LLM, returned as-is.
- Any: Attempt to extract known attributes like model_name, temperature, etc.
- None: Use environment-based or fallback default model.
Returns:
An LLM instance if successful, or None if something fails.
"""
# 1) If llm_value is already an LLM object, return it directly
if isinstance(llm_value, LLM):
return llm_value
# 2) If llm_value is a string (model name)
if isinstance(llm_value, str):
try:
created_llm = LLM(model=llm_value)
return created_llm
except Exception as e:
print(f"Failed to instantiate LLM with model='{llm_value}': {e}")
return None
# 3) If llm_value is None, parse environment variables or use default
if llm_value is None:
return _llm_via_environment_or_fallback()
# 4) Otherwise, attempt to extract relevant attributes from an unknown object
try:
# Extract attributes with explicit types
model = (
getattr(llm_value, "model_name", None)
or getattr(llm_value, "deployment_name", None)
or str(llm_value)
)
temperature: Optional[float] = getattr(llm_value, "temperature", None)
max_tokens: Optional[int] = getattr(llm_value, "max_tokens", None)
logprobs: Optional[int] = getattr(llm_value, "logprobs", None)
timeout: Optional[float] = getattr(llm_value, "timeout", None)
api_key: Optional[str] = getattr(llm_value, "api_key", None)
base_url: Optional[str] = getattr(llm_value, "base_url", None)
created_llm = LLM(
model=model,
temperature=temperature,
max_tokens=max_tokens,
logprobs=logprobs,
timeout=timeout,
api_key=api_key,
base_url=base_url,
)
return created_llm
except Exception as e:
print(f"Error instantiating LLM from unknown object type: {e}")
return None
def _llm_via_environment_or_fallback() -> Optional[LLM]:
"""
Helper function: if llm_value is None, we load environment variables or fallback default model.
"""
model_name = (
os.environ.get("OPENAI_MODEL_NAME")
or os.environ.get("MODEL")
or DEFAULT_LLM_MODEL
)
# Initialize parameters with correct types
model: str = model_name
temperature: Optional[float] = None
max_tokens: Optional[int] = None
max_completion_tokens: Optional[int] = None
logprobs: Optional[int] = None
timeout: Optional[float] = None
api_key: Optional[str] = None
base_url: Optional[str] = None
api_version: Optional[str] = None
presence_penalty: Optional[float] = None
frequency_penalty: Optional[float] = None
top_p: Optional[float] = None
n: Optional[int] = None
stop: Optional[Union[str, List[str]]] = None
logit_bias: Optional[Dict[int, float]] = None
response_format: Optional[Dict[str, Any]] = None
seed: Optional[int] = None
top_logprobs: Optional[int] = None
callbacks: List[Any] = []
# Optional base URL from env
api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get("OPENAI_BASE_URL")
if api_base:
base_url = api_base
# Initialize llm_params dictionary
llm_params: Dict[str, Any] = {
"model": model,
"temperature": temperature,
"max_tokens": max_tokens,
"max_completion_tokens": max_completion_tokens,
"logprobs": logprobs,
"timeout": timeout,
"api_key": api_key,
"base_url": base_url,
"api_version": api_version,
"presence_penalty": presence_penalty,
"frequency_penalty": frequency_penalty,
"top_p": top_p,
"n": n,
"stop": stop,
"logit_bias": logit_bias,
"response_format": response_format,
"seed": seed,
"top_logprobs": top_logprobs,
"callbacks": callbacks,
}
UNACCEPTED_ATTRIBUTES = [
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME",
]
set_provider = model_name.split("/")[0] if "/" in model_name else "openai"
if set_provider in ENV_VARS:
env_vars_for_provider = ENV_VARS[set_provider]
if isinstance(env_vars_for_provider, (list, tuple)):
for env_var in env_vars_for_provider:
key_name = env_var.get("key_name")
if key_name and key_name not in UNACCEPTED_ATTRIBUTES:
env_value = os.environ.get(key_name)
if env_value:
# Map environment variable names to recognized parameters
param_key = _normalize_key_name(key_name.lower())
llm_params[param_key] = env_value
elif isinstance(env_var, dict):
if env_var.get("default", False):
for key, value in env_var.items():
if key not in ["prompt", "key_name", "default"]:
llm_params[key.lower()] = value
else:
print(
f"Expected env_var to be a dictionary, but got {type(env_var)}"
)
# Remove None values
llm_params = {k: v for k, v in llm_params.items() if v is not None}
# Try creating the LLM
try:
new_llm = LLM(**llm_params)
return new_llm
except Exception as e:
print(
f"Error instantiating LLM from environment/fallback: {type(e).__name__}: {e}"
)
return None
def _normalize_key_name(key_name: str) -> str:
"""
Maps environment variable names to recognized litellm parameter keys,
using patterns from LITELLM_PARAMS.
"""
for pattern in LITELLM_PARAMS:
if pattern in key_name:
return pattern
return key_name

View File

@@ -10,8 +10,24 @@ class Logger(BaseModel):
_printer: Printer = PrivateAttr(default_factory=Printer)
def log(self, level, message, color="bold_yellow"):
if self.verbose:
if self.verbose or level.upper() in ["WARNING", "ERROR"]:
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self._printer.print(
f"\n[{timestamp}][{level.upper()}]: {message}", color=color
)
def debug(self, message: str) -> None:
"""Log a debug message if verbose is enabled."""
self.log("debug", message, color="bold_blue")
def info(self, message: str) -> None:
"""Log an info message if verbose is enabled."""
self.log("info", message, color="bold_green")
def warning(self, message: str) -> None:
"""Log a warning message."""
self.log("warning", message, color="bold_yellow")
def error(self, message: str) -> None:
"""Log an error message."""
self.log("error", message, color="bold_red")

View File

@@ -3,24 +3,17 @@ from pathlib import Path
import appdirs
"""Path management utilities for CrewAI storage and configuration."""
def db_storage_path() -> str:
"""Returns the path for SQLite database storage.
Returns:
str: Full path to the SQLite database file
"""
def db_storage_path():
app_name = get_project_directory_name()
app_author = "CrewAI"
data_dir = Path(appdirs.user_data_dir(app_name, app_author))
data_dir.mkdir(parents=True, exist_ok=True)
return str(data_dir)
return data_dir
def get_project_directory_name():
"""Returns the current project directory name."""
project_directory_name = os.environ.get("CREWAI_STORAGE_DIR")
if project_directory_name:
@@ -28,4 +21,4 @@ def get_project_directory_name():
else:
cwd = Path.cwd()
project_directory_name = cwd.name
return project_directory_name
return project_directory_name

View File

@@ -1,3 +1,4 @@
import json
import logging
from typing import Any, List, Optional
@@ -6,11 +7,10 @@ from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.task import Task
"""Handles planning and coordination of crew tasks."""
logger = logging.getLogger(__name__)
class PlanPerTask(BaseModel):
"""Represents a plan for a specific task."""
task: str = Field(..., description="The task for which the plan is created")
plan: str = Field(
...,
@@ -19,7 +19,6 @@ class PlanPerTask(BaseModel):
class PlannerTaskPydanticOutput(BaseModel):
"""Output format for task planning results."""
list_of_plans_per_task: List[PlanPerTask] = Field(
...,
description="Step by step plan on how the agents can execute their tasks using the available tools with mastery",
@@ -27,7 +26,6 @@ class PlannerTaskPydanticOutput(BaseModel):
class CrewPlanner:
"""Plans and coordinates the execution of crew tasks."""
def __init__(self, tasks: List[Task], planning_agent_llm: Optional[Any] = None):
self.tasks = tasks
@@ -77,10 +75,10 @@ class CrewPlanner:
def _get_agent_knowledge(self, task: Task) -> List[str]:
"""
Safely retrieve knowledge source content from the task's agent.
Args:
task: The task containing an agent with potential knowledge sources
Returns:
List[str]: A list of knowledge source strings
"""
@@ -107,6 +105,6 @@ class CrewPlanner:
f"[{', '.join(str(tool) for tool in task.agent.tools)}]" if task.agent and task.agent.tools else '"agent has no tools"',
f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"' if knowledge_list and str(knowledge_list) != "None" else ""
)
tasks_summary.append(task_summary)
return " ".join(tasks_summary)

View File

@@ -1,11 +1,7 @@
"""Utility for colored console output."""
from typing import Optional
class Printer:
"""Handles colored console output formatting."""
def print(self, content: str, color: Optional[str] = None):
if color == "purple":
self._print_purple(content)
@@ -21,16 +17,6 @@ class Printer:
self._print_yellow(content)
elif color == "bold_yellow":
self._print_bold_yellow(content)
elif color == "cyan":
self._print_cyan(content)
elif color == "bold_cyan":
self._print_bold_cyan(content)
elif color == "magenta":
self._print_magenta(content)
elif color == "bold_magenta":
self._print_bold_magenta(content)
elif color == "green":
self._print_green(content)
else:
print(content)
@@ -54,18 +40,3 @@ class Printer:
def _print_bold_yellow(self, content):
print("\033[1m\033[93m {}\033[00m".format(content))
def _print_cyan(self, content):
print("\033[96m {}\033[00m".format(content))
def _print_bold_cyan(self, content):
print("\033[1m\033[96m {}\033[00m".format(content))
def _print_magenta(self, content):
print("\033[35m {}\033[00m".format(content))
def _print_bold_magenta(self, content):
print("\033[1m\033[35m {}\033[00m".format(content))
def _print_green(self, content):
print("\033[32m {}\033[00m".format(content))

View File

@@ -63,16 +63,32 @@ class Prompts(BaseModel):
for component in components
if component != "task"
]
system = system_template.replace("{{ .System }}", "".join(prompt_parts))
prompt = prompt_template.replace(
"{{ .Prompt }}", "".join(self.i18n.slice("task"))
)
response = response_template.split("{{ .Response }}")[0]
prompt = f"{system}\n{prompt}\n{response}"
system = ""
if system_template:
system = system_template.replace("{{ .System }}", "".join(prompt_parts))
prompt_text = ""
if prompt_template:
prompt_text = prompt_template.replace(
"{{ .Prompt }}", "".join(self.i18n.slice("task"))
)
response = ""
if response_template:
response = response_template.split("{{ .Response }}")[0]
parts = [p for p in [system, prompt_text, response] if p]
prompt = "\n".join(parts) if parts else ""
# Get agent attributes with default values
goal = str(getattr(self.agent, 'goal', '') or '')
role = str(getattr(self.agent, 'role', '') or '')
backstory = str(getattr(self.agent, 'backstory', '') or '')
# Replace placeholders with agent attributes
prompt = (
prompt.replace("{goal}", self.agent.goal)
.replace("{role}", self.agent.role)
.replace("{backstory}", self.agent.backstory)
prompt.replace("{goal}", goal)
.replace("{role}", role)
.replace("{backstory}", backstory)
)
return prompt

View File

@@ -1,4 +1,4 @@
from typing import Dict, List, Type, Union, get_args, get_origin
from typing import Type, Union, get_args, get_origin
from pydantic import BaseModel
@@ -10,83 +10,40 @@ class PydanticSchemaParser(BaseModel):
"""
Public method to get the schema of a Pydantic model.
:param model: The Pydantic model class to generate schema for.
:return: String representation of the model schema.
"""
return "{\n" + self._get_model_schema(self.model) + "\n}"
return self._get_model_schema(self.model)
def _get_model_schema(self, model: Type[BaseModel], depth: int = 0) -> str:
indent = " " * 4 * depth
lines = [
f"{indent} {field_name}: {self._get_field_type(field, depth + 1)}"
for field_name, field in model.model_fields.items()
]
return ",\n".join(lines)
def _get_model_schema(self, model, depth=0) -> str:
indent = " " * depth
lines = [f"{indent}{{"]
for field_name, field in model.model_fields.items():
field_type_str = self._get_field_type(field, depth + 1)
lines.append(f"{indent} {field_name}: {field_type_str},")
lines[-1] = lines[-1].rstrip(",") # Remove trailing comma from last item
lines.append(f"{indent}}}")
return "\n".join(lines)
def _get_field_type(self, field, depth: int) -> str:
def _get_field_type(self, field, depth) -> str:
field_type = field.annotation
origin = get_origin(field_type)
if origin in {list, List}:
if get_origin(field_type) is list:
list_item_type = get_args(field_type)[0]
return self._format_list_type(list_item_type, depth)
if origin in {dict, Dict}:
key_type, value_type = get_args(field_type)
return f"Dict[{key_type.__name__}, {value_type.__name__}]"
if origin is Union:
return self._format_union_type(field_type, depth)
if isinstance(field_type, type) and issubclass(field_type, BaseModel):
nested_schema = self._get_model_schema(field_type, depth)
nested_indent = " " * 4 * depth
return f"{field_type.__name__}\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}"
return field_type.__name__
def _format_list_type(self, list_item_type, depth: int) -> str:
if isinstance(list_item_type, type) and issubclass(list_item_type, BaseModel):
nested_schema = self._get_model_schema(list_item_type, depth + 1)
nested_indent = " " * 4 * (depth)
return f"List[\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}\n{nested_indent}]"
return f"List[{list_item_type.__name__}]"
def _format_union_type(self, field_type, depth: int) -> str:
args = get_args(field_type)
if type(None) in args:
# It's an Optional type
non_none_args = [arg for arg in args if arg is not type(None)]
if len(non_none_args) == 1:
inner_type = self._get_field_type_for_annotation(
non_none_args[0], depth
)
return f"Optional[{inner_type}]"
if isinstance(list_item_type, type) and issubclass(
list_item_type, BaseModel
):
nested_schema = self._get_model_schema(list_item_type, depth + 1)
return f"List[\n{nested_schema}\n{' ' * 4 * depth}]"
else:
# Union with None and multiple other types
inner_types = ", ".join(
self._get_field_type_for_annotation(arg, depth)
for arg in non_none_args
)
return f"Optional[Union[{inner_types}]]"
return f"List[{list_item_type.__name__}]"
elif get_origin(field_type) is Union:
union_args = get_args(field_type)
if type(None) in union_args:
non_none_type = next(arg for arg in union_args if arg is not type(None))
return f"Optional[{self._get_field_type(field.__class__(annotation=non_none_type), depth)}]"
else:
return f"Union[{', '.join(arg.__name__ for arg in union_args)}]"
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
return self._get_model_schema(field_type, depth)
else:
# General Union type
inner_types = ", ".join(
self._get_field_type_for_annotation(arg, depth) for arg in args
)
return f"Union[{inner_types}]"
def _get_field_type_for_annotation(self, annotation, depth: int) -> str:
origin = get_origin(annotation)
if origin in {list, List}:
list_item_type = get_args(annotation)[0]
return self._format_list_type(list_item_type, depth)
if origin in {dict, Dict}:
key_type, value_type = get_args(annotation)
return f"Dict[{key_type.__name__}, {value_type.__name__}]"
if origin is Union:
return self._format_union_type(annotation, depth)
if isinstance(annotation, type) and issubclass(annotation, BaseModel):
nested_schema = self._get_model_schema(annotation, depth)
nested_indent = " " * 4 * depth
return f"{annotation.__name__}\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}"
return annotation.__name__
return getattr(field_type, "__name__", str(field_type))

View File

@@ -6,12 +6,8 @@ from pydantic import BaseModel, Field, PrivateAttr, model_validator
from crewai.utilities.logger import Logger
"""Controls request rate limiting for API calls."""
class RPMController(BaseModel):
"""Manages requests per minute limiting."""
max_rpm: Optional[int] = Field(default=None)
logger: Logger = Field(default_factory=lambda: Logger(verbose=False))
_current_rpm: int = PrivateAttr(default=0)

View File

@@ -8,10 +8,8 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
)
from crewai.task import Task
"""Handles storage and retrieval of task execution outputs."""
class ExecutionLog(BaseModel):
"""Represents a log entry for task execution."""
task_id: str
expected_output: Optional[str] = None
output: Dict[str, Any]
@@ -24,8 +22,6 @@ class ExecutionLog(BaseModel):
return getattr(self, key)
"""Manages storage and retrieval of task outputs."""
class TaskOutputStorageHandler:
def __init__(self) -> None:
self.storage = KickoffTaskOutputsSQLiteStorage()

View File

@@ -1,5 +1,4 @@
import warnings
from typing import Any, Dict, Optional
from litellm.integrations.custom_logger import CustomLogger
from litellm.types.utils import Usage
@@ -8,30 +7,20 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
class TokenCalcHandler(CustomLogger):
def __init__(self, token_cost_process: Optional[TokenProcess]):
def __init__(self, token_cost_process: TokenProcess):
self.token_cost_process = token_cost_process
def log_success_event(
self,
kwargs: Dict[str, Any],
response_obj: Dict[str, Any],
start_time: float,
end_time: float,
) -> None:
def log_success_event(self, kwargs, response_obj, start_time, end_time):
if self.token_cost_process is None:
return
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
if isinstance(response_obj, dict) and "usage" in response_obj:
usage: Usage = response_obj["usage"]
if usage:
self.token_cost_process.sum_successful_requests(1)
if hasattr(usage, "prompt_tokens"):
self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
if hasattr(usage, "completion_tokens"):
self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
if hasattr(usage, "prompt_tokens_details") and usage.prompt_tokens_details:
self.token_cost_process.sum_cached_prompt_tokens(
usage.prompt_tokens_details.cached_tokens
)
usage: Usage = response_obj["usage"]
self.token_cost_process.sum_successful_requests(1)
self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
if usage.prompt_tokens_details:
self.token_cost_process.sum_cached_prompt_tokens(
usage.prompt_tokens_details.cached_tokens
)

View File

@@ -0,0 +1,74 @@
"""Token processing utility for tracking and managing token usage."""
from __future__ import annotations
from typing import Any, Dict, List, Optional, Union
from crewai.types.usage_metrics import UsageMetrics
class TokenProcess:
"""Handles token processing and tracking for agents."""
def __init__(self):
"""Initialize the token processor."""
self._total_tokens = 0
self._prompt_tokens = 0
self._completion_tokens = 0
self._cached_prompt_tokens = 0
self._successful_requests = 0
def sum_prompt_tokens(self, count: int) -> None:
"""Add to prompt token count.
Args:
count (int): Number of prompt tokens to add
"""
self._prompt_tokens += count
self._total_tokens += count
def sum_completion_tokens(self, count: int) -> None:
"""Add to completion token count.
Args:
count (int): Number of completion tokens to add
"""
self._completion_tokens += count
self._total_tokens += count
def sum_cached_prompt_tokens(self, count: int) -> None:
"""Add to cached prompt token count.
Args:
count (int): Number of cached prompt tokens to add
"""
self._cached_prompt_tokens += count
def sum_successful_requests(self, count: int) -> None:
"""Add to successful requests count.
Args:
count (int): Number of successful requests to add
"""
self._successful_requests += count
def reset(self) -> None:
"""Reset all token counts to zero."""
self._total_tokens = 0
self._prompt_tokens = 0
self._completion_tokens = 0
self._cached_prompt_tokens = 0
self._successful_requests = 0
def get_summary(self) -> UsageMetrics:
"""Get a summary of token usage.
Returns:
UsageMetrics: Object containing token usage metrics
"""
return UsageMetrics(
total_tokens=self._total_tokens,
prompt_tokens=self._prompt_tokens,
cached_prompt_tokens=self._cached_prompt_tokens,
completion_tokens=self._completion_tokens,
successful_requests=self._successful_requests
)

View File

@@ -16,7 +16,7 @@ from crewai.tools import tool
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import Printer, RPMController
from crewai.utilities import RPMController
from crewai.utilities.events import Emitter
@@ -114,6 +114,35 @@ def test_custom_llm_temperature_preservation():
assert agent.llm.temperature == 0.7
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
from langchain_openai import ChatOpenAI
from crewai import Task
agent = Agent(
role="Math Tutor",
goal="Solve math problems accurately",
backstory="You are an experienced math tutor with a knack for explaining complex concepts simply.",
llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),
)
task = Task(
description="Calculate the area of a circle with radius 5 cm.",
expected_output="The calculated area of the circle in square centimeters.",
agent=agent,
)
result = agent.execute_task(task)
assert result is not None
assert (
result
== "The calculated area of the circle is approximately 78.5 square centimeters."
)
assert "square centimeters" in result.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execution():
agent = Agent(
@@ -536,7 +565,7 @@ def test_agent_moved_on_after_max_iterations():
task=task,
tools=[get_final_answer],
)
assert output == "42"
assert output == "The final answer is 42."
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -545,6 +574,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
agent = Agent(
role="test role",
@@ -611,14 +641,15 @@ def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_without_max_rpm_respects_crew_rpm(capsys):
def test_agent_without_max_rpm_respet_crew_rpm(capsys):
from unittest.mock import patch
from crewai.tools import tool
@tool
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this tool non-stop."""
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
agent1 = Agent(
@@ -635,30 +666,23 @@ def test_agent_without_max_rpm_respects_crew_rpm(capsys):
role="test role2",
goal="test goal2",
backstory="test backstory2",
max_iter=5,
max_iter=1,
verbose=True,
allow_delegation=False,
)
tasks = [
Task(
description="Just say hi.",
agent=agent1,
expected_output="Your greeting.",
description="Just say hi.", agent=agent1, expected_output="Your greeting."
),
Task(
description=(
"NEVER give a Final Answer, unless you are told otherwise, "
"instead keep using the `get_final_answer` tool non-stop, "
"until you must give your best final answer"
),
description="NEVER give a Final Answer, unless you are told otherwise, instead keep using the `get_final_answer` tool non-stop, until you must give you best final answer",
expected_output="The final answer",
tools=[get_final_answer],
agent=agent2,
),
]
# Set crew's max_rpm to 1 to trigger RPM limit
crew = Crew(agents=[agent1, agent2], tasks=tasks, max_rpm=1, verbose=True)
with patch.object(RPMController, "_wait_for_next_minute") as moveon:
@@ -1433,7 +1457,7 @@ def test_agent_with_ollama_llama3():
assert agent.llm.model == "ollama/llama3.2:3b"
assert agent.llm.base_url == "http://localhost:11434"
task = "Respond in 20 words. Which model are you?"
task = "Respond in 20 words. Who are you?"
response = agent.llm.call([{"role": "user", "content": task}])
assert response
@@ -1449,9 +1473,7 @@ def test_llm_call_with_ollama_llama3():
temperature=0.7,
max_tokens=30,
)
messages = [
{"role": "user", "content": "Respond in 20 words. Which model are you?"}
]
messages = [{"role": "user", "content": "Respond in 20 words. Who are you?"}]
response = llm.call(messages)
@@ -1466,7 +1488,7 @@ def test_agent_execute_task_basic():
role="test role",
goal="test goal",
backstory="test backstory",
llm="gpt-4o-mini",
llm=LLM(model="gpt-3.5-turbo"),
)
task = Task(
@@ -1600,142 +1622,3 @@ def test_agent_with_knowledge_sources():
# Assert that the agent provides the correct information
assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_litellm_auth_error_handling():
"""Test that LiteLLM authentication errors are handled correctly and not retried."""
from litellm import AuthenticationError as LiteLLMAuthenticationError
# Create an agent with a mocked LLM and max_retry_limit=0
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4"),
max_retry_limit=0, # Disable retries for authentication errors
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(LiteLLMAuthenticationError, match="Invalid API key"),
):
mock_llm_call.side_effect = LiteLLMAuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
agent.execute_task(task)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
def test_crew_agent_executor_litellm_auth_error():
"""Test that CrewAgentExecutor handles LiteLLM authentication errors by raising them."""
from litellm.exceptions import AuthenticationError
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import Printer
# Create an agent and executor
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4", api_key="invalid_api_key"),
)
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Create executor with all required parameters
executor = CrewAgentExecutor(
agent=agent,
task=task,
llm=agent.llm,
crew=None,
prompt={"system": "You are a test agent", "user": "Execute the task: {input}"},
max_iter=5,
tools=[],
tools_names="",
stop_words=[],
tools_description="",
tools_handler=ToolsHandler(),
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
patch.object(Printer, "print") as mock_printer,
pytest.raises(AuthenticationError) as exc_info,
):
mock_llm_call.side_effect = AuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
executor.invoke(
{
"input": "test input",
"tool_names": "",
"tools": "",
}
)
# Verify error handling messages
error_message = f"Error during LLM call: {str(mock_llm_call.side_effect)}"
mock_printer.assert_any_call(
content=error_message,
color="red",
)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
# Assert that the exception was raised and has the expected attributes
assert exc_info.type is AuthenticationError
assert "Invalid API key".lower() in exc_info.value.message.lower()
assert exc_info.value.llm_provider == "openai"
assert exc_info.value.model == "gpt-4"
def test_litellm_anthropic_error_handling():
"""Test that AnthropicError from LiteLLM is handled correctly and not retried."""
from litellm.llms.anthropic.common_utils import AnthropicError
# Create an agent with a mocked LLM that uses an Anthropic model
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="claude-3.5-sonnet-20240620"),
max_retry_limit=0,
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AnthropicError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(AnthropicError, match="Test Anthropic error"),
):
mock_llm_call.side_effect = AnthropicError(
status_code=500,
message="Test Anthropic error",
)
agent.execute_task(task)
# Verify the LLM call was only made once (no retries)
mock_llm_call.assert_called_once()

View File

@@ -7,7 +7,7 @@ from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
class MockAgent(BaseAgent):
class TestAgent(BaseAgent):
def execute_task(
self,
task: Any,
@@ -29,7 +29,7 @@ class MockAgent(BaseAgent):
def test_key():
agent = MockAgent(
agent = TestAgent(
role="test role",
goal="test goal",
backstory="test backstory",

View File

@@ -2,22 +2,22 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: The final answer is
42. But don''t give it yet, instead keep using the `get_final_answer` tool.\n\nThis
is the expect criteria for your final answer: The final answer\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"],
"stream": false}'
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: The final answer is 42. But don''t give it yet, instead keep
using the `get_final_answer` tool.\n\nThis is the expect criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -26,15 +26,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1377'
- '1417'
content-type:
- application/json
cookie:
- _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -44,35 +45,30 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-An9sn6yimejzB3twOt8E2VAj4Bfmm\",\n \"object\":
\"chat.completion\",\n \"created\": 1736279425,\n \"model\": \"gpt-4o-2024-08-06\",\n
content: "{\n \"id\": \"chatcmpl-AB7NCE9qkjnVxfeWuK9NjyCdymuXJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213314,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to use the `get_final_answer`
tool to fulfill the current task requirement.\\n\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
273,\n \"completion_tokens\": 30,\n \"total_tokens\": 303,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
tool as instructed.\\n\\nAction: get_final_answer\\nAction Input: {}\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 291,\n \"completion_tokens\":
26,\n \"total_tokens\": 317,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fe67a03ce78ed83-ATL
- 8c85dd6b5f411cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -80,27 +76,19 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 07 Jan 2025 19:50:25 GMT
- Tue, 24 Sep 2024 21:28:34 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=PsMOhP_yeSFIMA.FfRlNbisoG88z4l9NSd0zfS5UrOQ-1736279425-1.0.1.1-mdXy_XDkelJX2.9BSuZsl5IsPRGBdcHgIMc_SRz83WcmGCYUkTm1j_f892xrJbOVheWWH9ULwCQrVESupV37Sg;
path=/; expires=Tue, 07-Jan-25 20:20:25 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=EYb4UftLm_C7qM4YT78IJt46hRSubZHKnfTXhFp6ZRU-1736279425874-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1218'
- '526'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -112,38 +100,38 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999681'
- '29999666'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_779992da2a3eb4a25f0b57905c9e8e41
- req_ed8ca24c64cfdc2b6266c9c8438749f5
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: The final answer is
42. But don''t give it yet, instead keep using the `get_final_answer` tool.\n\nThis
is the expect criteria for your final answer: The final answer\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
I need to use the `get_final_answer` tool to fulfill the current task requirement.\n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42\nNow it''s time you MUST
give your absolute best final answer. You''ll ignore all previous instructions,
stop using any tools, and just return your absolute BEST Final answer."}], "model":
"gpt-4o", "stop": ["\nObservation:"], "stream": false}'
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: The final answer is 42. But don''t give it yet, instead keep
using the `get_final_answer` tool.\n\nThis is the expect criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "Thought: I need to use the `get_final_answer`
tool as instructed.\n\nAction: get_final_answer\nAction Input: {}\nObservation:
42\nNow it''s time you MUST give your absolute best final answer. You''ll ignore
all previous instructions, stop using any tools, and just return your absolute
BEST Final answer."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -152,16 +140,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1743'
- '1757'
content-type:
- application/json
cookie:
- _cfuvid=EYb4UftLm_C7qM4YT78IJt46hRSubZHKnfTXhFp6ZRU-1736279425874-0.0.1.1-604800000;
__cf_bm=PsMOhP_yeSFIMA.FfRlNbisoG88z4l9NSd0zfS5UrOQ-1736279425-1.0.1.1-mdXy_XDkelJX2.9BSuZsl5IsPRGBdcHgIMc_SRz83WcmGCYUkTm1j_f892xrJbOVheWWH9ULwCQrVESupV37Sg
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -171,34 +159,29 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-An9soTDQVS0ANTzaTZeo6lYN44ZPR\",\n \"object\":
\"chat.completion\",\n \"created\": 1736279426,\n \"model\": \"gpt-4o-2024-08-06\",\n
content: "{\n \"id\": \"chatcmpl-AB7NDCKCn3PlhjPvgqbywxUumo3Qt\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213315,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now know the final answer.\\n\\nFinal
Answer: 42\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
344,\n \"completion_tokens\": 12,\n \"total_tokens\": 356,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I now know the final answer\\nFinal
Answer: The final answer is 42.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
358,\n \"completion_tokens\": 19,\n \"total_tokens\": 377,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fe67a0c4dbeed83-ATL
- 8c85dd72daa31cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -206,7 +189,7 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 07 Jan 2025 19:50:26 GMT
- Tue, 24 Sep 2024 21:28:36 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -215,12 +198,10 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '434'
- '468'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -232,13 +213,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999598'
- '29999591'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_1184308c5a4ed9130d397fe1645f317e
- req_3f49e6033d3b0400ea55125ca2cf4ee0
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -2,21 +2,21 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -25,13 +25,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1367'
- '1325'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -41,35 +44,30 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsXdf4OZKCZSigmN4k0gyh67NciqP\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562383,\n \"model\": \"gpt-4o-2024-08-06\",\n
content: "{\n \"id\": \"chatcmpl-ABAtOWmVjvzQ9X58tKAUcOF4gmXwx\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226842,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I have to use the available
tool to get the final answer. Let's proceed with executing it.\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
274,\n \"completion_tokens\": 33,\n \"total_tokens\": 307,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I need to use the get_final_answer
tool to determine the final answer.\\nAction: get_final_answer\\nAction Input:
{}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 274,\n \"completion_tokens\":
27,\n \"total_tokens\": 301,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 9060d43e3be1d690-IAD
- 8c8727b3492f31e6-MIA
Connection:
- keep-alive
Content-Encoding:
@@ -77,27 +75,19 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 16:13:03 GMT
- Wed, 25 Sep 2024 01:14:03 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
path=/; expires=Wed, 22-Jan-25 16:43:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '791'
- '348'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -109,59 +99,45 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999680'
- '29999682'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_eeed99acafd3aeb1e3d4a6c8063192b0
- req_be929caac49706f487950548bdcdd46e
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "```\nThought:
I have to use the available tool to get the final answer. Let''s proceed with
executing it.\nAction: get_final_answer\nAction Input: {}\nObservation: I encountered
an error: Error on parsing tool.\nMoving on then. I MUST either use a tool (use
one at time) OR give my best final answer not both at the same time. When responding,
I must use the following format:\n\n```\nThought: you should always think about
what to do\nAction: the action to take, should be one of [get_final_answer]\nAction
Input: the input to the action, dictionary enclosed in curly braces\nObservation:
the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat
N times. Once I know the final answer, I must return the following format:\n\n```\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described\n\n```"}, {"role":
"assistant", "content": "```\nThought: I have to use the available tool to get
the final answer. Let''s proceed with executing it.\nAction: get_final_answer\nAction
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "user", "content": "Thought: I need to use the
get_final_answer tool to determine the final answer.\nAction: get_final_answer\nAction
Input: {}\nObservation: I encountered an error: Error on parsing tool.\nMoving
on then. I MUST either use a tool (use one at time) OR give my best final answer
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, should
be one of [get_final_answer]\nAction Input: the input to the action, dictionary
enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
Input/Result can repeat N times. Once I know the final answer, I must return
the following format:\n\n```\nThought: I now can give a great answer\nFinal
not both at the same time. To Use the following format:\n\nThought: you should
always think about what to do\nAction: the action to take, should be one of
[get_final_answer]\nAction Input: the input to the action, dictionary enclosed
in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action
Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described\n\n```\nNow it''s time you MUST give your absolute
it must be outcome described\n\n \nNow it''s time you MUST give your absolute
best final answer. You''ll ignore all previous instructions, stop using any
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -170,16 +146,16 @@ interactions:
connection:
- keep-alive
content-length:
- '3445'
- '2320'
content-type:
- application/json
cookie:
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
_cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -189,36 +165,29 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsXdg9UrLvAiqWP979E6DszLsQ84k\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562384,\n \"model\": \"gpt-4o-2024-08-06\",\n
content: "{\n \"id\": \"chatcmpl-ABAtPaaeRfdNsZ3k06CfAmrEW8IJu\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226843,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: The final answer must be the great and the most complete as possible,
it must be outcome described.\\n```\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 719,\n \"completion_tokens\": 35,\n
\ \"total_tokens\": 754,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
\"assistant\",\n \"content\": \"Final Answer: The final answer\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 483,\n \"completion_tokens\":
6,\n \"total_tokens\": 489,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 9060d4441edad690-IAD
- 8c8727b9da1f31e6-MIA
Connection:
- keep-alive
Content-Encoding:
@@ -226,7 +195,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 16:13:05 GMT
- Wed, 25 Sep 2024 01:14:03 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -240,7 +209,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '928'
- '188'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -252,13 +221,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999187'
- '29999445'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_61fc7506e6db326ec572224aec81ef23
- req_d8e32538689fe064627468bad802d9a8
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -0,0 +1,121 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Math Tutor. You are
an experienced math tutor with a knack for explaining complex concepts simply.\nYour
personal goal is: Solve math problems accurately\nTo give my best complete final
answer to the task use the exact following format:\n\nThought: I now can give
a great answer\nFinal Answer: Your final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!"}, {"role": "user", "content": "\nCurrent Task: Calculate
the area of a circle with radius 5 cm.\n\nThis is the expect criteria for your
final answer: The calculated area of the circle in square centimeters.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "temperature":
0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '969'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7LEfa5gX4cncpI4avsK0CJG8pCb\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213192,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer\\n\\nTo
calculate the area of a circle, we use the formula:\\n\\n\\\\[ A = \\\\pi r^2
\\\\]\\n\\nwhere \\\\( A \\\\) is the area, \\\\( \\\\pi \\\\) (approximately
3.14), and \\\\( r \\\\) is the radius of the circle.\\n\\nGiven that the radius
\\\\( r \\\\) is 5 cm, we can substitute this value into the formula:\\n\\n\\\\[
A = \\\\pi (5 \\\\, \\\\text{cm})^2 \\\\]\\n\\nCalculating this step-by-step:\\n\\n1.
First, square the radius:\\n \\\\[ (5 \\\\, \\\\text{cm})^2 = 25 \\\\, \\\\text{cm}^2
\\\\]\\n\\n2. Then, multiply by \\\\( \\\\pi \\\\):\\n \\\\[ A = \\\\pi \\\\times
25 \\\\, \\\\text{cm}^2 \\\\]\\n\\nUsing the approximate value of \\\\( \\\\pi
\\\\):\\n \\\\[ A \\\\approx 3.14 \\\\times 25 \\\\, \\\\text{cm}^2 \\\\]\\n
\ \\\\[ A \\\\approx 78.5 \\\\, \\\\text{cm}^2 \\\\]\\n\\nThus, the area of
the circle is approximately 78.5 square centimeters.\\n\\nFinal Answer: The
calculated area of the circle is approximately 78.5 square centimeters.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 182,\n \"completion_tokens\":
270,\n \"total_tokens\": 452,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_1bb46167f9\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85da71fcac1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:26:34 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
path=/; expires=Tue, 24-Sep-24 21:56:34 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2244'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999774'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_2e565b5f24c38968e4e923a47ecc6233
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -2,15 +2,14 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
respond using the exact following format:\n\nThought: I now can give a great
answer\nFinal Answer: Your final answer must be the great and the most complete
as possible, it must be outcome described.\n\nI MUST use these formats, my job
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Calculate 2 +
2\n\nThis is the expect criteria for your final answer: The result of the calculation\nyou
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Calculate 2 + 2\n\nThis
is the expect criteria for your final answer: The result of the calculation\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-3.5-turbo"}'
headers:
accept:
- application/json
@@ -19,13 +18,16 @@ interactions:
connection:
- keep-alive
content-length:
- '833'
- '797'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -35,35 +37,29 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AoJqi2nPubKHXLut6gkvISe0PizvR\",\n \"object\":
\"chat.completion\",\n \"created\": 1736556064,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AB7WSAKkoU8Nfy5KZwYNlMSpoaSeY\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213888,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: The result of the calculation 2 + 2 is 4.\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 161,\n \"completion_tokens\":
25,\n \"total_tokens\": 186,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_bd83329f63\"\n}\n"
\"assistant\",\n \"content\": \"I now can give a great answer\\n\\nFinal
Answer: 2 + 2 = 4\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
159,\n \"completion_tokens\": 19,\n \"total_tokens\": 178,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 9000dbe81c55bf7f-ATL
- 8c85eb70a9401cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -71,45 +67,37 @@ interactions:
Content-Type:
- application/json
Date:
- Sat, 11 Jan 2025 00:41:05 GMT
- Tue, 24 Sep 2024 21:38:08 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=LCNQO7gfz6xDjDqEOZ7ha3jDwPnDlsjsmJyScVf4UUw-1736556065-1.0.1.1-2ZcyBDpLvmxy7UOdCrLd6falFapRDuAu6WcVrlOXN0QIgZiDVYD0bCFWGCKeeE.6UjPHoPY6QdlEZZx8.0Pggw;
path=/; expires=Sat, 11-Jan-25 01:11:05 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=cRATWhxkeoeSGFg3z7_5BrHO3JDsmDX2Ior2i7bNF4M-1736556065175-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1060'
- '489'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
- '10000'
x-ratelimit-limit-tokens:
- '150000000'
- '50000000'
x-ratelimit-remaining-requests:
- '29999'
- '9999'
x-ratelimit-remaining-tokens:
- '149999810'
- '49999813'
x-ratelimit-reset-requests:
- 2ms
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_463fbd324e01320dc253008f919713bd
- req_66c2e9625c005de2d6ffcec951018ec9
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -2,11 +2,11 @@ interactions:
- request:
body: '{"model": "llama3.2:3b", "prompt": "### System:\nYou are test role. test
backstory\nYour personal goal is: test goal\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!\n\n### User:\n\nCurrent Task: Explain what AI
is in one sentence\n\nThis is the expect criteria for your final answer: A one-sentence
answer to the task use the exact following format:\n\nThought: I now can give
a great answer\nFinal Answer: Your final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!\n\n### User:\n\nCurrent Task: Explain what AI is in one
sentence\n\nThis is the expect criteria for your final answer: A one-sentence
explanation of AI\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:\n\n",
@@ -19,25 +19,25 @@ interactions:
connection:
- keep-alive
content-length:
- '849'
- '839'
host:
- localhost:11434
user-agent:
- litellm/1.57.4
- litellm/1.56.4
method: POST
uri: http://localhost:11434/api/generate
response:
content: '{"model":"llama3.2:3b","created_at":"2025-01-10T18:39:31.893206Z","response":"Final
content: '{"model":"llama3.2:3b","created_at":"2024-12-31T16:56:15.759718Z","response":"Final
Answer: Artificial Intelligence (AI) refers to the development of computer systems
that can perform tasks that typically require human intelligence, including
learning, problem-solving, decision-making, and perception.","done":true,"done_reason":"stop","context":[128006,9125,128007,271,38766,1303,33025,2696,25,6790,220,2366,18,271,128009,128006,882,128007,271,14711,744,512,2675,527,1296,3560,13,1296,93371,198,7927,4443,5915,374,25,1296,5915,198,1271,3041,856,1888,4686,1620,4320,311,279,3465,6013,1701,279,4839,2768,3645,1473,85269,25,358,1457,649,3041,264,2294,4320,198,19918,22559,25,4718,1620,4320,2011,387,279,2294,323,279,1455,4686,439,3284,11,433,2011,387,15632,7633,382,40,28832,1005,1521,20447,11,856,2683,14117,389,433,2268,14711,2724,1473,5520,5546,25,83017,1148,15592,374,304,832,11914,271,2028,374,279,1755,13186,369,701,1620,4320,25,362,832,1355,18886,16540,315,15592,198,9514,28832,471,279,5150,4686,2262,439,279,1620,4320,11,539,264,12399,382,11382,0,1115,374,48174,3062,311,499,11,1005,279,7526,2561,323,3041,701,1888,13321,22559,11,701,2683,14117,389,433,2268,85269,1473,128009,128006,78191,128007,271,19918,22559,25,59294,22107,320,15836,8,19813,311,279,4500,315,6500,6067,430,649,2804,9256,430,11383,1397,3823,11478,11,2737,6975,11,3575,99246,11,5597,28846,11,323,21063,13],"total_duration":2216514375,"load_duration":38144042,"prompt_eval_count":182,"prompt_eval_duration":1415000000,"eval_count":38,"eval_duration":759000000}'
able to perform tasks that typically require human intelligence, including learning,
problem-solving, decision-making, and perception.","done":true,"done_reason":"stop","context":[128006,9125,128007,271,38766,1303,33025,2696,25,6790,220,2366,18,271,128009,128006,882,128007,271,14711,744,512,2675,527,1296,3560,13,1296,93371,198,7927,4443,5915,374,25,1296,5915,198,1271,3041,856,1888,4686,1620,4320,311,279,3465,1005,279,4839,2768,3645,1473,85269,25,358,1457,649,3041,264,2294,4320,198,19918,22559,25,4718,1620,4320,2011,387,279,2294,323,279,1455,4686,439,3284,11,433,2011,387,15632,7633,382,40,28832,1005,1521,20447,11,856,2683,14117,389,433,2268,14711,2724,1473,5520,5546,25,83017,1148,15592,374,304,832,11914,271,2028,374,279,1755,13186,369,701,1620,4320,25,362,832,1355,18886,16540,315,15592,198,9514,28832,471,279,5150,4686,2262,439,279,1620,4320,11,539,264,12399,382,11382,0,1115,374,48174,3062,311,499,11,1005,279,7526,2561,323,3041,701,1888,13321,22559,11,701,2683,14117,389,433,2268,85269,1473,128009,128006,78191,128007,271,19918,22559,25,59294,22107,320,15836,8,19813,311,279,4500,315,6500,6067,3025,311,2804,9256,430,11383,1397,3823,11478,11,2737,6975,11,3575,99246,11,5597,28846,11,323,21063,13],"total_duration":1156303250,"load_duration":35999125,"prompt_eval_count":181,"prompt_eval_duration":408000000,"eval_count":38,"eval_duration":711000000}'
headers:
Content-Length:
- '1534'
- '1528'
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 10 Jan 2025 18:39:31 GMT
- Tue, 31 Dec 2024 16:56:15 GMT
http_version: HTTP/1.1
status_code: 200
- request:
@@ -56,7 +56,7 @@ interactions:
host:
- localhost:11434
user-agent:
- litellm/1.57.4
- litellm/1.56.4
method: POST
uri: http://localhost:11434/api/show
response:
@@ -450,7 +450,422 @@ interactions:
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 10 Jan 2025 18:39:31 GMT
- Tue, 31 Dec 2024 16:56:15 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"name": "llama3.2:3b"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '23'
content-type:
- application/json
host:
- localhost:11434
user-agent:
- litellm/1.56.4
method: POST
uri: http://localhost:11434/api/show
response:
content: "{\"license\":\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version
Release Date: September 25, 2024\\n\\n\u201CAgreement\u201D means the terms
and conditions for use, reproduction, distribution \\nand modification of the
Llama Materials set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications,
manuals and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\n**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta is committed
to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (\u201C**Policy**\u201D).
The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\",\"modelfile\":\"# Modelfile generated by \\\"ollama
show\\\"\\n# To build a new Modelfile based on this, replace FROM with:\\n#
FROM llama3.2:3b\\n\\nFROM /Users/brandonhancock/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff\\nTEMPLATE
\\\"\\\"\\\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\\\"\\\"\\\"\\nPARAMETER stop \\u003c|start_header_id|\\u003e\\nPARAMETER
stop \\u003c|end_header_id|\\u003e\\nPARAMETER stop \\u003c|eot_id|\\u003e\\nLICENSE
\\\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version Release Date:
September 25, 2024\\n\\n\u201CAgreement\u201D means the terms and conditions
for use, reproduction, distribution \\nand modification of the Llama Materials
set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications, manuals
and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\\"\\nLICENSE \\\"**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta
is committed to promoting safe and fair use of its tools and features, including
Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use
Policy (\u201C**Policy**\u201D). The most recent copy of this policy can be
found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\\\"\\n\",\"parameters\":\"stop \\\"\\u003c|start_header_id|\\u003e\\\"\\nstop
\ \\\"\\u003c|end_header_id|\\u003e\\\"\\nstop \\\"\\u003c|eot_id|\\u003e\\\"\",\"template\":\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\",\"details\":{\"parent_model\":\"\",\"format\":\"gguf\",\"family\":\"llama\",\"families\":[\"llama\"],\"parameter_size\":\"3.2B\",\"quantization_level\":\"Q4_K_M\"},\"model_info\":{\"general.architecture\":\"llama\",\"general.basename\":\"Llama-3.2\",\"general.file_type\":15,\"general.finetune\":\"Instruct\",\"general.languages\":[\"en\",\"de\",\"fr\",\"it\",\"pt\",\"hi\",\"es\",\"th\"],\"general.parameter_count\":3212749888,\"general.quantization_version\":2,\"general.size_label\":\"3B\",\"general.tags\":[\"facebook\",\"meta\",\"pytorch\",\"llama\",\"llama-3\",\"text-generation\"],\"general.type\":\"model\",\"llama.attention.head_count\":24,\"llama.attention.head_count_kv\":8,\"llama.attention.key_length\":128,\"llama.attention.layer_norm_rms_epsilon\":0.00001,\"llama.attention.value_length\":128,\"llama.block_count\":28,\"llama.context_length\":131072,\"llama.embedding_length\":3072,\"llama.feed_forward_length\":8192,\"llama.rope.dimension_count\":128,\"llama.rope.freq_base\":500000,\"llama.vocab_size\":128256,\"tokenizer.ggml.bos_token_id\":128000,\"tokenizer.ggml.eos_token_id\":128009,\"tokenizer.ggml.merges\":null,\"tokenizer.ggml.model\":\"gpt2\",\"tokenizer.ggml.pre\":\"llama-bpe\",\"tokenizer.ggml.token_type\":null,\"tokenizer.ggml.tokens\":null},\"modified_at\":\"2024-12-31T11:53:14.529771974-05:00\"}"
headers:
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 31 Dec 2024 16:56:15 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1

View File

@@ -2,22 +2,22 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Useful for when you need to get a dummy result for a query.\n\nUse the following
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool(*args:
Any, **kwargs: Any) -> Any\nTool Description: dummy_tool(query: ''string'')
- Useful for when you need to get a dummy result for a query. \nTool Arguments:
{''query'': {''title'': ''Query'', ''type'': ''string''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [dummy_tool], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
question\n"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
to get a result for ''test query''\n\nThis is the expect criteria for your final
answer: The result from the dummy tool\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-3.5-turbo", "stop": ["\nObservation:"],
"stream": false}'
on it!\n\nThought:"}], "model": "gpt-3.5-turbo"}'
headers:
accept:
- application/json
@@ -26,13 +26,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1363'
- '1385'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -42,35 +45,32 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AmjTkjHtNtJfKGo6wS35grXEzfoqv\",\n \"object\":
\"chat.completion\",\n \"created\": 1736177928,\n \"model\": \"gpt-3.5-turbo-0125\",\n
content: "{\n \"id\": \"chatcmpl-AB7WUJAvkljJUylKUDdFnV9mN0X17\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213890,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I should use the dummy tool to get a
result for the 'test query'.\\n\\nAction: dummy_tool\\nAction Input: {\\\"query\\\":
\\\"test query\\\"}\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
271,\n \"completion_tokens\": 31,\n \"total_tokens\": 302,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
null\n}\n"
\"assistant\",\n \"content\": \"I now need to use the dummy tool to get
a result for 'test query'.\\n\\nAction: dummy_tool\\nAction Input: {\\\"query\\\":
\\\"test query\\\"}\\nObservation: Result from the dummy tool\\n\\nThought:
I now know the final answer\\n\\nFinal Answer: Result from the dummy tool\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 295,\n \"completion_tokens\":
58,\n \"total_tokens\": 353,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fdccc13af387bb2-ATL
- 8c85eb7b4f961cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -78,23 +78,245 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 06 Jan 2025 15:38:48 GMT
- Tue, 24 Sep 2024 21:38:11 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '585'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999668'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_8916660d6db980eb28e06716389f5789
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool(*args:
Any, **kwargs: Any) -> Any\nTool Description: dummy_tool(query: ''string'')
- Useful for when you need to get a dummy result for a query. \nTool Arguments:
{''query'': {''title'': ''Query'', ''type'': ''string''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [dummy_tool], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question\n"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
to get a result for ''test query''\n\nThis is the expect criteria for your final
answer: The result from the dummy tool\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "user", "content": "I did it wrong. Tried to
both perform Action and give a Final Answer at the same time, I must do one
or the other"}], "model": "gpt-3.5-turbo"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1531'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7WVumBpjMm6lKm9dYzm7bo2IVif\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213891,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to use the dummy_tool
to generate a result for the query 'test query'.\\n\\nAction: dummy_tool\\nAction
Input: {\\\"query\\\": \\\"test query\\\"}\\n\\nObservation: A dummy result
for the query 'test query'.\\n\\nThought: I now know the final answer\\n\\nFinal
Answer: A dummy result for the query 'test query'.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 326,\n \"completion_tokens\":
70,\n \"total_tokens\": 396,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85eb84ccba1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:38:12 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1356'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999639'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_69152ef136c5823858be1d75cafd7d54
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool(*args:
Any, **kwargs: Any) -> Any\nTool Description: dummy_tool(query: ''string'')
- Useful for when you need to get a dummy result for a query. \nTool Arguments:
{''query'': {''title'': ''Query'', ''type'': ''string''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [dummy_tool], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question\n"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
to get a result for ''test query''\n\nThis is the expect criteria for your final
answer: The result from the dummy tool\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "user", "content": "I did it wrong. Tried to
both perform Action and give a Final Answer at the same time, I must do one
or the other"}, {"role": "user", "content": "I did it wrong. Tried to both perform
Action and give a Final Answer at the same time, I must do one or the other"}],
"model": "gpt-3.5-turbo"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1677'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7WXrUKc139TroLpiu5eTSwlhaOI\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213893,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to use the dummy tool
to get a result for 'test query'.\\n\\nAction: \\nAction: dummy_tool\\nAction
Input: {\\\"query\\\": \\\"test query\\\"}\\n\\nObservation: Result from the
dummy tool.\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
357,\n \"completion_tokens\": 45,\n \"total_tokens\": 402,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85eb8f1c701cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:38:13 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=PdbRW9vzO7559czIqn0xmXQjbN8_vV_J7k1DlkB4d_Y-1736177928-1.0.1.1-7yNcyljwqHI.TVflr9ZnkS705G.K5hgPbHpxRzcO3ZMFi5lHCBPs_KB5pFE043wYzPmDIHpn6fu6jIY9mlNoLQ;
path=/; expires=Mon, 06-Jan-25 16:08:48 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=lOOz0FbrrPaRb4IFEeHNcj7QghHzxI1tTV2N0jD9icA-1736177928767-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
@@ -110,36 +332,53 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999686'
- '49999611'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_5b3e93f5d4e6ab8feef83dc26b6eb623
- req_afbc43100994c16954c17156d5b82d72
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Useful for when you need to get a dummy result for a query.\n\nUse the following
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool(*args:
Any, **kwargs: Any) -> Any\nTool Description: dummy_tool(query: ''string'')
- Useful for when you need to get a dummy result for a query. \nTool Arguments:
{''query'': {''title'': ''Query'', ''type'': ''string''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [dummy_tool], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
question\n"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
to get a result for ''test query''\n\nThis is the expect criteria for your final
answer: The result from the dummy tool\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "assistant", "content": "I should use the dummy
tool to get a result for the ''test query''.\n\nAction: dummy_tool\nAction Input:
{\"query\": \"test query\"}\nObservation: Dummy result for: test query"}], "model":
"gpt-3.5-turbo", "stop": ["\nObservation:"], "stream": false}'
on it!\n\nThought:"}, {"role": "user", "content": "I did it wrong. Tried to
both perform Action and give a Final Answer at the same time, I must do one
or the other"}, {"role": "user", "content": "I did it wrong. Tried to both perform
Action and give a Final Answer at the same time, I must do one or the other"},
{"role": "assistant", "content": "Thought: I need to use the dummy tool to get
a result for ''test query''.\n\nAction: \nAction: dummy_tool\nAction Input:
{\"query\": \"test query\"}\n\nObservation: Result from the dummy tool.\nObservation:
I encountered an error: Action ''Action: dummy_tool'' don''t exist, these are
the only available Actions:\nTool Name: dummy_tool(*args: Any, **kwargs: Any)
-> Any\nTool Description: dummy_tool(query: ''string'') - Useful for when you
need to get a dummy result for a query. \nTool Arguments: {''query'': {''title'':
''Query'', ''type'': ''string''}}\nMoving on then. I MUST either use a tool
(use one at time) OR give my best final answer not both at the same time. To
Use the following format:\n\nThought: you should always think about what to
do\nAction: the action to take, should be one of [dummy_tool]\nAction Input:
the input to the action, dictionary enclosed in curly braces\nObservation: the
result of the action\n... (this Thought/Action/Action Input/Result can repeat
N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer
must be the great and the most complete as possible, it must be outcome described\n\n
"}], "model": "gpt-3.5-turbo"}'
headers:
accept:
- application/json
@@ -148,16 +387,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1574'
- '2852'
content-type:
- application/json
cookie:
- __cf_bm=PdbRW9vzO7559czIqn0xmXQjbN8_vV_J7k1DlkB4d_Y-1736177928-1.0.1.1-7yNcyljwqHI.TVflr9ZnkS705G.K5hgPbHpxRzcO3ZMFi5lHCBPs_KB5pFE043wYzPmDIHpn6fu6jIY9mlNoLQ;
_cfuvid=lOOz0FbrrPaRb4IFEeHNcj7QghHzxI1tTV2N0jD9icA-1736177928767-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -167,34 +406,31 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AmjTkjtDnt98YQ3k4y71C523EQM9p\",\n \"object\":
\"chat.completion\",\n \"created\": 1736177928,\n \"model\": \"gpt-3.5-turbo-0125\",\n
content: "{\n \"id\": \"chatcmpl-AB7WYIfj6686sT8HJdwJDcdaEcJb3\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213894,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Final Answer: Dummy result for: test
query\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 315,\n \"completion_tokens\":
9,\n \"total_tokens\": 324,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
null\n}\n"
\"assistant\",\n \"content\": \"Thought: I need to use the dummy tool
to get a result for 'test query'.\\n\\nAction: dummy_tool\\nAction Input: {\\\"query\\\":
\\\"test query\\\"}\\n\\nObservation: Result from the dummy tool.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 629,\n \"completion_tokens\":
42,\n \"total_tokens\": 671,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fdccc171b647bb2-ATL
- 8c85eb943bca1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -202,7 +438,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 06 Jan 2025 15:38:49 GMT
- Tue, 24 Sep 2024 21:38:14 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -211,12 +447,10 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '249'
- '654'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -228,13 +462,144 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999643'
- '49999332'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_cdc7b25a3877bb9a7cb7c6d2645ff447
- req_005a34569e834bf029582d141f16a419
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool(*args:
Any, **kwargs: Any) -> Any\nTool Description: dummy_tool(query: ''string'')
- Useful for when you need to get a dummy result for a query. \nTool Arguments:
{''query'': {''title'': ''Query'', ''type'': ''string''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [dummy_tool], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question\n"}, {"role": "user", "content": "\nCurrent Task: Use the dummy tool
to get a result for ''test query''\n\nThis is the expect criteria for your final
answer: The result from the dummy tool\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "user", "content": "I did it wrong. Tried to
both perform Action and give a Final Answer at the same time, I must do one
or the other"}, {"role": "user", "content": "I did it wrong. Tried to both perform
Action and give a Final Answer at the same time, I must do one or the other"},
{"role": "assistant", "content": "Thought: I need to use the dummy tool to get
a result for ''test query''.\n\nAction: \nAction: dummy_tool\nAction Input:
{\"query\": \"test query\"}\n\nObservation: Result from the dummy tool.\nObservation:
I encountered an error: Action ''Action: dummy_tool'' don''t exist, these are
the only available Actions:\nTool Name: dummy_tool(*args: Any, **kwargs: Any)
-> Any\nTool Description: dummy_tool(query: ''string'') - Useful for when you
need to get a dummy result for a query. \nTool Arguments: {''query'': {''title'':
''Query'', ''type'': ''string''}}\nMoving on then. I MUST either use a tool
(use one at time) OR give my best final answer not both at the same time. To
Use the following format:\n\nThought: you should always think about what to
do\nAction: the action to take, should be one of [dummy_tool]\nAction Input:
the input to the action, dictionary enclosed in curly braces\nObservation: the
result of the action\n... (this Thought/Action/Action Input/Result can repeat
N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer
must be the great and the most complete as possible, it must be outcome described\n\n
"}, {"role": "assistant", "content": "Thought: I need to use the dummy tool
to get a result for ''test query''.\n\nAction: dummy_tool\nAction Input: {\"query\":
\"test query\"}\n\nObservation: Result from the dummy tool.\nObservation: Dummy
result for: test query"}], "model": "gpt-3.5-turbo"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '3113'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7WZFqqZYUEyJrmbLJJEcylBQAwb\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213895,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Final Answer: Dummy result for: test
query\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 684,\n \"completion_tokens\":
9,\n \"total_tokens\": 693,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85eb9aee421cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:38:15 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '297'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999277'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_5da3c303ae34eb8a1090f134d409f97c
http_version: HTTP/1.1
status_code: 200
version: 1

File diff suppressed because it is too large Load Diff

View File

@@ -1,87 +1,4 @@
interactions:
- request:
body: !!binary |
CqcXCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/hYKEgoQY3Jld2FpLnRl
bGVtZXRyeRJ5ChBuJJtOdNaB05mOW/p3915eEgj2tkAd3rZcASoQVG9vbCBVc2FnZSBFcnJvcjAB
OYa7/URvKBUYQUpcFEVvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoPCgNsbG0SCAoG
Z3B0LTRvegIYAYUBAAEAABLJBwoQifhX01E5i+5laGdALAlZBBIIBuGM1aN+OPgqDENyZXcgQ3Jl
YXRlZDABORVGruBvKBUYQaipwOBvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5w
eXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogN2U2NjA4OTg5ODU5YTY3ZWVj
ODhlZWY3ZmNlODUyMjVKMQoHY3Jld19pZBImCiRiOThiNWEwMC01YTI1LTQxMDctYjQwNS1hYmYz
MjBhOGYzYThKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAA
ShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgB
SuQCCgtjcmV3X2FnZW50cxLUAgrRAlt7ImtleSI6ICIyMmFjZDYxMWU0NGVmNWZhYzA1YjUzM2Q3
NWU4ODkzYiIsICJpZCI6ICJkNWIyMzM1YS0yMmIyLTQyZWEtYmYwNS03OTc3NmU3MmYzOTIiLCAi
cm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
MCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsi
Z2V0IGdyZWV0aW5ncyJdfV1KkgIKCmNyZXdfdGFza3MSgwIKgAJbeyJrZXkiOiAiYTI3N2IzNGIy
YzE0NmYwYzU2YzVlMTM1NmU4ZjhhNTciLCAiaWQiOiAiMjJiZWMyMzEtY2QyMS00YzU4LTgyN2Ut
MDU4MWE4ZjBjMTExIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJhZ2VudF9rZXkiOiAiMjJh
Y2Q2MTFlNDRlZjVmYWMwNWI1MzNkNzVlODg5M2IiLCAidG9vbHNfbmFtZXMiOiBbImdldCBncmVl
dGluZ3MiXX1degIYAYUBAAEAABKOAgoQ5WYoxRtTyPjge4BduhL0rRIIv2U6rvWALfwqDFRhc2sg
Q3JlYXRlZDABOX068uBvKBUYQZkv8+BvKBUYSi4KCGNyZXdfa2V5EiIKIDdlNjYwODk4OTg1OWE2
N2VlYzg4ZWVmN2ZjZTg1MjI1SjEKB2NyZXdfaWQSJgokYjk4YjVhMDAtNWEyNS00MTA3LWI0MDUt
YWJmMzIwYThmM2E4Si4KCHRhc2tfa2V5EiIKIGEyNzdiMzRiMmMxNDZmMGM1NmM1ZTEzNTZlOGY4
YTU3SjEKB3Rhc2tfaWQSJgokMjJiZWMyMzEtY2QyMS00YzU4LTgyN2UtMDU4MWE4ZjBjMTExegIY
AYUBAAEAABKQAQoQXyeDtJDFnyp2Fjk9YEGTpxIIaNE7gbhPNYcqClRvb2wgVXNhZ2UwATkaXTvj
bygVGEGvx0rjbygVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKHAoJdG9vbF9uYW1lEg8K
DUdldCBHcmVldGluZ3NKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAABLVBwoQMWfznt0qwauEzl7T
UOQxRBII9q+pUS5EdLAqDENyZXcgQ3JlYXRlZDABORONPORvKBUYQSAoS+RvKBUYShoKDmNyZXdh
aV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19r
ZXkSIgogYzMwNzYwMDkzMjY3NjE0NDRkNTdjNzFkMWRhM2YyN2NKMQoHY3Jld19pZBImCiQ3OTQw
MTkyNS1iOGU5LTQ3MDgtODUzMC00NDhhZmEzYmY4YjBKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVl
bnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVj
cmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSuoCCgtjcmV3X2FnZW50cxLaAgrXAlt7ImtleSI6ICI5
OGYzYjFkNDdjZTk2OWNmMDU3NzI3Yjc4NDE0MjVjZCIsICJpZCI6ICI5OTJkZjYyZi1kY2FiLTQy
OTUtOTIwNi05MDBkNDExNGIxZTkiLCAicm9sZSI6ICJGcmllbmRseSBOZWlnaGJvciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsiZGVjaWRlIGdyZWV0aW5ncyJdfV1KmAIKCmNyZXdf
dGFza3MSiQIKhgJbeyJrZXkiOiAiODBkN2JjZDQ5MDk5MjkwMDgzODMyZjBlOTgzMzgwZGYiLCAi
aWQiOiAiMmZmNjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3IiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJGcmll
bmRseSBOZWlnaGJvciIsICJhZ2VudF9rZXkiOiAiOThmM2IxZDQ3Y2U5NjljZjA1NzcyN2I3ODQx
NDI1Y2QiLCAidG9vbHNfbmFtZXMiOiBbImRlY2lkZSBncmVldGluZ3MiXX1degIYAYUBAAEAABKO
AgoQnjTp5boK7/+DQxztYIpqihIIgGnMUkBtzHEqDFRhc2sgQ3JlYXRlZDABOcpYcuRvKBUYQalE
c+RvKBUYSi4KCGNyZXdfa2V5EiIKIGMzMDc2MDA5MzI2NzYxNDQ0ZDU3YzcxZDFkYTNmMjdjSjEK
B2NyZXdfaWQSJgokNzk0MDE5MjUtYjhlOS00NzA4LTg1MzAtNDQ4YWZhM2JmOGIwSi4KCHRhc2tf
a2V5EiIKIDgwZDdiY2Q0OTA5OTI5MDA4MzgzMmYwZTk4MzM4MGRmSjEKB3Rhc2tfaWQSJgokMmZm
NjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3egIYAYUBAAEAABKTAQoQ26H9pLUgswDN
p9XhJwwL6BIIx3bw7mAvPYwqClRvb2wgVXNhZ2UwATmy7NPlbygVGEEvb+HlbygVGEoaCg5jcmV3
YWlfdmVyc2lvbhIICgYwLjg2LjBKHwoJdG9vbF9uYW1lEhIKEERlY2lkZSBHcmVldGluZ3NKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '2986'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
@@ -105,20 +22,18 @@ interactions:
- '824'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -132,8 +47,8 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZLLrWi8ZASpP9bz6HaCV7xBIn\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AaqIIsTxhvf75xvuu7gQScIlRSKbW\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344190,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
@@ -142,12 +57,12 @@ interactions:
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
\"fp_0705bf87c0\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa83deca756b-SEA
- 8ece8cfc3b1f4532-ATL
Connection:
- keep-alive
Content-Encoding:
@@ -155,14 +70,14 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
- Wed, 04 Dec 2024 20:29:50 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw;
path=/; expires=Fri, 27-Dec-24 22:44:53 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
path=/; expires=Wed, 04-Dec-24 20:59:50 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
- _cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -175,7 +90,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '404'
- '313'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -193,7 +108,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_6ac84634bff9193743c4b0911c09b4a6
- req_9fd9a8ee688045dcf7ac5f6fdf689372
http_version: HTTP/1.1
status_code: 200
- request:
@@ -216,20 +131,20 @@ interactions:
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
_cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -243,8 +158,8 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZNlWdrrPZhq0MJDqd16sMuQEJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AaqIIaQlLyoyPmk909PvAIfA2TmJL\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344190,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"True\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
@@ -253,12 +168,12 @@ interactions:
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
\"fp_0705bf87c0\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa87094f756b-SEA
- 8ece8d060b5e4532-ATL
Connection:
- keep-alive
Content-Encoding:
@@ -266,7 +181,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
- Wed, 04 Dec 2024 20:29:50 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -280,7 +195,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '156'
- '375'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -298,7 +213,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ec74bef2a9ef7b2144c03fd7f7bbeab0
- req_be7cb475e0859a82c37ee3f2871ea5ea
http_version: HTTP/1.1
status_code: 200
- request:
@@ -327,20 +242,20 @@ interactions:
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
_cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -354,23 +269,22 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZGv4f3h7GDdhyOy9G0sB1lRgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AaqIJAAxpVfUOdrsgYKHwfRlHv4RS\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344191,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I understand the feedback and
will adjust my response accordingly. \\nFinal Answer: Hello\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
18,\n \"total_tokens\": 206,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer
\ \\nFinal Answer: Hello\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
188,\n \"completion_tokens\": 14,\n \"total_tokens\": 202,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
\"fp_0705bf87c0\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa88cac4756b-SEA
- 8ece8d090fc34532-ATL
Connection:
- keep-alive
Content-Encoding:
@@ -378,7 +292,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
- Wed, 04 Dec 2024 20:29:51 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -392,7 +306,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '358'
- '484'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -410,7 +324,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ae1ab6b206d28ded6fee3c83ed0c2ab7
- req_5bf4a565ad6c2567a1ed204ecac89134
http_version: HTTP/1.1
status_code: 200
- request:
@@ -432,20 +346,20 @@ interactions:
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
_cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -459,8 +373,8 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtaiHL4TY8Dssk0j2miqmjrzquy\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337694,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AaqIJqyG8vl9mxj2qDPZgaxyNLLIq\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344191,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"False\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
@@ -469,12 +383,12 @@ interactions:
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
\"fp_0705bf87c0\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa8bdd26756b-SEA
- 8ece8d0cfdeb4532-ATL
Connection:
- keep-alive
Content-Encoding:
@@ -482,7 +396,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
- Wed, 04 Dec 2024 20:29:51 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -496,7 +410,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '184'
- '341'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -514,7 +428,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_652891f79c1104a7a8436275d78a69f1
- req_5554bade8ceda00cf364b76a51b708ff
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -2,23 +2,23 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: The final answer is
42. But don''t give it yet, instead keep using the `get_final_answer` tool over
and over until you''re told you can give your final answer.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"],
"stream": false}'
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: The final answer is 42. But don''t give it yet, instead keep
using the `get_final_answer` tool over and over until you''re told you can give
your final answer.\n\nThis is the expect criteria for your final answer: The
final answer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o"}'
headers:
accept:
- application/json
@@ -27,139 +27,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1440'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnAdPHapYzkPkClCzFaWzfCAUHlWI\",\n \"object\":
\"chat.completion\",\n \"created\": 1736282315,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I need to use the `get_final_answer`
tool and then keep using it repeatedly as instructed. \\n\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
285,\n \"completion_tokens\": 31,\n \"total_tokens\": 316,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fe6c096ee70ed8c-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 07 Jan 2025 20:38:36 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=hkH74Rv9bMDMhhK.Ep.9blvKIwXeSSwlCoTNGk9qVpA-1736282316-1.0.1.1-5PAsOPpVEfTNNy5DYRlLH1f4caHJArumiloWf.L51RQPWN3uIWsBSuhLVbNQDYVCQb9RQK8W5DcXv5Jq9FvsLA;
path=/; expires=Tue, 07-Jan-25 21:08:36 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=vqZ5X0AXIJfzp5UJSFyTmaCVjA.L8Yg35b.ijZFAPM4-1736282316289-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '883'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999665'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_00de12bc6822ef095f4f368aae873f31
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: The final answer is
42. But don''t give it yet, instead keep using the `get_final_answer` tool over
and over until you''re told you can give your final answer.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I need to
use the `get_final_answer` tool and then keep using it repeatedly as instructed.
\n\nAction: get_final_answer\nAction Input: {}\nObservation: 42"}], "model":
"gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1632'
- '1452'
content-type:
- application/json
cookie:
- __cf_bm=hkH74Rv9bMDMhhK.Ep.9blvKIwXeSSwlCoTNGk9qVpA-1736282316-1.0.1.1-5PAsOPpVEfTNNy5DYRlLH1f4caHJArumiloWf.L51RQPWN3uIWsBSuhLVbNQDYVCQb9RQK8W5DcXv5Jq9FvsLA;
_cfuvid=vqZ5X0AXIJfzp5UJSFyTmaCVjA.L8Yg35b.ijZFAPM4-1736282316289-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -169,159 +46,30 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnAdQKGW3Q8LUCmphL7hkavxi4zWB\",\n \"object\":
\"chat.completion\",\n \"created\": 1736282316,\n \"model\": \"gpt-4o-2024-08-06\",\n
content: "{\n \"id\": \"chatcmpl-AB7NlDmtLHCfUZJCFVIKeV5KMyQfX\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213349,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I should continue using the `get_final_answer`
tool as per the instructions.\\n\\nAction: get_final_answer\\nAction Input:
{}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 324,\n \"completion_tokens\":
26,\n \"total_tokens\": 350,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fe6c09e6c69ed8c-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 07 Jan 2025 20:38:37 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '542'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999627'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_6844467024f67bb1477445b1a8a01761
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: The final answer is
42. But don''t give it yet, instead keep using the `get_final_answer` tool over
and over until you''re told you can give your final answer.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I need to
use the `get_final_answer` tool and then keep using it repeatedly as instructed.
\n\nAction: get_final_answer\nAction Input: {}\nObservation: 42"}, {"role":
"assistant", "content": "I should continue using the `get_final_answer` tool
as per the instructions.\n\nAction: get_final_answer\nAction Input: {}\nObservation:
I tried reusing the same input, I must stop using this action input. I''ll try
something else instead."}], "model": "gpt-4o", "stop": ["\nObservation:"], "stream":
false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1908'
content-type:
- application/json
cookie:
- __cf_bm=hkH74Rv9bMDMhhK.Ep.9blvKIwXeSSwlCoTNGk9qVpA-1736282316-1.0.1.1-5PAsOPpVEfTNNy5DYRlLH1f4caHJArumiloWf.L51RQPWN3uIWsBSuhLVbNQDYVCQb9RQK8W5DcXv5Jq9FvsLA;
_cfuvid=vqZ5X0AXIJfzp5UJSFyTmaCVjA.L8Yg35b.ijZFAPM4-1736282316289-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnAdR2lKFEVaDbfD9qaF0Tts0eVMt\",\n \"object\":
\"chat.completion\",\n \"created\": 1736282317,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I should persist with using the `get_final_answer`
tool.\\n\\nAction: get_final_answer\\nAction Input: {}\",\n \"refusal\":
\"assistant\",\n \"content\": \"Thought: I need to use the provided tool
as instructed.\\n\\nAction: get_final_answer\\nAction Input: {}\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 378,\n \"completion_tokens\":
23,\n \"total_tokens\": 401,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 303,\n \"completion_tokens\":
22,\n \"total_tokens\": 325,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fe6c0a2ce3ded8c-ATL
- 8c85de473ae11cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -329,7 +77,7 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 07 Jan 2025 20:38:37 GMT
- Tue, 24 Sep 2024 21:29:10 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -338,12 +86,10 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '492'
- '489'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -355,59 +101,273 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999567'
- '29999651'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_198e698a8bc7eea092ea32b83cc4304e
- req_de70a4dc416515dda4b2ad48bde52f93
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: The final answer is
42. But don''t give it yet, instead keep using the `get_final_answer` tool over
and over until you''re told you can give your final answer.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I need to
use the `get_final_answer` tool and then keep using it repeatedly as instructed.
\n\nAction: get_final_answer\nAction Input: {}\nObservation: 42"}, {"role":
"assistant", "content": "I should continue using the `get_final_answer` tool
as per the instructions.\n\nAction: get_final_answer\nAction Input: {}\nObservation:
I tried reusing the same input, I must stop using this action input. I''ll try
something else instead."}, {"role": "assistant", "content": "I should persist
with using the `get_final_answer` tool.\n\nAction: get_final_answer\nAction
Input: {}\nObservation: I tried reusing the same input, I must stop using this
action input. I''ll try something else instead.\n\n\n\n\nYou ONLY have access
to the following tools, and should NEVER make up tools that are not listed here:\n\nTool
Name: get_final_answer\nTool Arguments: {}\nTool Description: Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nUse
the following format:\n\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple python
dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n\nOnce all necessary information is gathered:\n\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question"}, {"role": "assistant", "content": "I should persist with using
the `get_final_answer` tool.\n\nAction: get_final_answer\nAction Input: {}\nObservation:
I tried reusing the same input, I must stop using this action input. I''ll try
something else instead.\n\n\n\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: The final answer is 42. But don''t give it yet, instead keep
using the `get_final_answer` tool over and over until you''re told you can give
your final answer.\n\nThis is the expect criteria for your final answer: The
final answer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}, {"role":
"assistant", "content": "Thought: I need to use the provided tool as instructed.\n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1608'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7Nnz14hlEaTdabXodZCVU0UoDhk\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213351,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I must continue using the `get_final_answer`
tool as instructed.\\n\\nAction: get_final_answer\\nAction Input: {}\\nObservation:
42\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 333,\n \"completion_tokens\":
30,\n \"total_tokens\": 363,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85de5109701cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:29:11 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '516'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999620'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_5365ac0e5413bd9330c6ac3f68051bcf
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: The final answer is 42. But don''t give it yet, instead keep
using the `get_final_answer` tool over and over until you''re told you can give
your final answer.\n\nThis is the expect criteria for your final answer: The
final answer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}, {"role":
"assistant", "content": "Thought: I need to use the provided tool as instructed.\n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42"}, {"role": "assistant",
"content": "Thought: I must continue using the `get_final_answer` tool as instructed.\n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42\nObservation: 42"}], "model":
"gpt-4o"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1799'
content-type:
- application/json
cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7NoF5Gf597BGmOETPYGxN2eRFxd\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213352,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I must continue using the `get_final_answer`
tool to meet the requirements.\\n\\nAction: get_final_answer\\nAction Input:
{}\\nObservation: 42\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
372,\n \"completion_tokens\": 32,\n \"total_tokens\": 404,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85de587bc01cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:29:12 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '471'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999583'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_55550369b28e37f064296dbc41e0db69
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: The final answer is 42. But don''t give it yet, instead keep
using the `get_final_answer` tool over and over until you''re told you can give
your final answer.\n\nThis is the expect criteria for your final answer: The
final answer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}, {"role":
"assistant", "content": "Thought: I need to use the provided tool as instructed.\n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42"}, {"role": "assistant",
"content": "Thought: I must continue using the `get_final_answer` tool as instructed.\n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42\nObservation: 42"}, {"role":
"assistant", "content": "Thought: I must continue using the `get_final_answer`
tool to meet the requirements.\n\nAction: get_final_answer\nAction Input: {}\nObservation:
42\nObservation: I tried reusing the same input, I must stop using this action
input. I''ll try something else instead.\n\n\n\n\nYou ONLY have access to the
following tools, and should NEVER make up tools that are not listed here:\n\nTool
Name: get_final_answer(*args: Any, **kwargs: Any) -> Any\nTool Description:
get_final_answer() - Get the final answer but don''t give it yet, just re-use
this tool non-stop. \nTool Arguments: {}\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
@@ -416,8 +376,7 @@ interactions:
know the final answer\nFinal Answer: the final answer to the original input
question\n\nNow it''s time you MUST give your absolute best final answer. You''ll
ignore all previous instructions, stop using any tools, and just return your
absolute BEST Final answer."}], "model": "gpt-4o", "stop": ["\nObservation:"],
"stream": false}'
absolute BEST Final answer."}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -426,16 +385,16 @@ interactions:
connection:
- keep-alive
content-length:
- '4148'
- '3107'
content-type:
- application/json
cookie:
- __cf_bm=hkH74Rv9bMDMhhK.Ep.9blvKIwXeSSwlCoTNGk9qVpA-1736282316-1.0.1.1-5PAsOPpVEfTNNy5DYRlLH1f4caHJArumiloWf.L51RQPWN3uIWsBSuhLVbNQDYVCQb9RQK8W5DcXv5Jq9FvsLA;
_cfuvid=vqZ5X0AXIJfzp5UJSFyTmaCVjA.L8Yg35b.ijZFAPM4-1736282316289-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -445,34 +404,29 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnAdRu1aVdsOxxIqU6nqv5dIxwbvu\",\n \"object\":
\"chat.completion\",\n \"created\": 1736282317,\n \"model\": \"gpt-4o-2024-08-06\",\n
content: "{\n \"id\": \"chatcmpl-AB7Npl5ZliMrcSofDS1c7LVGSmmbE\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213353,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now know the final answer.\\nFinal
Answer: 42\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
831,\n \"completion_tokens\": 14,\n \"total_tokens\": 845,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I now know the final answer.\\n\\nFinal
Answer: The final answer is 42.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
642,\n \"completion_tokens\": 19,\n \"total_tokens\": 661,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fe6c0a68cc3ed8c-ATL
- 8c85de5fad921cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -480,7 +434,7 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 07 Jan 2025 20:38:38 GMT
- Tue, 24 Sep 2024 21:29:13 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -489,12 +443,10 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '429'
- '320'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -506,13 +458,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999037'
- '29999271'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_2552d63d3cbce15909481cc1fc9f36cc
- req_5eba25209fc7e12717cb7e042e7bb4c2
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,117 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRlxiTxduAVoXHHY58Fvfbll5IS\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458417,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: This is a test task, and the context or question from the coworker is
not specified. Therefore, my best effort would be to affirm my readiness to
answer accurately and in detail any question about Futel Football Club based
on the context described. If provided with specific information or questions,
I will ensure to respond comprehensively as required by my job directives.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 177,\n \"completion_tokens\":
82,\n \"total_tokens\": 259,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78bf7bd6cc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:40 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2263'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7c1a31da73cd103e9f410f908e59187f
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,119 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRrFJZGKw8cIEshvuW1PKwFZFKs\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458423,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Although you mentioned this being a \\\"Test task\\\" and haven't provided
a specific question regarding Futel Football Club, your request appears to involve
ensuring accuracy and detail in responses. For a proper answer about Futel,
I'd be ready to provide details about the club's history, management, players,
match schedules, and recent performance statistics. Remember to ask specific
questions to receive a targeted response. If this were a real context where
information was shared, I would respond precisely to what's been asked regarding
Futel Football Club.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 113,\n \"total_tokens\": 290,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c1d0ecdc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:47 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3097'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_179e1d56e2b17303e40480baffbc7b08
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,114 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRqgg7eiHnDi2DOqdk99fiqOboz\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458422,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Your best answer to your coworker asking you this, accounting for the
context shared. You MUST return the actual complete content as the final answer,
not a summary.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 44,\n \"total_tokens\": 221,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c164ad2c002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:43 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '899'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_9f5226208edb90a27987aaf7e0ca03d3
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,119 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRjmwH5mrykLxQhFwTqqTiDtuTf\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458415,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: As this is a test task, please note that Futel Football Club is fictional
and any specific details about it would not be available. However, if you have
specific questions or need information about a particular aspect of Futel or
any general football club inquiry, feel free to ask, and I'll do my best to
assist you with your query!\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 79,\n \"total_tokens\": 256,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78be5eebfc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:37 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
path=/; expires=Thu, 09-Jan-25 22:03:37 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2730'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_014478ba748f860d10ac250ca0ba824a
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,119 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRofLgmzWcDya5LILqYwIJYgFoq\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458420,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: As an official Futel Football Club infopoint, my responsibility is to
provide detailed and accurate information about the club. This includes answering
questions regarding team statistics, player performances, upcoming fixtures,
ticketing and fan zone details, club history, and community initiatives. Our
focus is to ensure that fans and stakeholders have access to the latest and
most precise information about the club's on and off-pitch activities. If there's
anything specific you need to know, just let me know, and I'll be more than
happy to assist!\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 115,\n \"total_tokens\": 292,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c066f37c002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:42 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2459'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a146dd27f040f39a576750970cca0f52
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,7 +1,7 @@
interactions:
- request:
body: '{"model": "llama3.2:3b", "prompt": "### User:\nRespond in 20 words. Which
model are you?\n\n", "options": {"stop": ["\nObservation:"]}, "stream": false}'
body: '{"model": "llama3.2:3b", "prompt": "### User:\nRespond in 20 words. Who
are you?\n\n", "options": {"stop": ["\nObservation:"]}, "stream": false}'
headers:
accept:
- '*/*'
@@ -10,24 +10,24 @@ interactions:
connection:
- keep-alive
content-length:
- '152'
- '144'
host:
- localhost:11434
user-agent:
- litellm/1.57.4
- litellm/1.56.4
method: POST
uri: http://localhost:11434/api/generate
response:
content: '{"model":"llama3.2:3b","created_at":"2025-01-10T18:37:01.552946Z","response":"I''m
an AI designed by Meta, leveraging large language models to provide information
and assist with various tasks.","done":true,"done_reason":"stop","context":[128006,9125,128007,271,38766,1303,33025,2696,25,6790,220,2366,18,271,128009,128006,882,128007,271,14711,2724,512,66454,304,220,508,4339,13,16299,1646,527,499,1980,128009,128006,78191,128007,271,40,2846,459,15592,6319,555,16197,11,77582,3544,4221,4211,311,3493,2038,323,7945,449,5370,9256,13],"total_duration":2721386667,"load_duration":838784333,"prompt_eval_count":39,"prompt_eval_duration":1462000000,"eval_count":22,"eval_duration":418000000}'
content: '{"model":"llama3.2:3b","created_at":"2024-12-31T16:57:54.063894Z","response":"I''m
an AI designed to provide information and assist with inquiries, while maintaining
a neutral and respectful tone always.","done":true,"done_reason":"stop","context":[128006,9125,128007,271,38766,1303,33025,2696,25,6790,220,2366,18,271,128009,128006,882,128007,271,14711,2724,512,66454,304,220,508,4339,13,10699,527,499,1980,128009,128006,78191,128007,271,40,2846,459,15592,6319,311,3493,2038,323,7945,449,44983,11,1418,20958,264,21277,323,49150,16630,2744,13],"total_duration":651386042,"load_duration":41061917,"prompt_eval_count":38,"prompt_eval_duration":204000000,"eval_count":23,"eval_duration":405000000}'
headers:
Content-Length:
- '683'
- '692'
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 10 Jan 2025 18:37:01 GMT
- Tue, 31 Dec 2024 16:57:54 GMT
http_version: HTTP/1.1
status_code: 200
- request:
@@ -46,7 +46,7 @@ interactions:
host:
- localhost:11434
user-agent:
- litellm/1.57.4
- litellm/1.56.4
method: POST
uri: http://localhost:11434/api/show
response:
@@ -440,7 +440,7 @@ interactions:
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 10 Jan 2025 18:37:01 GMT
- Tue, 31 Dec 2024 16:57:54 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1
@@ -461,7 +461,7 @@ interactions:
host:
- localhost:11434
user-agent:
- litellm/1.57.4
- litellm/1.56.4
method: POST
uri: http://localhost:11434/api/show
response:
@@ -855,7 +855,7 @@ interactions:
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 10 Jan 2025 18:37:01 GMT
- Tue, 31 Dec 2024 16:57:54 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1

View File

@@ -1,353 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Just say hi.\n\nThis is
the expect criteria for your final answer: Your greeting.\nyou MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '817'
content-type:
- application/json
cookie:
- _cfuvid=vqZ5X0AXIJfzp5UJSFyTmaCVjA.L8Yg35b.ijZFAPM4-1736282316289-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnSbv3ywhwedwS3YW9Crde6hpWpmK\",\n \"object\":
\"chat.completion\",\n \"created\": 1736351415,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hi!\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
154,\n \"completion_tokens\": 13,\n \"total_tokens\": 167,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fed579a4f76b058-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 08 Jan 2025 15:50:15 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=rdN2XYZhM9f2vDB8aOVGYgUHUzSuT.cP8ahngq.QTL0-1736351415-1.0.1.1-lVzOV8iFUHvbswld8xls4a8Ct38zv6Jyr.6THknDnVf3uGZMlgV6r5s10uTnHA2eIi07jJtj7vGopiOpU8qkvA;
path=/; expires=Wed, 08-Jan-25 16:20:15 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=PslIVDqXn7jd_NXBGdSU5kVFvzwCchKPRVe9LpQVdQA-1736351415895-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '416'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999817'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_97c93aa78417badc3f29306054eef79b
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role2. test backstory2\nYour
personal goal is: test goal2\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this tool non-stop.\n\nUse the following format:\n\nThought: you
should always think about what to do\nAction: the action to take, only one name
of [get_final_answer], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple python dictionary, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question"}, {"role": "user",
"content": "\nCurrent Task: NEVER give a Final Answer, unless you are told otherwise,
instead keep using the `get_final_answer` tool non-stop, until you must give
your best final answer\n\nThis is the expect criteria for your final answer:
The final answer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nThis is the context you''re working with:\nHi!\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1483'
content-type:
- application/json
cookie:
- _cfuvid=PslIVDqXn7jd_NXBGdSU5kVFvzwCchKPRVe9LpQVdQA-1736351415895-0.0.1.1-604800000;
__cf_bm=rdN2XYZhM9f2vDB8aOVGYgUHUzSuT.cP8ahngq.QTL0-1736351415-1.0.1.1-lVzOV8iFUHvbswld8xls4a8Ct38zv6Jyr.6THknDnVf3uGZMlgV6r5s10uTnHA2eIi07jJtj7vGopiOpU8qkvA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnSbwn8QaqAzfBVnzhTzIcDKykYTu\",\n \"object\":
\"chat.completion\",\n \"created\": 1736351416,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I should use the available tool to get
the final answer, as per the instructions. \\n\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
294,\n \"completion_tokens\": 28,\n \"total_tokens\": 322,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fed579dbd80b058-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 08 Jan 2025 15:50:17 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1206'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999655'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7b85f1e9b21b5e2385d8a322a8aab06c
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role2. test backstory2\nYour
personal goal is: test goal2\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this tool non-stop.\n\nUse the following format:\n\nThought: you
should always think about what to do\nAction: the action to take, only one name
of [get_final_answer], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple python dictionary, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question"}, {"role": "user",
"content": "\nCurrent Task: NEVER give a Final Answer, unless you are told otherwise,
instead keep using the `get_final_answer` tool non-stop, until you must give
your best final answer\n\nThis is the expect criteria for your final answer:
The final answer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nThis is the context you''re working with:\nHi!\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I should
use the available tool to get the final answer, as per the instructions. \n\nAction:
get_final_answer\nAction Input: {}\nObservation: 42"}], "model": "gpt-4o", "stop":
["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1666'
content-type:
- application/json
cookie:
- _cfuvid=PslIVDqXn7jd_NXBGdSU5kVFvzwCchKPRVe9LpQVdQA-1736351415895-0.0.1.1-604800000;
__cf_bm=rdN2XYZhM9f2vDB8aOVGYgUHUzSuT.cP8ahngq.QTL0-1736351415-1.0.1.1-lVzOV8iFUHvbswld8xls4a8Ct38zv6Jyr.6THknDnVf3uGZMlgV6r5s10uTnHA2eIi07jJtj7vGopiOpU8qkvA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnSbxXFL4NXuGjOX35eCjcWq456lA\",\n \"object\":
\"chat.completion\",\n \"created\": 1736351417,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now know the final answer\\nFinal
Answer: 42\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
330,\n \"completion_tokens\": 14,\n \"total_tokens\": 344,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8fed57a62955b058-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 08 Jan 2025 15:50:17 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '438'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999619'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_1cc65e999b352a54a4c42eb8be543545
http_version: HTTP/1.1
status_code: 200
version: 1

Some files were not shown because too many files have changed in this diff Show More