Compare commits


4 Commits

Author SHA1 Message Date
Devin AI
5cccf4f7f5 Fix import formatting in crew.py
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-05 22:39:54 +00:00
Devin AI
dd5f170f45 Implement reviewer suggestions: CLI validation, enhanced error handling, and test improvements
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-05 22:36:15 +00:00
Devin AI
6e8e066091 Fix cache expiration and concurrent test issues
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-05 22:35:39 +00:00
Devin AI
d5dfd5a1f5 Add Record/Replay functionality for offline processing (Issue #2759)
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-05 22:23:03 +00:00
76 changed files with 2173 additions and 11772 deletions

.github/security.md
View File

@@ -1,27 +1,19 @@
## CrewAI Security Vulnerability Reporting Policy
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
## Reporting a Vulnerability
Please do not report security vulnerabilities through public GitHub issues.
To report a vulnerability, please email us at security@crewai.com.
Please include the requested information listed below so that we can triage your report more quickly.
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
Email all vulnerability reports directly to:
**security@crewai.com**
Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible, depending on the complexity of the issue.
### Required Information
To help us quickly validate and remediate the issue, your report must include:
- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.

View File

@@ -5,29 +5,12 @@ on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
env:
TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Fetch Target Branch
run: git fetch origin $TARGET_BRANCH --depth=1
- name: Install Ruff
run: pip install ruff
- name: Get Changed Python Files
id: changed-files
- name: Install Requirements
run: |
merge_base=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
changed_files=$(git diff --name-only --diff-filter=ACMRTUB "$merge_base" | grep '\.py$' || true)
echo "files<<EOF" >> $GITHUB_OUTPUT
echo "$changed_files" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
pip install ruff
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" | tr " " "\n" | xargs -I{} ruff check "{}"
- name: Run Ruff Linter
run: ruff check

View File

@@ -2,3 +2,8 @@ exclude = [
"templates",
"__init__.py",
]
[lint]
select = [
"I", # isort rules
]

View File

@@ -504,7 +504,7 @@ This example demonstrates how to:
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
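For instance, a minimal sketch of wiring an agent to a local Ollama model (the model name and port here are assumptions; adjust them to your local setup):
```python
from crewai import Agent, LLM

# Assumes a local Ollama server with the llama3.1 model already pulled
ollama_llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

agent = Agent(
    role="Local Researcher",
    goal="Answer questions without calling external APIs",
    backstory="Runs entirely against a locally hosted model.",
    llm=ollama_llm,
)
```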
## How CrewAI Compares

View File

@@ -110,8 +110,6 @@ crewai reset-memories [OPTIONS]
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-kn, --knowledge`: Reset KNOWLEDGE storage
- `-akn, --agent-knowledge`: Reset AGENT KNOWLEDGE storage
- `-a, --all`: Reset ALL memories
Example:
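One plausible invocation using the flags above, clearing short-term memory and knowledge together:
```shell
crewai reset-memories --short --knowledge
```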

View File

@@ -27,7 +27,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
@@ -246,7 +246,7 @@ print(f"Token Usage: {crew_output.token_usage}")
You can see a real-time log of the crew execution by setting `output_log_file` to `True` (Boolean) or a `file_name` (str). Events can be logged as both `file_name.txt` and `file_name.json`.
If set to `True` (Boolean), logs are saved as `logs.txt`.
If `output_log_file` is set to `False` (Boolean) or `None`, no logs are written.
```python Code
# Save crew logs
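# A minimal sketch (assumed continuation of this truncated snippet;
# `researcher` and `research_task` are hypothetical and defined elsewhere):
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    output_log_file=True,  # True -> logs.txt; a name ending in .json -> JSON logs
)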

View File

@@ -75,12 +75,11 @@ class ExampleFlow(Flow):
flow = ExampleFlow()
flow.plot()
result = flow.kickoff()
print(f"Generated fun fact: {result}")
```
![Flow Visual image](/images/crewai-flow-1.png)
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
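For example, after kickoff you can read both the auto-generated identifier and any stored data back out of the state. A short sketch, assuming the unstructured dict-style state from the example above (where `generate_city` stores its result under the `"city"` key):
```python Code
flow = ExampleFlow()
result = flow.kickoff()

# The state carries an auto-generated UUID plus whatever the methods stored
print(f"Flow ID: {flow.state['id']}")
print(f"Generated city: {flow.state.get('city')}")
```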
@@ -147,7 +146,6 @@ class OutputExampleFlow(Flow):
flow = OutputExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print("---- Final Output ----")
@@ -160,10 +158,9 @@ Second method received: Output from first_method
```
</CodeGroup>
![Flow Visual image](/images/crewai-flow-2.png)
In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console. The `plot()` method will generate the HTML file, which will help you understand the flow.
The `kickoff()` method will return the final output, which is then printed to the console.
#### Accessing and Updating State
@@ -195,7 +192,6 @@ class StateExampleFlow(Flow[ExampleState]):
return self.state.message
flow = StateExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print(f"Final Output: {final_output}")
print("Final State:")
@@ -210,8 +206,6 @@ counter=2 message='Hello from first_method - updated by second_method'
</CodeGroup>
![Flow Visual image](/images/crewai-flow-2.png)
In this example, the state is updated by both `first_method` and `second_method`.
After the Flow has run, you can access the final state to see the updates made by these methods.
@@ -255,12 +249,9 @@ class UnstructuredExampleFlow(Flow):
flow = UnstructuredExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
![Flow Visual image](/images/crewai-flow-3.png)
**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.
**Key Points:**
@@ -311,8 +302,6 @@ flow = StructuredExampleFlow()
flow.kickoff()
```
![Flow Visual image](/images/crewai-flow-3.png)
**Key Points:**
- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
@@ -447,7 +436,6 @@ class OrExampleFlow(Flow):
flow = OrExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
@@ -458,8 +446,6 @@ Logger: Hello from the second method
</CodeGroup>
![Flow Visual image](/images/crewai-flow-4.png)
When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
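Condensed, that wiring looks like the sketch below (reconstructed from the truncated example above):
```python Code
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):
    @start()
    def start_method(self):
        return "Hello from the start method"

    @listen(start_method)
    def second_method(self, result):
        return "Hello from the second method"

    @listen(or_(start_method, second_method))
    def logger(self, result):
        # Fires each time EITHER watched method emits an output
        print(f"Logger: {result}")

OrExampleFlow().kickoff()
```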
@@ -488,7 +474,6 @@ class AndExampleFlow(Flow):
print(self.state)
flow = AndExampleFlow()
flow.plot()
flow.kickoff()
```
@@ -499,8 +484,6 @@ flow.kickoff()
</CodeGroup>
![Flow Visual image](/images/crewai-flow-5.png)
When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
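The equivalent sketch for `and_` (again reconstructed from the truncated example above):
```python Code
from crewai.flow.flow import Flow, and_, listen, start

class AndExampleFlow(Flow):
    @start()
    def start_method(self):
        self.state["greeting"] = "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        self.state["joke"] = "Hello from the second method"

    @listen(and_(start_method, second_method))
    def logger(self):
        # Fires only once BOTH watched methods have emitted an output
        print("---- Logger ----")
        print(self.state)

AndExampleFlow().kickoff()
```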
@@ -544,7 +527,6 @@ class RouterFlow(Flow[ExampleState]):
flow = RouterFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
@@ -556,8 +538,6 @@ Fourth method running
</CodeGroup>
![Flow Visual image](/images/crewai-flow-6.png)
In the above example, the `start_method` generates a random boolean value and sets it in the state.
The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
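A condensed sketch of the routing pattern described above:
```python Code
import random

from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, router, start

class ExampleState(BaseModel):
    success_flag: bool = False

class RouterFlow(Flow[ExampleState]):
    @start()
    def start_method(self):
        self.state.success_flag = random.choice([True, False])

    @router(start_method)
    def second_method(self):
        # The returned label selects which listener runs next
        return "success" if self.state.success_flag else "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")

RouterFlow().kickoff()
```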
@@ -661,7 +641,6 @@ class MarketResearchFlow(Flow[MarketResearchState]):
# Usage example
async def run_flow():
flow = MarketResearchFlow()
flow.plot("MarketResearchFlowPlot")
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
@@ -671,8 +650,6 @@ if __name__ == "__main__":
asyncio.run(run_flow())
```
![Flow Visual image](/images/crewai-flow-7.png)
This example demonstrates several key features of using Agents in flows:
1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
@@ -769,16 +746,13 @@ def kickoff():
def plot():
poem_flow = PoemFlow()
poem_flow.plot("PoemFlowPlot")
poem_flow.plot()
if __name__ == "__main__":
kickoff()
plot()
```
In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method, and the flow plot is generated by the `plot()` method.
![Flow Visual image](/images/crewai-flow-8.png)
### Running the Flow

View File

@@ -397,53 +397,6 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
John is 30 years old and lives in San Francisco.
```
</CodeGroup>
## Query Rewriting
CrewAI implements an intelligent query rewriting mechanism to optimize knowledge retrieval. When an agent needs to search through knowledge sources, the raw task prompt is automatically transformed into a more effective search query.
### How Query Rewriting Works
1. When an agent executes a task with knowledge sources available, the `_get_knowledge_search_query` method is triggered
2. The agent's LLM is used to transform the original task prompt into an optimized search query
3. This optimized query is then used to retrieve relevant information from knowledge sources
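Conceptually, this is one extra LLM round-trip before retrieval. A simplified sketch of the idea (the prompt text here is illustrative, not the exact prompt slices the library uses):
```python
def rewrite_query(llm, task_prompt: str) -> str:
    # Ask the agent's LLM to distill the task into a focused search query
    system_prompt = (
        "Rewrite the task into a short, specific search query. "
        "Drop output-format instructions and return only the query."
    )
    return llm.call([
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task_prompt},
    ])
```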
### Benefits of Query Rewriting
<CardGroup cols={2}>
<Card title="Improved Retrieval Accuracy" icon="bullseye-arrow">
By focusing on key concepts and removing irrelevant content, query rewriting helps retrieve more relevant information.
</Card>
<Card title="Context Awareness" icon="brain">
The rewritten queries are designed to be more specific and context-aware for vector database retrieval.
</Card>
</CardGroup>
### Implementation Details
Query rewriting happens transparently using a system prompt that instructs the LLM to:
- Focus on key words of the intended task
- Make the query more specific and context-aware
- Remove irrelevant content like output format instructions
- Generate only the rewritten query without preamble or postamble
<Tip>
This mechanism is fully automatic and requires no configuration from users. The agent's LLM is used to perform the query rewriting, so using a more capable LLM can improve the quality of rewritten queries.
</Tip>
### Example
```python
# Original task prompt
task_prompt = "Answer the following questions about the user's favorite movies: What movie did John watch last week? Format your answer in JSON."
# Behind the scenes, this might be rewritten as:
rewritten_query = "What movies did John watch last week?"
```
The rewritten query is more focused on the core information need and removes irrelevant instructions about output formatting.
## Clearing Knowledge
If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.
@@ -497,13 +450,6 @@ crew = Crew(
result = crew.kickoff(
inputs={"question": "What is the storage capacity of the XPS 13?"}
)
# Resetting the agent specific knowledge via crew object
crew.reset_memories(command_type = 'agent_knowledge')
# Resetting the agent specific knowledge via CLI
crewai reset-memories --agent-knowledge
crewai reset-memories -akn
```
<Info>
@@ -707,11 +653,4 @@ recent_news = SpaceNewsKnowledgeSource(
- Configure appropriate embedding models
- Consider using local embedding providers for faster processing
</Accordion>
<Accordion title="One Time Knowledge">
- With the typical file structure provided by CrewAI, knowledge sources are embedded every time the kickoff is triggered.
- If the knowledge sources are large, this leads to inefficiency and increased latency, as the same data is embedded each time.
- To resolve this, directly initialize the `knowledge` parameter instead of the `knowledge_sources` parameter (a sketch follows below).
- See the [GitHub issue](https://github.com/crewAIInc/crewAI/issues/2755) for the full context.
</Accordion>
</AccordionGroup>
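A rough sketch of the workaround mentioned above; the `Knowledge` constructor arguments are assumptions based on the issue discussion, not a verified API:
```python
from crewai import Agent
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Build the knowledge object once so its sources are embedded a single time
content_source = StringKnowledgeSource(content="Users like John live in San Francisco.")
knowledge = Knowledge(collection_name="user_facts", sources=[content_source])

agent = Agent(
    role="User Researcher",
    goal="Answer questions about users",
    backstory="Knows the user base well.",
    knowledge=knowledge,  # assumed: instead of knowledge_sources=[content_source]
)
```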

View File

@@ -27,19 +27,23 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
</Card>
</CardGroup>
## Setting up your LLM
## Setting Up Your LLM
There are different places in CrewAI code where you can specify the model to use. Once you specify the model you are using, you will need to provide the configuration (like an API key) for each of the model providers you use. See the [provider configuration examples](#provider-configuration-examples) section for your provider.
There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set the model in your environment directly, through an `.env` file or in your app code. If you used `crewai create` to bootstrap your project, it will be set already.
The simplest way to get started. Set these variables in your environment:
```bash .env
MODEL=model-id # e.g. gpt-4o, gemini-2.0-flash, claude-3-sonnet-...
```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>
# Be sure to set your API keys here too. See the Provider
# section below.
# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini # Default if not set
# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
```
<Warning>
@@ -49,13 +53,13 @@ There are different places in CrewAI code where you can specify the model to use
<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
```yaml agents.yaml {6}
```yaml
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis
backstory: A dedicated research professional with years of experience
verbose: true
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
llm: openai/gpt-4o-mini # your model here
# (see provider configuration examples below for more)
```
@@ -70,23 +74,23 @@ There are different places in CrewAI code where you can specify the model to use
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:
```python {4,8}
```python
from crewai import LLM
# Basic configuration
llm = LLM(model="model-id-here") # gpt-4o, gemini-2.0-flash, anthropic/claude...
llm = LLM(model="gpt-4")
# Advanced configuration with detailed parameters
llm = LLM(
model="model-id-here", # gpt-4o, gemini-2.0-flash, anthropic/claude...
model="gpt-4o-mini",
temperature=0.7, # Higher for more creative outputs
timeout=120, # Seconds to wait for response
max_tokens=4000, # Maximum length of response
top_p=0.9, # Nucleus sampling parameter
frequency_penalty=0.1, # Reduce repetition
presence_penalty=0.1, # Encourage topic diversity
response_format={"type": "json"}, # For structured outputs
seed=42 # For reproducible results
)
```
@@ -106,6 +110,7 @@ There are different places in CrewAI code where you can specify the model to use
## Provider Configuration Examples
CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
@@ -169,55 +174,19 @@ In this section, you'll find detailed examples that help you select, configure,
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
<Accordion title="Google">
Set the following environment variables in your `.env` file:
```toml .env
```toml Code
# Option 1: Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Example usage in your CrewAI project:
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
)
```
### Gemini models
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load them with the following code:
```python Code
import json
@@ -241,18 +210,14 @@ In this section, you'll find detailed examples that help you select, configure,
vertex_credentials=vertex_credentials_json
)
```
Google offers a range of powerful models optimized for different use cases:
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| Model | Context Window | Best For |
|-----------------------|----------------|------------------------------------------------------------------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>
<Accordion title="Azure">
@@ -418,7 +383,7 @@ In this section, you'll find detailed examples that help you select, configure,
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
@@ -442,19 +407,19 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
Here is a step-by-step guide to setting up a local NVIDIA NIM model:
1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
3. Configure your crewai local models:
```python Code
from crewai.llm import LLM
@@ -476,7 +441,7 @@ In this section, you'll find detailed examples that help you select, configure,
config=self.agents_config['researcher'], # type: ignore[index]
llm=local_nvidia_nim_llm
)
# ...
```
</Accordion>
@@ -672,19 +637,19 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
</Tab>
<Tab title="Event Handling">
CrewAI emits events for each chunk received during streaming:
```python
from crewai import LLM
from crewai.utilities.events import EventHandler, LLMStreamChunkEvent
class MyEventHandler(EventHandler):
def on_llm_stream_chunk(self, event: LLMStreamChunkEvent):
# Process each chunk as it arrives
print(f"Received chunk: {event.chunk}")
# Register the event handler
from crewai.utilities.events import crewai_event_bus
crewai_event_bus.register_handler(MyEventHandler())
@@ -820,7 +785,7 @@ Learn how to get the most out of your LLM configuration:
<Tip>
Use larger context models for extensive tasks
</Tip>
```python
# Large context model
llm = LLM(model="openai/gpt-4o") # 128K tokens

View File

@@ -679,7 +679,6 @@ crewai reset-memories [OPTIONS]
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-kn`, `--knowledge` | Reset KNOWLEDGE storage. | Flag (boolean) | False |
| `-akn`, `--agent-knowledge` | Reset AGENT KNOWLEDGE storage. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
Note: To use the CLI command, you need to have your crew in a file called `crew.py` in the same directory.
@@ -717,11 +716,9 @@ my_crew.reset_memories(command_type = 'all') # Resets all the memory
| `entities` | Reset ENTITIES memory. |
| `kickoff_outputs` | Reset LATEST KICKOFF TASK OUTPUTS. |
| `knowledge` | Reset KNOWLEDGE memory. |
| `agent_knowledge` | Reset AGENT KNOWLEDGE memory. |
| `all` | Reset ALL memories. |
## Benefits of Using CrewAI's Memory System
- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.

View File

@@ -35,8 +35,7 @@ Let's get started building your first crew!
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/concepts/llms#setting-up-your-llm)
2. Set up your OpenAI API key in your environment variables
3. Basic understanding of Python
## Step 1: Create a New CrewAI Project
@@ -93,8 +92,7 @@ For our research crew, we'll create two agents:
1. A **researcher** who excels at finding and organizing information
2. An **analyst** who can interpret research findings and create insightful reports
Let's modify the `agents.yaml` file to define these specialized agents. Be sure
to set `llm` to the provider you are using.
Let's modify the `agents.yaml` file to define these specialized agents:
```yaml
# src/research_crew/config/agents.yaml
@@ -109,7 +107,7 @@ researcher:
finding relevant information from various sources. You excel at
organizing information in a clear and structured manner, making
complex topics accessible to others.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
llm: openai/gpt-4o-mini
analyst:
role: >
@@ -122,7 +120,7 @@ analyst:
and technical writing. You have a talent for identifying patterns
and extracting meaningful insights from research data, then
communicating those insights effectively through well-crafted reports.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
llm: openai/gpt-4o-mini
```
Notice how each agent has a distinct role, goal, and backstory. These elements aren't just descriptive - they actively shape how the agent approaches its tasks. By crafting these carefully, you can create agents with specialized skills and perspectives that complement each other.
@@ -284,12 +282,12 @@ This script prepares the environment, specifies our research topic, and kicks of
Create a `.env` file in your project root with your API keys:
```sh
```
OPENAI_API_KEY=your_openai_api_key
SERPER_API_KEY=your_serper_api_key
# Add your provider's API key here too.
```
See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
You can get a Serper API key from [Serper.dev](https://serper.dev/).
## Step 8: Install Dependencies

View File

@@ -45,8 +45,7 @@ Let's dive in and build your first flow!
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/concepts/llms#setting-up-your-llm)
2. Set up your OpenAI API key in your environment variables
3. Basic understanding of Python
## Step 1: Create a New CrewAI Flow Project
@@ -108,8 +107,6 @@ Now, let's modify the generated files for the content writer crew. We'll set up
1. First, update the agents configuration file to define our content creation team:
Remember to set `llm` to the provider you are using.
```yaml
# src/guide_creator_flow/crews/content_crew/config/agents.yaml
content_writer:
@@ -122,7 +119,7 @@ content_writer:
You are a talented educational writer with expertise in creating clear, engaging
content. You have a gift for explaining complex concepts in accessible language
and organizing information in a way that helps readers build their understanding.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
llm: openai/gpt-4o-mini
content_reviewer:
role: >
@@ -135,7 +132,7 @@ content_reviewer:
content. You have an eye for detail, clarity, and coherence. You excel at
improving content while maintaining the original author's voice and ensuring
consistent quality across multiple sections.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
llm: openai/gpt-4o-mini
```
These agent definitions establish the specialized roles and perspectives that will shape how our AI agents approach content creation. Notice how each agent has a distinct purpose and expertise.
@@ -444,15 +441,10 @@ This is the power of flows - combining different types of processing (user inter
## Step 6: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys. See the [LLM setup
guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.
Create a `.env` file in your project root with your API keys:
```sh .env
```
OPENAI_API_KEY=your_openai_api_key
# or
GEMINI_API_KEY=your_gemini_api_key
# or
ANTHROPIC_API_KEY=your_anthropic_api_key
```
## Step 7: Install Dependencies
@@ -555,10 +547,7 @@ Let's break down the key components of flows to help you understand how to build
Flows allow you to make direct calls to language models when you need simple, structured responses:
```python
llm = LLM(
model="model-id-here", # gpt-4o, gemini-2.0-flash, anthropic/claude...
response_format=GuideOutline
)
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
response = llm.call(messages=messages)
```

View File

@@ -68,13 +68,7 @@ We'll create a CrewAI application where two agents collaborate to research and w
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool
from openinference.instrumentation.crewai import CrewAIInstrumentor
from phoenix.otel import register
# setup monitoring for your crew
tracer_provider = register(
endpoint="http://localhost:6006/v1/traces")
CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
search_tool = SerperDevTool()
# Define your agents with roles and goals

8 binary image files removed (not shown); each 43–60 KiB.

View File

@@ -71,10 +71,6 @@ If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on
```
</Warning>
<Warning>
If you encounter the `chroma-hnswlib==0.7.6` build error (`fatal error C1083: Cannot open include file: 'float.h'`) on Windows, install [Visual Studio Build Tools](https://visualstudio.microsoft.com/downloads/) with *Desktop development with C++*.
</Warning>
- To verify that `crewai` is installed, run:
```shell
uv tool list

View File

@@ -180,9 +180,8 @@ Follow the steps below to get Crewing! 🚣‍♂️
</Step>
<Step title="Set your environment variables">
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
- The configuration for your choice of model, such as an API key. See the
[LLM setup guide](/concepts/llms#setting-up-your-llm) to learn how to configure models from any provider.
</Step>
<Step title="Lock and install the dependencies">
- Lock the dependencies and install them by using the CLI command:
@@ -318,7 +317,7 @@ email_summarizer:
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: provider/model-id # Add your choice of model here
llm: openai/gpt-4o
```
<Tip>

View File

@@ -22,7 +22,7 @@ streamlining the process of finding specific information within large document c
Install the crewai_tools package by running the following command in your terminal:
```shell
uv pip install docx2txt 'crewai[tools]'
pip install 'crewai[tools]'
```
## Example
@@ -76,4 +76,4 @@ tool = DOCXSearchTool(
),
)
)
```
```

View File

@@ -8,10 +8,10 @@ icon: language
## Description
This tool is used to convert natural language to SQL queries. When passed to the agent, it will generate queries and then use them to interact with the database.
This enables multiple workflows, such as having an Agent access the database, fetch information based on its goal, and then use that information to generate a response, report, or any other output.
It also provides the ability for the Agent to update the database based on its goal.
**Attention**: Make sure that the Agent has access to a Read-Replica, or that it is okay for the Agent to run insert/update queries on the database.
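For example, instantiation takes a single database URI (the connection string below is a placeholder):
```python Code
from crewai_tools import NL2SQLTool

# The appropriate DB driver (e.g. psycopg2 for PostgreSQL) must be installed
nl2sql = NL2SQLTool(db_uri="postgresql://user:password@localhost:5432/mydb")

# Pass it to an agent like any other tool: Agent(..., tools=[nl2sql])
```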
@@ -81,4 +81,4 @@ The Tool provides endless possibilities on the logic of the Agent and how it can
```md
DB -> Agent -> ... -> Agent -> DB
```
```

View File

@@ -143,30 +143,12 @@ config = {
"config": {
"model": "text-embedding-ada-002"
}
},
"vectordb": {
"provider": "elasticsearch",
"config": {
"collection_name": "my-collection",
"cloud_id": "deployment-name:xxxx",
"api_key": "your-key",
"verify_certs": False
}
},
"chunker": {
"chunk_size": 400,
"chunk_overlap": 100,
"length_function": "len",
"min_chunk_size": 0
}
}
rag_tool = RagTool(config=config, summarize=True)
```
The internal RAG tool utilizes the Embedchain adapter, allowing you to pass any configuration options that are supported by Embedchain.
You can refer to the [Embedchain documentation](https://docs.embedchain.ai/components/introduction) for details.
Make sure to review the configuration options available in the .yaml file.
## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.119.0"
version = "0.118.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools~=0.44.0"]
tools = ["crewai-tools~=0.42.2"]
embeddings = [
"tiktoken~=0.7.0"
]

View File

@@ -17,7 +17,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.119.0"
__version__ = "0.118.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -20,7 +20,6 @@ from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.agent_utils import (
get_tool_names,
load_agent_from_repository,
parse_tools,
render_text_description_and_args,
)
@@ -32,14 +31,6 @@ from crewai.utilities.events.agent_events import (
AgentExecutionStartedEvent,
)
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.events.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
@@ -131,20 +122,6 @@ class Agent(BaseAgent):
default=None,
description="Knowledge context for the crew.",
)
knowledge_search_query: Optional[str] = Field(
default=None,
description="Knowledge search query for the agent dynamically generated by the agent.",
)
from_repository: Optional[str] = Field(
default=None,
description="The Agent's role to be used from your repository.",
)
@model_validator(mode="before")
def validate_from_repository(cls, v):
if v is not None and (from_repository := v.get("from_repository")):
return load_agent_from_repository(from_repository) | v
return v
@model_validator(mode="after")
def post_init_setup(self):
@@ -208,7 +185,7 @@ class Agent(BaseAgent):
self,
task: Task,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
tools: Optional[List[BaseTool]] = None
) -> str:
"""Execute a task with the agent.
@@ -268,65 +245,27 @@ class Agent(BaseAgent):
knowledge_config = (
self.knowledge_config.model_dump() if self.knowledge_config else {}
)
if self.knowledge:
crewai_event_bus.emit(
self,
event=KnowledgeRetrievalStartedEvent(
agent=self,
),
agent_knowledge_snippets = self.knowledge.query(
[task.prompt()], **knowledge_config
)
try:
self.knowledge_search_query = self._get_knowledge_search_query(
task_prompt
if agent_knowledge_snippets:
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if self.knowledge_search_query:
agent_knowledge_snippets = self.knowledge.query(
[self.knowledge_search_query], **knowledge_config
)
if agent_knowledge_snippets:
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context
if self.crew:
knowledge_snippets = self.crew.query_knowledge(
[self.knowledge_search_query], **knowledge_config
)
if knowledge_snippets:
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context
crewai_event_bus.emit(
self,
event=KnowledgeRetrievalCompletedEvent(
query=self.knowledge_search_query,
agent=self,
retrieved_knowledge=(
(self.agent_knowledge_context or "")
+ (
"\n"
if self.agent_knowledge_context
and self.crew_knowledge_context
else ""
)
+ (self.crew_knowledge_context or "")
),
),
)
except Exception as e:
crewai_event_bus.emit(
self,
event=KnowledgeSearchQueryFailedEvent(
query=self.knowledge_search_query or "",
agent=self,
error=str(e),
),
if self.crew:
knowledge_snippets = self.crew.query_knowledge(
[task.prompt()], **knowledge_config
)
if knowledge_snippets:
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context
tools = tools or self.tools or []
self.create_agent_executor(tools=tools, task=task)
@@ -349,19 +288,12 @@ class Agent(BaseAgent):
# Determine execution method based on timeout setting
if self.max_execution_time is not None:
if (
not isinstance(self.max_execution_time, int)
or self.max_execution_time <= 0
):
raise ValueError(
"Max Execution time must be a positive integer greater than zero"
)
result = self._execute_with_timeout(
task_prompt, task, self.max_execution_time
)
if not isinstance(self.max_execution_time, int) or self.max_execution_time <= 0:
raise ValueError("Max Execution time must be a positive integer greater than zero")
result = self._execute_with_timeout(task_prompt, task, self.max_execution_time)
else:
result = self._execute_without_timeout(task_prompt, task)
except TimeoutError as e:
# Propagate TimeoutError without retry
crewai_event_bus.emit(
@@ -413,46 +345,54 @@ class Agent(BaseAgent):
)
return result
def _execute_with_timeout(self, task_prompt: str, task: Task, timeout: int) -> str:
def _execute_with_timeout(
self,
task_prompt: str,
task: Task,
timeout: int
) -> str:
"""Execute a task with a timeout.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
timeout: Maximum execution time in seconds.
Returns:
The output of the agent.
Raises:
TimeoutError: If execution exceeds the timeout.
RuntimeError: If execution fails for other reasons.
"""
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
self._execute_without_timeout, task_prompt=task_prompt, task=task
self._execute_without_timeout,
task_prompt=task_prompt,
task=task
)
try:
return future.result(timeout=timeout)
except concurrent.futures.TimeoutError:
future.cancel()
raise TimeoutError(
f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task."
)
raise TimeoutError(f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task.")
except Exception as e:
future.cancel()
raise RuntimeError(f"Task execution failed: {str(e)}")
def _execute_without_timeout(self, task_prompt: str, task: Task) -> str:
def _execute_without_timeout(
self,
task_prompt: str,
task: Task
) -> str:
"""Execute a task without a timeout.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
Returns:
The output of the agent.
"""
@@ -620,61 +560,6 @@ class Agent(BaseAgent):
def set_fingerprint(self, fingerprint: Fingerprint):
self.security_config.fingerprint = fingerprint
def _get_knowledge_search_query(self, task_prompt: str) -> str | None:
"""Generate a search query for the knowledge base based on the task description."""
crewai_event_bus.emit(
self,
event=KnowledgeQueryStartedEvent(
task_prompt=task_prompt,
agent=self,
),
)
query = self.i18n.slice("knowledge_search_query").format(
task_prompt=task_prompt
)
rewriter_prompt = self.i18n.slice("knowledge_search_query_system_prompt")
if not isinstance(self.llm, BaseLLM):
self._logger.log(
"warning",
f"Knowledge search query failed: LLM for agent '{self.role}' is not an instance of BaseLLM",
)
crewai_event_bus.emit(
self,
event=KnowledgeQueryFailedEvent(
agent=self,
error="LLM is not compatible with knowledge search queries",
),
)
return None
try:
rewritten_query = self.llm.call(
[
{
"role": "system",
"content": rewriter_prompt,
},
{"role": "user", "content": query},
]
)
crewai_event_bus.emit(
self,
event=KnowledgeQueryCompletedEvent(
query=query,
agent=self,
),
)
return rewritten_query
except Exception as e:
crewai_event_bus.emit(
self,
event=KnowledgeQueryFailedEvent(
agent=self,
error=str(e),
),
)
return None
def kickoff(
self,
messages: Union[str, List[Dict[str, str]]],

View File

@@ -5,5 +5,5 @@ def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception("No token found, make sure you are logged in")
raise Exception()
return access_token

View File

@@ -1,5 +1,6 @@
import os
from importlib.metadata import version as get_version
from typing import Optional
from typing import Optional, Tuple
import click
@@ -137,8 +138,12 @@ def log_tasks_outputs() -> None:
@click.option("-s", "--short", is_flag=True, help="Reset SHORT TERM memory")
@click.option("-e", "--entities", is_flag=True, help="Reset ENTITIES memory")
@click.option("-kn", "--knowledge", is_flag=True, help="Reset KNOWLEDGE storage")
@click.option("-akn", "--agent-knowledge", is_flag=True, help="Reset AGENT KNOWLEDGE storage")
@click.option("-k","--kickoff-outputs",is_flag=True,help="Reset LATEST KICKOFF TASK OUTPUTS")
@click.option(
"-k",
"--kickoff-outputs",
is_flag=True,
help="Reset LATEST KICKOFF TASK OUTPUTS",
)
@click.option("-a", "--all", is_flag=True, help="Reset ALL memories")
def reset_memories(
long: bool,
@@ -146,20 +151,18 @@ def reset_memories(
entities: bool,
knowledge: bool,
kickoff_outputs: bool,
agent_knowledge: bool,
all: bool,
) -> None:
"""
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs, knowledge, agent_knowledge). This will delete all the data saved.
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs). This will delete all the data saved.
"""
try:
memory_types = [long, short, entities, knowledge, agent_knowledge, kickoff_outputs, all]
if not any(memory_types):
if not all and not (long or short or entities or knowledge or kickoff_outputs):
click.echo(
"Please specify at least one memory type to reset using the appropriate flags."
)
return
reset_memories_command(long, short, entities, knowledge, agent_knowledge, kickoff_outputs, all)
reset_memories_command(long, short, entities, knowledge, kickoff_outputs, all)
except Exception as e:
click.echo(f"An error occurred while resetting memories: {e}", err=True)
@@ -198,9 +201,22 @@ def install(context):
@crewai.command()
def run():
@click.option(
"--record",
is_flag=True,
help="Record LLM responses for later replay",
)
@click.option(
"--replay",
is_flag=True,
help="Replay from recorded LLM responses without making network calls",
)
def run(record: bool = False, replay: bool = False):
"""Run the Crew."""
run_crew()
if record and replay:
raise click.UsageError("Cannot use --record and --replay simultaneously")
click.echo("Running the Crew")
run_crew(record=record, replay=replay)
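# Usage sketch for the new flags (assumed invocation once this change lands):
#   crewai run --record    # capture every LLM response to a local cache
#   crewai run --replay    # rerun offline from the cached responses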
@crewai.command()

View File

@@ -13,7 +13,7 @@ ENV_VARS = {
],
"gemini": [
{
"prompt": "Enter your GEMINI API key from https://ai.dev/apikey (press Enter to skip)",
"prompt": "Enter your GEMINI API key (press Enter to skip)",
"key_name": "GEMINI_API_KEY",
}
],

View File

@@ -4,7 +4,7 @@ import click
# Be mindful about changing this.
# on some environments we don't use this command but instead uv sync directly
# so if you expect this to support more things you will need to replicate it there
# ask @joaomdmoura if you are unsure
def install_crew(proxy_options: list[str]) -> None:

View File

@@ -14,7 +14,6 @@ class PlusAPI:
TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
CREWS_RESOURCE = "/crewai_plus/api/v1/crews"
AGENTS_RESOURCE = "/crewai_plus/api/v1/agents"
def __init__(self, api_key: str) -> None:
self.api_key = api_key
@@ -38,9 +37,6 @@ class PlusAPI:
def get_tool(self, handle: str):
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
def get_agent(self, handle: str):
return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
def publish_tool(
self,
handle: str,

View File

@@ -10,7 +10,6 @@ def reset_memories_command(
short,
entity,
knowledge,
agent_knowledge,
kickoff_outputs,
all,
) -> None:
@@ -24,11 +23,10 @@ def reset_memories_command(
kickoff_outputs (bool): Whether to reset the latest kickoff task outputs.
all (bool): Whether to reset all memories.
knowledge (bool): Whether to reset the knowledge.
agent_knowledge (bool): Whether to reset the agents' knowledge.
"""
try:
if not any([long, short, entity, kickoff_outputs, knowledge, agent_knowledge, all]):
if not any([long, short, entity, kickoff_outputs, knowledge, all]):
click.echo(
"No memory type specified. Please specify at least one type to reset."
)
@@ -69,11 +67,6 @@ def reset_memories_command(
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Knowledge has been reset."
)
if agent_knowledge:
crew.reset_memories(command_type="agent_knowledge")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Agents knowledge has been reset."
)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while resetting the memories: {e}", err=True)

View File

@@ -14,13 +14,17 @@ class CrewType(Enum):
FLOW = "flow"
def run_crew() -> None:
def run_crew(record: bool = False, replay: bool = False) -> None:
"""
Run the crew or flow by running a command in the UV environment.
Starting from version 0.103.0, this command can be used to run both
standard crews and flows. For flows, it detects the type from pyproject.toml
and automatically runs the appropriate command.
Args:
record (bool, optional): Whether to record LLM responses. Defaults to False.
replay (bool, optional): Whether to replay from recorded LLM responses. Defaults to False.
"""
crewai_version = get_crewai_version()
min_required_version = "0.71.0"
@@ -44,17 +48,24 @@ def run_crew() -> None:
click.echo(f"Running the {'Flow' if is_flow else 'Crew'}")
# Execute the appropriate command
execute_command(crew_type)
execute_command(crew_type, record, replay)
def execute_command(crew_type: CrewType) -> None:
def execute_command(crew_type: CrewType, record: bool = False, replay: bool = False) -> None:
"""
Execute the appropriate command based on crew type.
Args:
crew_type: The type of crew to run
record: Whether to record LLM responses
replay: Whether to replay from recorded LLM responses
"""
command = ["uv", "run", "kickoff" if crew_type == CrewType.FLOW else "run_crew"]
if record:
command.append("--record")
if replay:
command.append("--replay")
try:
subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.119.0,<1.0.0"
"crewai[tools]>=0.118.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.119.0,<1.0.0",
"crewai[tools]>=0.118.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.119.0"
"crewai[tools]>=0.118.0"
]
[tool.crewai]

View File

@@ -52,7 +52,7 @@ from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import BaseTool, Tool
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import NOT_SPECIFIED, TRAINING_DATA_FILE
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.events.crew_events import (
@@ -244,6 +244,15 @@ class Crew(FlowTrackable, BaseModel):
default_factory=SecurityConfig,
description="Security configuration for the crew, including fingerprinting.",
)
record_mode: bool = Field(
default=False,
description="Whether to record LLM responses for later replay.",
)
replay_mode: bool = Field(
default=False,
description="Whether to replay from recorded LLM responses without making network calls.",
)
_llm_response_cache_handler: Optional[Any] = PrivateAttr(default=None)
@field_validator("id", mode="before")
@classmethod
@@ -478,7 +487,7 @@ class Crew(FlowTrackable, BaseModel):
separated by a synchronous task.
"""
for i, task in enumerate(self.tasks):
if task.async_execution and isinstance(task.context, list):
if task.async_execution and task.context:
for context_task in task.context:
if context_task.async_execution:
for j in range(i - 1, -1, -1):
@@ -496,7 +505,7 @@ class Crew(FlowTrackable, BaseModel):
task_indices = {id(task): i for i, task in enumerate(self.tasks)}
for task in self.tasks:
if isinstance(task.context, list):
if task.context:
for context_task in task.context:
if id(context_task) not in task_indices:
continue # Skip context tasks not in the main tasks list
@@ -633,6 +642,19 @@ class Crew(FlowTrackable, BaseModel):
self._task_output_handler.reset()
self._logging_color = "bold_purple"
if self.record_mode and self.replay_mode:
raise ValueError("Cannot use both record_mode and replay_mode at the same time")
if self.record_mode or self.replay_mode:
from crewai.utilities.llm_response_cache_handler import (
LLMResponseCacheHandler,
)
self._llm_response_cache_handler = LLMResponseCacheHandler()
if self.record_mode:
self._llm_response_cache_handler.start_recording()
elif self.replay_mode:
self._llm_response_cache_handler.start_replaying()
if inputs is not None:
self._inputs = inputs
self._interpolate_inputs(inputs)
@@ -651,6 +673,12 @@ class Crew(FlowTrackable, BaseModel):
if not agent.step_callback: # type: ignore # "BaseAgent" has no attribute "step_callback"
agent.step_callback = self.step_callback # type: ignore # "BaseAgent" has no attribute "step_callback"
if self._llm_response_cache_handler:
if hasattr(agent, "llm") and agent.llm:
agent.llm.set_response_cache_handler(self._llm_response_cache_handler)
if hasattr(agent, "function_calling_llm") and agent.function_calling_llm:
agent.function_calling_llm.set_response_cache_handler(self._llm_response_cache_handler)
agent.create_agent_executor()
@@ -1034,14 +1062,11 @@ class Crew(FlowTrackable, BaseModel):
)
return cast(List[BaseTool], tools)
def _get_context(self, task: Task, task_outputs: List[TaskOutput]) -> str:
if not task.context:
return ""
def _get_context(self, task: Task, task_outputs: List[TaskOutput]):
context = (
aggregate_raw_outputs_from_task_outputs(task_outputs)
if task.context is NOT_SPECIFIED
else aggregate_raw_outputs_from_tasks(task.context)
aggregate_raw_outputs_from_tasks(task.context)
if task.context
else aggregate_raw_outputs_from_task_outputs(task_outputs)
)
return context
@@ -1229,7 +1254,7 @@ class Crew(FlowTrackable, BaseModel):
task_mapping[task.key] = cloned_task
for cloned_task, original_task in zip(cloned_tasks, self.tasks):
if isinstance(original_task.context, list):
if original_task.context:
cloned_context = [
task_mapping[context_task.key]
for context_task in original_task.context
@@ -1290,6 +1315,9 @@ class Crew(FlowTrackable, BaseModel):
def _finish_execution(self, final_string_output: str) -> None:
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
if self._llm_response_cache_handler:
self._llm_response_cache_handler.stop()
def calculate_usage_metrics(self) -> UsageMetrics:
"""Calculates and returns the usage metrics."""
@@ -1356,7 +1384,7 @@ class Crew(FlowTrackable, BaseModel):
Args:
command_type: Type of memory to reset.
Valid options: 'long', 'short', 'entity', 'knowledge', 'agent_knowledge'
Valid options: 'long', 'short', 'entity', 'knowledge',
'kickoff_outputs', 'external', or 'all'
Raises:
@@ -1369,7 +1397,6 @@ class Crew(FlowTrackable, BaseModel):
"short",
"entity",
"knowledge",
"agent_knowledge",
"kickoff_outputs",
"all",
"external",
@@ -1394,14 +1421,19 @@ class Crew(FlowTrackable, BaseModel):
def _reset_all_memories(self) -> None:
"""Reset all available memory systems."""
memory_systems = self._get_memory_systems()
memory_systems = [
("short term", getattr(self, "_short_term_memory", None)),
("entity", getattr(self, "_entity_memory", None)),
("external", getattr(self, "_external_memory", None)),
("long term", getattr(self, "_long_term_memory", None)),
("task output", getattr(self, "_task_output_handler", None)),
("knowledge", getattr(self, "knowledge", None)),
]
for memory_type, config in memory_systems.items():
if (system := config.get('system')) is not None:
name = config.get('name')
for name, system in memory_systems:
if system is not None:
try:
reset_fn: Callable = cast(Callable, config.get('reset'))
reset_fn(system)
system.reset()
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] {name} memory has been reset",
@@ -1420,17 +1452,24 @@ class Crew(FlowTrackable, BaseModel):
Raises:
RuntimeError: If the specified memory system fails to reset
"""
memory_systems = self._get_memory_systems()
config = memory_systems[memory_type]
system = config.get('system')
name = config.get('name')
reset_functions = {
"long": (getattr(self, "_long_term_memory", None), "long term"),
"short": (getattr(self, "_short_term_memory", None), "short term"),
"entity": (getattr(self, "_entity_memory", None), "entity"),
"knowledge": (getattr(self, "knowledge", None), "knowledge"),
"kickoff_outputs": (
getattr(self, "_task_output_handler", None),
"task output",
),
"external": (getattr(self, "_external_memory", None), "external"),
}
if system is None:
memory_system, name = reset_functions[memory_type]
if memory_system is None:
raise RuntimeError(f"{name} memory system is not initialized")
try:
reset_fn: Callable = cast(Callable, config.get('reset'))
reset_fn(system)
memory_system.reset()
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] {name} memory has been reset",
@@ -1439,64 +1478,3 @@ class Crew(FlowTrackable, BaseModel):
raise RuntimeError(
f"[Crew ({self.name if self.name else self.id})] Failed to reset {name} memory: {str(e)}"
) from e
def _get_memory_systems(self):
"""Get all available memory systems with their configuration.
Returns:
Dict containing all memory systems with their reset functions and display names.
"""
def default_reset(memory):
return memory.reset()
def knowledge_reset(memory):
return self.reset_knowledge(memory)
# Get knowledge for agents
agent_knowledges = [getattr(agent, "knowledge", None) for agent in self.agents
if getattr(agent, "knowledge", None) is not None]
# Get knowledge for crew and agents
crew_knowledge = getattr(self, "knowledge", None)
crew_and_agent_knowledges = ([crew_knowledge] if crew_knowledge is not None else []) + agent_knowledges
return {
'short': {
'system': getattr(self, "_short_term_memory", None),
'reset': default_reset,
'name': 'Short Term'
},
'entity': {
'system': getattr(self, "_entity_memory", None),
'reset': default_reset,
'name': 'Entity'
},
'external': {
'system': getattr(self, "_external_memory", None),
'reset': default_reset,
'name': 'External'
},
'long': {
'system': getattr(self, "_long_term_memory", None),
'reset': default_reset,
'name': 'Long Term'
},
'kickoff_outputs': {
'system': getattr(self, "_task_output_handler", None),
'reset': default_reset,
'name': 'Task Output'
},
'knowledge': {
'system': crew_and_agent_knowledges if crew_and_agent_knowledges else None,
'reset': knowledge_reset,
'name': 'Crew Knowledge and Agent Knowledge'
},
'agent_knowledge': {
'system': agent_knowledges if agent_knowledges else None,
'reset': knowledge_reset,
'name': 'Agent Knowledge'
}
}
def reset_knowledge(self, knowledges: List[Knowledge]) -> None:
"""Reset crew and agent knowledge storage."""
for ks in knowledges:
ks.reset()
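Taken together, the crew-level changes above imply the following record/replay round trip; this is a sketch based only on the fields and validation shown in this diff, not on documented API, with agent and task assumed defined:
# First run: record live LLM responses into the local SQLite cache.
crew = Crew(agents=[agent], tasks=[task], record_mode=True)
crew.kickoff()
# Later run: serve the same requests from the cache, with no network calls.
offline_crew = Crew(agents=[agent], tasks=[task], replay_mode=True)
offline_crew.kickoff()
# Setting record_mode=True and replay_mode=True together raises ValueError.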

View File

@@ -1,3 +0,0 @@
from .utils.embedding_functions import VoyageAIEmbeddingFunction
__all__ = ["VoyageAIEmbeddingFunction"]

View File

@@ -1,3 +0,0 @@
from . import embedding_functions
__all__ = ["embedding_functions"]

View File

@@ -1,3 +0,0 @@
from .voyageai_embedding_function import VoyageAIEmbeddingFunction
__all__ = ["VoyageAIEmbeddingFunction"]

View File

@@ -1,91 +0,0 @@
import logging
from typing import List, Union
from chromadb.api.types import Documents, EmbeddingFunction
logger = logging.getLogger(__name__)
class VoyageAIEmbeddingFunction(EmbeddingFunction[Documents]):
"""
VoyageAI embedding function for ChromaDB.
This class provides integration with VoyageAI's embedding models for use with ChromaDB.
It supports various VoyageAI models including voyage-3, voyage-3.5, and voyage-3.5-lite.
Attributes:
_api_key (str): The API key for VoyageAI.
_model_name (str): The name of the VoyageAI model to use.
"""
def __init__(self, api_key: str, model_name: str = "voyage-3"):
"""
Initialize the VoyageAI embedding function.
Args:
api_key (str): The API key for VoyageAI.
model_name (str, optional): The name of the VoyageAI model to use.
Defaults to "voyage-3".
Raises:
ValueError: If the voyageai package is not installed or if the API key is empty.
"""
self._ensure_voyageai_installed()
if not api_key:
raise ValueError("API key is required for VoyageAI embeddings")
self._api_key = api_key
self._model_name = model_name
def _ensure_voyageai_installed(self):
"""
Ensure that the voyageai package is installed.
Raises:
ValueError: If the voyageai package is not installed.
"""
try:
import voyageai # noqa: F401
except ImportError:
raise ValueError(
"The voyageai python package is not installed. Please install it with `pip install voyageai`"
)
def __call__(self, input: Union[str, List[str]]) -> List[List[float]]:
"""
Generate embeddings for the input text(s).
Args:
input (Union[str, List[str]]): The text or list of texts to generate embeddings for.
Returns:
List[List[float]]: A list of embeddings, where each embedding is a list of floats.
Raises:
ValueError: If the input is not a string or list of strings.
voyageai.VoyageError: If there is an error with the VoyageAI API.
"""
self._ensure_voyageai_installed()
import voyageai
if not input:
return []
if not isinstance(input, (str, list)):
raise ValueError("Input must be a string or a list of strings")
if isinstance(input, str):
input = [input]
try:
embeddings = voyageai.get_embeddings(
input, model=self._model_name, api_key=self._api_key
)
return embeddings
except voyageai.VoyageError as e:
logger.error(f"VoyageAI API error: {e}")
raise
except Exception as e:
logger.error(f"Unexpected error during VoyageAI embedding: {e}")
raise

View File

@@ -5,7 +5,8 @@ import sys
import threading
import warnings
from collections import defaultdict
from contextlib import contextmanager, redirect_stderr, redirect_stdout
from contextlib import contextmanager
from types import SimpleNamespace
from typing import (
Any,
DefaultDict,
@@ -30,6 +31,7 @@ from crewai.utilities.events.llm_events import (
LLMCallType,
LLMStreamChunkEvent,
)
from crewai.utilities.events.tool_usage_events import ToolExecutionErrorEvent
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
@@ -43,9 +45,6 @@ with warnings.catch_warnings():
from litellm.utils import supports_response_schema
import io
from typing import TextIO
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.exceptions.context_window_exceeding_exception import (
@@ -55,17 +54,12 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
load_dotenv()
class FilteredStream(io.TextIOBase):
_lock = None
def __init__(self, original_stream: TextIO):
class FilteredStream:
def __init__(self, original_stream):
self._original_stream = original_stream
self._lock = threading.Lock()
def write(self, s: str) -> int:
if not self._lock:
self._lock = threading.Lock()
def write(self, s) -> int:
with self._lock:
# Filter out extraneous messages from LiteLLM
if (
@@ -220,11 +214,15 @@ def suppress_warnings():
)
# Redirect stdout and stderr
with (
redirect_stdout(FilteredStream(sys.stdout)),
redirect_stderr(FilteredStream(sys.stderr)),
):
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = FilteredStream(old_stdout)
sys.stderr = FilteredStream(old_stderr)
try:
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
class Delta(TypedDict):
@@ -298,6 +296,7 @@ class LLM(BaseLLM):
self.additional_params = kwargs
self.is_anthropic = self._is_anthropic_model(model)
self.stream = stream
self._response_cache_handler = None
litellm.drop_params = True
@@ -871,25 +870,43 @@ class LLM(BaseLLM):
for message in messages:
if message.get("role") == "system":
message["role"] = "assistant"
if self._response_cache_handler and self._response_cache_handler.is_replaying():
cached_response = self._response_cache_handler.get_cached_response(
self.model, messages
)
if cached_response:
# Emit completion event for the cached response
self._handle_emit_call_events(cached_response, LLMCallType.LLM_CALL)
return cached_response
# --- 5) Set up callbacks if provided
# --- 6) Set up callbacks if provided
with suppress_warnings():
if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks)
try:
# --- 6) Prepare parameters for the completion call
# --- 7) Prepare parameters for the completion call
params = self._prepare_completion_params(messages, tools)
# --- 7) Make the completion call and handle response
# --- 8) Make the completion call and handle response
if self.stream:
return self._handle_streaming_response(
response = self._handle_streaming_response(
params, callbacks, available_functions
)
else:
return self._handle_non_streaming_response(
response = self._handle_non_streaming_response(
params, callbacks, available_functions
)
if (self._response_cache_handler and
self._response_cache_handler.is_recording() and
isinstance(response, str)):
self._response_cache_handler.cache_response(
self.model, messages, response
)
return response
except LLMContextLengthExceededException:
# Re-raise LLMContextLengthExceededException as it should be handled
@@ -1109,3 +1126,18 @@ class LLM(BaseLLM):
litellm.success_callback = success_callbacks
litellm.failure_callback = failure_callbacks
def set_response_cache_handler(self, handler):
"""
Sets the response cache handler for record/replay functionality.
Args:
handler: An instance of LLMResponseCacheHandler.
"""
self._response_cache_handler = handler
def clear_response_cache_handler(self):
"""
Clears the response cache handler.
"""
self._response_cache_handler = None
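The handler lifecycle on a single LLM instance then looks like this; a minimal sketch using the LLMResponseCacheHandler introduced later in this diff, with the import path taken from the crew.py hunk above:
from crewai import LLM
from crewai.utilities.llm_response_cache_handler import LLMResponseCacheHandler
handler = LLMResponseCacheHandler()
handler.start_recording()
llm = LLM(model="gpt-4o-mini")
llm.set_response_cache_handler(handler)
# llm.call(...) now persists responses; only string results are cached,
# per the isinstance(response, str) guard above.
handler.start_replaying()
# llm.call(...) with the same model and messages returns the cached response
# and emits the completion event without hitting the network.
handler.stop()
llm.clear_response_cache_handler()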

View File

@@ -0,0 +1,296 @@
import hashlib
import json
import logging
import sqlite3
import threading
from typing import Any, Dict, List, Optional
from crewai.utilities import Printer
from crewai.utilities.crew_json_encoder import CrewJSONEncoder
from crewai.utilities.paths import db_storage_path
logger = logging.getLogger(__name__)
class LLMResponseCacheStorage:
"""
SQLite storage for caching LLM responses.
Used for offline record/replay functionality.
"""
def __init__(
self, db_path: str = f"{db_storage_path()}/llm_response_cache.db"
) -> None:
self.db_path = db_path
self._printer: Printer = Printer()
self._connection_pool = {}
self._initialize_db()
def _get_connection(self) -> sqlite3.Connection:
"""
Gets a connection from the connection pool or creates a new one.
Keeps one connection per thread (keyed by thread id) to ensure thread safety.
"""
thread_id = threading.get_ident()
if thread_id not in self._connection_pool:
try:
conn = sqlite3.connect(self.db_path)
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("PRAGMA journal_mode = WAL")
self._connection_pool[thread_id] = conn
except sqlite3.Error as e:
error_msg = f"Failed to create SQLite connection: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
return self._connection_pool[thread_id]
def _close_connections(self) -> None:
"""
Closes all connections in the connection pool.
"""
for thread_id, conn in list(self._connection_pool.items()):
try:
conn.close()
del self._connection_pool[thread_id]
except sqlite3.Error as e:
error_msg = f"Failed to close SQLite connection: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
def _initialize_db(self) -> None:
"""
Initializes the SQLite database and creates the llm_response_cache table if it does not exist.
"""
try:
conn = self._get_connection()
cursor = conn.cursor()
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS llm_response_cache (
request_hash TEXT PRIMARY KEY,
model TEXT,
messages TEXT,
response TEXT,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
)
"""
)
conn.commit()
except sqlite3.Error as e:
error_msg = f"Failed to initialize database: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
def _compute_request_hash(self, model: str, messages: List[Dict[str, str]]) -> str:
"""
Computes a hash for the request based on the model and messages.
This hash is used as the key for caching.
Sensitive information like API keys should not be included in the hash.
"""
try:
message_str = json.dumps(messages, sort_keys=True)
request_hash = hashlib.sha256(f"{model}:{message_str}".encode()).hexdigest()
return request_hash
except Exception as e:
error_msg = f"Failed to compute request hash: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
def add(self, model: str, messages: List[Dict[str, str]], response: str) -> None:
"""
Adds a response to the cache.
"""
try:
request_hash = self._compute_request_hash(model, messages)
messages_json = json.dumps(messages, cls=CrewJSONEncoder)
conn = self._get_connection()
cursor = conn.cursor()
cursor.execute(
"""
INSERT OR REPLACE INTO llm_response_cache
(request_hash, model, messages, response)
VALUES (?, ?, ?, ?)
""",
(
request_hash,
model,
messages_json,
response,
),
)
conn.commit()
except sqlite3.Error as e:
error_msg = f"Failed to add response to cache: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
except Exception as e:
error_msg = f"Unexpected error when adding response: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
def get(self, model: str, messages: List[Dict[str, str]]) -> Optional[str]:
"""
Retrieves a response from the cache based on the model and messages.
Returns None if not found.
"""
try:
request_hash = self._compute_request_hash(model, messages)
conn = self._get_connection()
cursor = conn.cursor()
cursor.execute(
"""
SELECT response
FROM llm_response_cache
WHERE request_hash = ?
""",
(request_hash,),
)
result = cursor.fetchone()
return result[0] if result else None
except sqlite3.Error as e:
error_msg = f"Failed to retrieve response from cache: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
return None
except Exception as e:
error_msg = f"Unexpected error when retrieving response: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
return None
def delete_all(self) -> None:
"""
Deletes all records from the llm_response_cache table.
"""
try:
conn = self._get_connection()
cursor = conn.cursor()
cursor.execute("DELETE FROM llm_response_cache")
conn.commit()
except sqlite3.Error as e:
error_msg = f"Failed to clear cache: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
def cleanup_expired_cache(self, max_age_days: int = 7) -> None:
"""
Removes cache entries older than the specified number of days.
Args:
max_age_days: Maximum age of cache entries in days. Defaults to 7.
If set to 0, all entries will be deleted.
"""
try:
conn = self._get_connection()
cursor = conn.cursor()
if max_age_days <= 0:
cursor.execute("DELETE FROM llm_response_cache")
deleted_count = cursor.rowcount
logger.info("Deleting all cache entries (max_age_days <= 0)")
else:
cursor.execute(
f"""
DELETE FROM llm_response_cache
WHERE timestamp < datetime('now', '-{max_age_days} days')
"""
)
deleted_count = cursor.rowcount
conn.commit()
if deleted_count > 0:
self._printer.print(
content=f"LLM RESPONSE CACHE: Removed {deleted_count} expired cache entries",
color="green",
)
logger.info(f"Removed {deleted_count} expired cache entries")
except sqlite3.Error as e:
error_msg = f"Failed to cleanup expired cache: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
raise
def get_cache_stats(self) -> Dict[str, Any]:
"""
Returns statistics about the cache.
Returns:
A dictionary containing cache statistics.
"""
try:
conn = self._get_connection()
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM llm_response_cache")
total_count = cursor.fetchone()[0]
cursor.execute("SELECT model, COUNT(*) FROM llm_response_cache GROUP BY model")
model_counts = {model: count for model, count in cursor.fetchall()}
cursor.execute("SELECT MIN(timestamp), MAX(timestamp) FROM llm_response_cache")
oldest, newest = cursor.fetchone()
return {
"total_entries": total_count,
"entries_by_model": model_counts,
"oldest_entry": oldest,
"newest_entry": newest,
}
except sqlite3.Error as e:
error_msg = f"Failed to get cache stats: {e}"
self._printer.print(
content=f"LLM RESPONSE CACHE ERROR: {error_msg}",
color="red",
)
logger.error(error_msg)
return {"error": str(e)}
def __del__(self) -> None:
"""
Closes all connections when the object is garbage collected.
"""
self._close_connections()
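The cache key is a SHA-256 digest of the model name plus a canonical JSON dump of the messages, so identical requests map to the same row. The same computation, standalone:
import hashlib
import json
model = "gpt-4o-mini"
messages = [{"role": "user", "content": "What is 2 + 2?"}]
# sort_keys=True canonicalizes key order, matching _compute_request_hash above.
message_str = json.dumps(messages, sort_keys=True)
request_hash = hashlib.sha256(f"{model}:{message_str}".encode()).hexdigest()
# request_hash is the PRIMARY KEY of llm_response_cache, so INSERT OR REPLACE
# silently overwrites an older response recorded for the same request.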

View File

@@ -2,6 +2,7 @@ import datetime
import inspect
import json
import logging
import re
import threading
import uuid
from concurrent.futures import Future
@@ -40,7 +41,6 @@ from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config
from crewai.utilities.constants import NOT_SPECIFIED
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.events import (
TaskCompletedEvent,
@@ -97,7 +97,7 @@ class Task(BaseModel):
)
context: Optional[List["Task"]] = Field(
description="Other tasks that will have their output used as context for this task.",
default=NOT_SPECIFIED,
default=None,
)
async_execution: Optional[bool] = Field(
description="Whether the task should be executed asynchronously or not.",
@@ -643,7 +643,7 @@ class Task(BaseModel):
cloned_context = (
[task_mapping[context_task.key] for context_task in self.context]
if isinstance(self.context, list)
if self.context
else None
)
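Reverting context's default from the NOT_SPECIFIED sentinel to None has a semantic consequence: an explicitly empty list is no longer distinguishable from an unset field, so both now fall through to aggregating prior task outputs (see the _get_context change earlier in this diff). A sketch, with agent assumed defined:
# context unset: falls back to the outputs of earlier tasks.
write_a = Task(description="write", expected_output="draft", agent=agent)
# context=[]: under the sentinel this meant "no context at all";
# with default=None it now behaves exactly like the unset case above.
write_b = Task(description="write", expected_output="draft", agent=agent, context=[])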

View File

@@ -10,18 +10,6 @@ from contextlib import contextmanager
from importlib.metadata import version
from typing import TYPE_CHECKING, Any, Optional
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter,
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
BatchSpanProcessor,
SpanExportResult,
)
from opentelemetry.trace import Span, Status, StatusCode
from crewai.telemetry.constants import (
CREWAI_TELEMETRY_BASE_URL,
CREWAI_TELEMETRY_SERVICE_NAME,
@@ -37,6 +25,18 @@ def suppress_warnings():
yield
from opentelemetry import trace # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter, # noqa: E402
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import ( # noqa: E402
BatchSpanProcessor,
SpanExportResult,
)
from opentelemetry.trace import Span, Status, StatusCode # noqa: E402
if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.task import Task
@@ -232,7 +232,7 @@ class Telemetry:
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
if isinstance(task.context, list)
if task.context
else None
),
"tools_names": [
@@ -748,7 +748,7 @@ class Telemetry:
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
if isinstance(task.context, list)
if task.context
else None
),
"tools_names": [

View File

@@ -27,9 +27,7 @@
"feedback_instructions": "User feedback: {feedback}\nInstructions: Use this feedback to enhance the next output iteration.\nNote: Do not respond or add commentary.",
"lite_agent_system_prompt_with_tools": "You are {role}. {backstory}\nYour personal goal is: {goal}\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
"lite_agent_system_prompt_without_tools": "You are {role}. {backstory}\nYour personal goal is: {goal}\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"lite_agent_response_format": "\nIMPORTANT: Your final answer MUST contain all the information requested in the following format: {response_format}\n\nIMPORTANT: Ensure the final output does not include any code block markers like ```json or ```python.",
"knowledge_search_query": "The original query is: {task_prompt}.",
"knowledge_search_query_system_prompt": "Your goal is to rewrite the user query so that it is optimized for retrieval from a vector database. Consider how the query will be used to find relevant documents, and aim to make it more specific and context-aware. \n\n Do not include any other text than the rewritten query, especially any preamble or postamble and only add expected output format if its relevant to the rewritten query. \n\n Focus on the key words of the intended task and to retrieve the most relevant information. \n\n There will be some extra context provided that might need to be removed such as expected_output formats structured_outputs and other instructions."
"lite_agent_response_format": "\nIMPORTANT: Your final answer MUST contain all the information requested in the following format: {response_format}\n\nIMPORTANT: Ensure the final output does not include any code block markers like ```json or ```python."
},
"errors": {
"force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}",

View File

@@ -16,7 +16,6 @@ from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.utilities import I18N, Printer
from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
@@ -429,41 +428,3 @@ def show_agent_logs(
printer.print(
content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m\n\n"
)
def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
attributes: Dict[str, Any] = {}
if from_repository:
import importlib
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI
client = PlusAPI(api_key=get_auth_token())
response = client.get_agent(from_repository)
if response.status_code == 404:
raise AgentRepositoryError(
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization"
)
if response.status_code != 200:
raise AgentRepositoryError(
f"Agent {from_repository} could not be loaded: {response.text}"
)
agent = response.json()
for key, value in agent.items():
if key == "tools":
attributes[key] = []
for tool in value:
try:
module = importlib.import_module("crewai_tools")
tool_class = getattr(module, tool["name"])
attributes[key].append(tool_class())
except Exception as e:
raise AgentRepositoryError(
f"Tool {tool['name']} could not be loaded: {e}"
) from e
else:
attributes[key] = value
return attributes

View File

@@ -5,14 +5,3 @@ KNOWLEDGE_DIRECTORY = "knowledge"
MAX_LLM_RETRY = 3
MAX_FILE_NAME_LENGTH = 255
EMITTER_COLOR = "bold_blue"
class _NotSpecified:
def __repr__(self):
return "NOT_SPECIFIED"
# Sentinel value used to detect when no value has been explicitly provided.
# Unlike `None`, which might be a valid value from the user, `NOT_SPECIFIED` allows
# us to distinguish between "not passed at all" and "explicitly passed None" or "[]".
NOT_SPECIFIED = _NotSpecified()

View File

@@ -140,7 +140,7 @@ class EmbeddingConfigurator:
@staticmethod
def _configure_voyageai(config, model_name):
from crewai.knowledge.embedder.chromadb.utils.embedding_functions.voyageai_embedding_function import (
from chromadb.utils.embedding_functions.voyageai_embedding_function import (
VoyageAIEmbeddingFunction,
)
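With the vendored copy deleted above, the configurator falls back to chromadb's bundled implementation. Usage should be equivalent; the constructor parameters below mirror the removed vendored class, and the import path is the one shown in this hunk:
from chromadb.utils.embedding_functions.voyageai_embedding_function import (
    VoyageAIEmbeddingFunction,
)
# api_key and model_name mirror the deleted crewai class; "voyage-3" was its default.
ef = VoyageAIEmbeddingFunction(api_key="YOUR_VOYAGE_API_KEY", model_name="voyage-3")
embeddings = ef(["Brandon's favorite color is red."])  # list of float vectors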

View File

@@ -1,5 +1,4 @@
"""Error message definitions for CrewAI database operations."""
from typing import Optional
@@ -38,9 +37,3 @@ class DatabaseError:
The formatted error message
"""
return template.format(str(error))
class AgentRepositoryError(Exception):
"""Exception raised when an agent repository is not found."""
...

View File

@@ -8,14 +8,6 @@ from crewai.telemetry.telemetry import Telemetry
from crewai.utilities import Logger
from crewai.utilities.constants import EMITTER_COLOR
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.utilities.events.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from crewai.utilities.events.llm_events import (
LLMCallCompletedEvent,
LLMCallFailedEvent,
@@ -65,8 +57,6 @@ class EventListener(BaseEventListener):
execution_spans: Dict[Task, Any] = Field(default_factory=dict)
next_chunk = 0
text_stream = StringIO()
knowledge_retrieval_in_progress = False
knowledge_query_in_progress = False
def __new__(cls):
if cls._instance is None:
@@ -352,59 +342,5 @@ class EventListener(BaseEventListener):
def on_crew_test_failed(source, event: CrewTestFailedEvent):
self.formatter.handle_crew_test_failed(event.crew_name or "Crew")
@crewai_event_bus.on(KnowledgeRetrievalStartedEvent)
def on_knowledge_retrieval_started(
source, event: KnowledgeRetrievalStartedEvent
):
if self.knowledge_retrieval_in_progress:
return
self.knowledge_retrieval_in_progress = True
self.formatter.handle_knowledge_retrieval_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(KnowledgeRetrievalCompletedEvent)
def on_knowledge_retrieval_completed(
source, event: KnowledgeRetrievalCompletedEvent
):
if not self.knowledge_retrieval_in_progress:
return
self.knowledge_retrieval_in_progress = False
self.formatter.handle_knowledge_retrieval_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.retrieved_knowledge,
)
@crewai_event_bus.on(KnowledgeQueryStartedEvent)
def on_knowledge_query_started(source, event: KnowledgeQueryStartedEvent):
pass
@crewai_event_bus.on(KnowledgeQueryFailedEvent)
def on_knowledge_query_failed(source, event: KnowledgeQueryFailedEvent):
self.formatter.handle_knowledge_query_failed(
self.formatter.current_agent_branch,
event.error,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(KnowledgeQueryCompletedEvent)
def on_knowledge_query_completed(source, event: KnowledgeQueryCompletedEvent):
pass
@crewai_event_bus.on(KnowledgeSearchQueryFailedEvent)
def on_knowledge_search_query_failed(
source, event: KnowledgeSearchQueryFailedEvent
):
self.formatter.handle_knowledge_search_query_failed(
self.formatter.current_agent_branch,
event.error,
self.formatter.current_crew_tree,
)
event_listener = EventListener()

View File

@@ -1,56 +0,0 @@
from typing import TYPE_CHECKING, Any
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.utilities.events.base_events import BaseEvent
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
class KnowledgeRetrievalStartedEvent(BaseEvent):
"""Event emitted when a knowledge retrieval is started."""
type: str = "knowledge_search_query_started"
agent: BaseAgent
class KnowledgeRetrievalCompletedEvent(BaseEvent):
"""Event emitted when a knowledge retrieval is completed."""
query: str
type: str = "knowledge_search_query_completed"
agent: BaseAgent
retrieved_knowledge: Any
class KnowledgeQueryStartedEvent(BaseEvent):
"""Event emitted when a knowledge query is started."""
task_prompt: str
type: str = "knowledge_query_started"
agent: BaseAgent
class KnowledgeQueryFailedEvent(BaseEvent):
"""Event emitted when a knowledge query fails."""
type: str = "knowledge_query_failed"
agent: BaseAgent
error: str
class KnowledgeQueryCompletedEvent(BaseEvent):
"""Event emitted when a knowledge query is completed."""
query: str
type: str = "knowledge_query_completed"
agent: BaseAgent
class KnowledgeSearchQueryFailedEvent(BaseEvent):
"""Event emitted when a knowledge search query fails."""
query: str
type: str = "knowledge_search_query_failed"
agent: BaseAgent
error: str

View File

@@ -783,202 +783,3 @@ class ConsoleFormatter:
self.update_lite_agent_status(
self.current_lite_agent_branch, lite_agent_role, status, **fields
)
def handle_knowledge_retrieval_started(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
) -> Optional[Tree]:
"""Handle knowledge retrieval started event."""
if not self.verbose:
return None
branch_to_use = agent_branch or self.current_lite_agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None or tree_to_use is None:
# If we don't have a valid branch, use crew_tree as the branch if available
if crew_tree is not None:
branch_to_use = tree_to_use = crew_tree
else:
return None
knowledge_branch = branch_to_use.add("")
self.update_tree_label(
knowledge_branch, "🔍", "Knowledge Retrieval Started", "blue"
)
self.print(tree_to_use)
self.print()
return knowledge_branch
def handle_knowledge_retrieval_completed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
retrieved_knowledge: Any,
) -> None:
"""Handle knowledge retrieval completed event."""
if not self.verbose:
return None
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None and tree_to_use is not None:
branch_to_use = tree_to_use
if branch_to_use is None or tree_to_use is None:
if retrieved_knowledge:
knowledge_text = str(retrieved_knowledge)
if len(knowledge_text) > 500:
knowledge_text = knowledge_text[:497] + "..."
knowledge_panel = Panel(
Text(knowledge_text, style="white"),
title="📚 Retrieved Knowledge",
border_style="green",
padding=(1, 2),
)
self.print(knowledge_panel)
self.print()
return None
knowledge_branch_found = False
for child in branch_to_use.children:
if "Knowledge Retrieval Started" in str(child.label):
self.update_tree_label(
child, "", "Knowledge Retrieval Completed", "green"
)
knowledge_branch_found = True
break
if not knowledge_branch_found:
for child in branch_to_use.children:
if (
"Knowledge Retrieval" in str(child.label)
and "Started" not in str(child.label)
and "Completed" not in str(child.label)
):
self.update_tree_label(
child, "", "Knowledge Retrieval Completed", "green"
)
knowledge_branch_found = True
break
if not knowledge_branch_found:
knowledge_branch = branch_to_use.add("")
self.update_tree_label(
knowledge_branch, "", "Knowledge Retrieval Completed", "green"
)
self.print(tree_to_use)
if retrieved_knowledge:
knowledge_text = str(retrieved_knowledge)
if len(knowledge_text) > 500:
knowledge_text = knowledge_text[:497] + "..."
knowledge_panel = Panel(
Text(knowledge_text, style="white"),
title="📚 Retrieved Knowledge",
border_style="green",
padding=(1, 2),
)
self.print(knowledge_panel)
self.print()
def handle_knowledge_query_started(
self,
agent_branch: Optional[Tree],
task_prompt: str,
crew_tree: Optional[Tree],
) -> None:
"""Handle knowledge query generated event."""
if not self.verbose:
return None
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None or tree_to_use is None:
return None
query_branch = branch_to_use.add("")
self.update_tree_label(
query_branch, "🔎", f"Query: {task_prompt[:50]}...", "yellow"
)
self.print(tree_to_use)
self.print()
def handle_knowledge_query_failed(
self,
agent_branch: Optional[Tree],
error: str,
crew_tree: Optional[Tree],
) -> None:
"""Handle knowledge query failed event."""
if not self.verbose:
return
tree_to_use = self.current_lite_agent_branch or crew_tree
branch_to_use = self.current_lite_agent_branch or agent_branch
if branch_to_use and tree_to_use:
query_branch = branch_to_use.add("")
self.update_tree_label(query_branch, "", "Knowledge Query Failed", "red")
self.print(tree_to_use)
self.print()
# Show error panel
error_content = self.create_status_content(
"Knowledge Query Failed", "Query Error", "red", Error=error
)
self.print_panel(error_content, "Knowledge Error", "red")
def handle_knowledge_query_completed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
) -> None:
"""Handle knowledge query completed event."""
if not self.verbose:
return None
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None or tree_to_use is None:
return None
query_branch = branch_to_use.add("")
self.update_tree_label(query_branch, "", "Knowledge Query Completed", "green")
self.print(tree_to_use)
self.print()
def handle_knowledge_search_query_failed(
self,
agent_branch: Optional[Tree],
error: str,
crew_tree: Optional[Tree],
) -> None:
"""Handle knowledge search query failed event."""
if not self.verbose:
return
tree_to_use = self.current_lite_agent_branch or crew_tree
branch_to_use = self.current_lite_agent_branch or agent_branch
if branch_to_use and tree_to_use:
query_branch = branch_to_use.add("")
self.update_tree_label(query_branch, "", "Knowledge Search Failed", "red")
self.print(tree_to_use)
self.print()
# Show error panel
error_content = self.create_status_content(
"Knowledge Search Failed", "Search Error", "red", Error=error
)
self.print_panel(error_content, "Search Error", "red")

View File

@@ -1,6 +1,6 @@
import re
from typing import TYPE_CHECKING, List
if TYPE_CHECKING:
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
@@ -17,11 +17,6 @@ def aggregate_raw_outputs_from_task_outputs(task_outputs: List["TaskOutput"]) ->
def aggregate_raw_outputs_from_tasks(tasks: List["Task"]) -> str:
"""Generate string context from the tasks."""
task_outputs = (
[task.output for task in tasks if task.output is not None]
if isinstance(tasks, list)
else []
)
task_outputs = [task.output for task in tasks if task.output is not None]
return aggregate_raw_outputs_from_task_outputs(task_outputs)

View File

@@ -0,0 +1,156 @@
import logging
from typing import Any, Dict, List, Optional
from crewai.memory.storage.llm_response_cache_storage import LLMResponseCacheStorage
logger = logging.getLogger(__name__)
class LLMResponseCacheHandler:
"""
Handler for the LLM response cache storage.
Used for record/replay functionality.
"""
def __init__(self, max_cache_age_days: int = 7) -> None:
"""
Initializes the LLM response cache handler.
Args:
max_cache_age_days: Maximum age of cache entries in days. Defaults to 7.
"""
self.storage = LLMResponseCacheStorage()
self._recording = False
self._replaying = False
self.max_cache_age_days = max_cache_age_days
try:
self.storage.cleanup_expired_cache(self.max_cache_age_days)
except Exception as e:
logger.warning(f"Failed to cleanup expired cache on initialization: {e}")
def start_recording(self) -> None:
"""
Starts recording LLM responses.
"""
self._recording = True
self._replaying = False
logger.info("Started recording LLM responses")
def start_replaying(self) -> None:
"""
Starts replaying LLM responses from the cache.
"""
self._recording = False
self._replaying = True
logger.info("Started replaying LLM responses from cache")
try:
stats = self.storage.get_cache_stats()
logger.info(f"Cache statistics: {stats}")
except Exception as e:
logger.warning(f"Failed to get cache statistics: {e}")
def stop(self) -> None:
"""
Stops recording or replaying.
"""
was_recording = self._recording
was_replaying = self._replaying
self._recording = False
self._replaying = False
if was_recording:
logger.info("Stopped recording LLM responses")
if was_replaying:
logger.info("Stopped replaying LLM responses")
def is_recording(self) -> bool:
"""
Returns whether recording is active.
"""
return self._recording
def is_replaying(self) -> bool:
"""
Returns whether replaying is active.
"""
return self._replaying
def cache_response(self, model: str, messages: List[Dict[str, str]], response: str) -> None:
"""
Caches an LLM response if recording is active.
Args:
model: The model used for the LLM call.
messages: The messages sent to the LLM.
response: The response from the LLM.
"""
if not self._recording:
return
try:
self.storage.add(model, messages, response)
logger.debug(f"Cached response for model {model}")
except Exception as e:
logger.error(f"Failed to cache response: {e}")
def get_cached_response(self, model: str, messages: List[Dict[str, str]]) -> Optional[str]:
"""
Retrieves a cached LLM response if replaying is active.
Returns None if not found or if replaying is not active.
Args:
model: The model used for the LLM call.
messages: The messages sent to the LLM.
Returns:
The cached response, or None if not found or if replaying is not active.
"""
if not self._replaying:
return None
try:
response = self.storage.get(model, messages)
if response:
logger.debug(f"Retrieved cached response for model {model}")
else:
logger.debug(f"No cached response found for model {model}")
return response
except Exception as e:
logger.error(f"Failed to retrieve cached response: {e}")
return None
def clear_cache(self) -> None:
"""
Clears the LLM response cache.
"""
try:
self.storage.delete_all()
logger.info("Cleared LLM response cache")
except Exception as e:
logger.error(f"Failed to clear cache: {e}")
def cleanup_expired_cache(self) -> None:
"""
Removes cache entries older than the maximum age.
"""
try:
self.storage.cleanup_expired_cache(self.max_cache_age_days)
logger.info(f"Cleaned up expired cache entries (older than {self.max_cache_age_days} days)")
except Exception as e:
logger.error(f"Failed to cleanup expired cache: {e}")
def get_cache_stats(self) -> Dict[str, Any]:
"""
Returns statistics about the cache.
Returns:
A dictionary containing cache statistics.
"""
try:
return self.storage.get_cache_stats()
except Exception as e:
logger.error(f"Failed to get cache stats: {e}")
return {"error": str(e)}

View File

@@ -2,13 +2,14 @@
import os
from unittest import mock
from unittest.mock import MagicMock, patch
from unittest.mock import patch
import pytest
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
@@ -18,7 +19,6 @@ from crewai.tools import tool
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.utilities import RPMController
from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent
@@ -73,7 +73,6 @@ def test_agent_creation():
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
def test_agent_with_only_system_template():
"""Test that an agent with only system_template works without errors."""
agent = Agent(
@@ -89,7 +88,6 @@ def test_agent_with_only_system_template():
assert agent.goal == "Test Goal"
assert agent.backstory == "Test Backstory"
def test_agent_with_only_prompt_template():
"""Test that an agent with only system_template works without errors."""
agent = Agent(
@@ -121,8 +119,7 @@ def test_agent_with_missing_response_template():
assert agent.role == "Test Role"
assert agent.goal == "Test Goal"
assert agent.backstory == "Test Backstory"
def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert agent.llm.model == "gpt-4o-mini"
@@ -309,7 +306,9 @@ def test_cache_hitting():
def handle_tool_end(source, event):
received_events.append(event)
with (patch.object(CacheHandler, "read") as read,):
with (
patch.object(CacheHandler, "read") as read,
):
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
@@ -1039,7 +1038,7 @@ def test_agent_human_input():
CrewAgentExecutor,
"_invoke_loop",
return_value=AgentFinish(output="Hello", thought="", text=""),
),
) as mock_invoke_loop,
):
# Execute the task
output = agent.execute_task(task)
@@ -1631,10 +1630,13 @@ def test_agent_with_knowledge_sources():
# Create a knowledge source with some content
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch("crewai.knowledge") as MockKnowledge:
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.search.return_value = [{"content": content}]
mock_knowledge_instance.query.return_value = [{"content": content}]
agent = Agent(
role="Information Agent",
@@ -1688,7 +1690,7 @@ def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold():
assert agent.knowledge is not None
mock_knowledge_query.assert_called_once_with(
["Brandon's favorite color"],
[task.prompt()],
**knowledge_config.model_dump(),
)
@@ -1725,7 +1727,7 @@ def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_defau
assert agent.knowledge is not None
mock_knowledge_query.assert_called_once_with(
["Brandon's favorite color"],
[task.prompt()],
**knowledge_config.model_dump(),
)
@@ -1735,7 +1737,9 @@ def test_agent_with_knowledge_sources_extensive_role():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch("crewai.knowledge") as MockKnowledge:
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.query.return_value = [{"content": content}]
@@ -1799,40 +1803,6 @@ def test_agent_with_knowledge_sources_works_with_copy():
assert isinstance(agent_copy.llm, LLM)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_generate_search_query():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch("crewai.knowledge") as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.query.return_value = [{"content": content}]
agent = Agent(
role="Information Agent with extensive role description that is longer than 80 characters",
goal="Provide information based on knowledge sources",
backstory="You have access to specific knowledge sources.",
llm=LLM(model="gpt-4o-mini"),
knowledge_sources=[string_source],
)
task = Task(
description="What is Brandon's favorite color?",
expected_output="The answer to the question, in a format like this: `{{name: str, favorite_color: str}}`",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
# Updated assertion to check the JSON content
assert "Brandon" in str(agent.knowledge_search_query)
assert "favorite color" in str(agent.knowledge_search_query)
assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_litellm_auth_error_handling():
"""Test that LiteLLM authentication errors are handled correctly and not retried."""
@@ -1970,153 +1940,3 @@ def test_litellm_anthropic_error_handling():
# Verify the LLM call was only made once (no retries)
mock_llm_call.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_get_knowledge_search_query():
"""Test that _get_knowledge_search_query calls the LLM with the correct prompts."""
from crewai.utilities.i18n import I18N
content = "The capital of France is Paris."
string_source = StringKnowledgeSource(content=content)
agent = Agent(
role="Information Agent",
goal="Provide information based on knowledge sources",
backstory="I have access to knowledge sources",
llm=LLM(model="gpt-4"),
knowledge_sources=[string_source],
)
task = Task(
description="What is the capital of France?",
expected_output="The capital of France is Paris.",
agent=agent,
)
i18n = I18N()
task_prompt = task.prompt()
with patch.object(agent, "_get_knowledge_search_query") as mock_get_query:
mock_get_query.return_value = "Capital of France"
crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()
mock_get_query.assert_called_once_with(task_prompt)
with patch.object(agent.llm, "call") as mock_llm_call:
agent._get_knowledge_search_query(task_prompt)
mock_llm_call.assert_called_once_with(
[
{
"role": "system",
"content": i18n.slice(
"knowledge_search_query_system_prompt"
).format(task_prompt=task.description),
},
{
"role": "user",
"content": i18n.slice("knowledge_search_query").format(
task_prompt=task_prompt
),
},
]
)
@pytest.fixture
def mock_get_auth_token():
with patch(
"crewai.cli.authentication.token.get_auth_token", return_value="test_token"
):
yield
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
from crewai_tools import SerperDevTool
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "SerperDevTool"}],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert len(agent.tools) == 1
assert isinstance(agent.tools[0], SerperDevTool)
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth_token):
from crewai_tools import SerperDevTool
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "SerperDevTool"}],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent", role="Custom Role")
assert agent.role == "Custom Role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert len(agent.tools) == 1
assert isinstance(agent.tools[0], SerperDevTool)
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_with_invalid_tools(mock_get_agent, mock_get_auth_token):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "DoesNotExist"}],
}
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Tool DoesNotExist could not be loaded: module 'crewai_tools' has no attribute 'DoesNotExist'",
):
Agent(from_repository="test_agent")
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_internal_error(mock_get_agent, mock_get_auth_token):
mock_get_response = MagicMock()
mock_get_response.status_code = 500
mock_get_response.text = "Internal server error"
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Agent test_agent could not be loaded: Internal server error",
):
Agent(from_repository="test_agent")
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_token):
mock_get_response = MagicMock()
mock_get_response.status_code = 404
mock_get_response.text = "Agent not found"
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Agent test_agent does not exist, make sure the name is correct or the agent is available on your organization",
):
Agent(from_repository="test_agent")

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,660 +0,0 @@
interactions:
- request:
body: '{"input": ["Brandon''s favorite color is red and he likes Mexican food."],
"model": "text-embedding-3-small", "encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '137'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
(base64-encoded binary embedding response omitted)
kR1J9h4dTzlc6CpM4tOalGGvBTNaL7un+2GNlZLj0p/Eza+aIJBoP5472wA6x65Ewd97yIXSq/75
ycTb9MBry29IapjRre3oqczxHMmoo3I18ZeXGP78fuAM+wvZ+AysfuRp6HPZU+J8ZwewWz4Twdx2
5HGpa2V+lV0Mpkbup91Z7cPp5+/ibyLih8/fMmrFhwkeosEjOpzGfvj1UxrmWE/Lxpe8l6AEzsya
E5MzeDDNn7MgMqiLp2TwGJsCuHdAfYG++/O/53VSeCiFs4oTsxw2ntMDSHuDwa4xw359vcoTMgwl
ndYDmnpaykkEt/7rxFP13X8Lrm5h/IEfbB7sT7VawtNA1Dhp2DW+Yr+SF5vAzb8mliqcMv5UMzVc
SjnBqr3jsrn8DjEcX5+R2PUogc9LPRuwVi8a1s+yFU7raNdwPHc1sTmk99yw/xRw84snTrl8w7Uz
Xi4gh/mIzfil2/vpyOQwZ40vzvLcUXiTTSHMQz3++e+ApXP7hMZ1veCzvVfsrf/AgG19LsXmyZ61
dyWhsSjZaeWPO7DCnHPQ2+3IdKguKli2fjTc9Oq0p43Rc4dJZuBUWxl2vqVBOV+sWMQdW45oG9+z
boALKLykM85zONnfrR8CdUXa4bhUFEAjI2NhflwZojnfLltRNovItGoDu2Cl1Yxx0P76M79+Srgg
pmpRsJ92+LxzpYzOp9CAlpSzW/+rt5cY32aw9Yuwdqt7e3g1DwOKt6Da/KMadBaIGCAphojTMGTC
xRcrHm36kNypwdCxSfWTyIwgI9KnmsHIqM3w6zfhokxSZZG+M4+6Kmyxw8efihxGeYJYHANshcev
vb4eyxtZkgCx2Zo1WNdRecNxVbiffwXWmt3XcJIdYetHN/YQvCoWloQ8poO6u1e0LSQX/v2bCvjP
f/311//6TRi823vRbIMBY7GM//7vUYF/7/89vNOm+TOGMA3ps/j7n/+aQPj727fv7/i/x7YuPsPf
//zF8X9mDf4e2zFt/t/r/9r+6j//9X8AAAD//wMAEEMP2eAgAAA=
headers:
CF-RAY:
- 93bd468618792506-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:26:58 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=b8RyPEId4yq9HJyuFnK7KNXV1hEa38vaf3KsPaYMi6U-1746584818-1.0.1.1-D2L05owANBA1NNJNxdD5avYizVIMB0Q9M_6PgN4YJzuXkQLOyORtRMDfNCF4SCptihGS_hISsNIh4LqfOcp9pQDRlLaFsYpAvHOaWt6teXk;
path=/; expires=Wed, 07-May-25 02:56:58 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=xH94XekAl_WXtZ8yJYk4wagWOpjufglIcgBHuIK4j5s-1746584818263-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '271'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-6fcbcbb5fd-rlx2b
x-envoy-upstream-service-time:
- '276'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999986'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_dfb1b7e20cfae7dd4c21a591f5989210
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "Your goal is to rewrite the
user query so that it is optimized for retrieval from a vector database. Consider
how the query will be used to find relevant documents, and aim to make it more
specific and context-aware. \n\n Do not include any other text than the rewritten
query, especially any preamble or postamble and only add expected output format
if its relevant to the rewritten query. \n\n Focus on the key words of the intended
task and to retrieve the most relevant information. \n\n There will be some
extra context provided that might need to be removed such as expected_output
formats structured_outputs and other instructions."}, {"role": "user", "content":
"The original query is: What is Brandon''s favorite color?\n\nThis is the expected
criteria for your final answer: The answer to the question, in a format like
this: `{{name: str, favorite_color: str}}`\nyou MUST return the actual complete
content as the final answer, not a summary.."}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1054'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSsW7bMBTc9RXEW7pYhaw6leylQJClU4AGyRIEAkM+yUwoPoJ8MloE/veAkmMp
aQp04cB7d7w7vpdMCDAadgLUXrLqvc0vb697eroKNzdey/hLF9XzdXm4uLqTZfUTVolBj0+o+I31
VVHvLbIhN8EqoGRMqutq8/2i3tTregR60mgTrfOcbyjvjTN5WZSbvKjydX1i78kojLAT95kQQryM
Z/LpNP6GnShWbzc9xig7hN15SAgIZNMNyBhNZOkYVjOoyDG60fplkE6T+xJFKw8UDKNQZCn8WM4H
bIcok2c3WLsApHPEMmUenT6ckOPZm6XOB3qMH6jQGmfivgkoI7nkIzJ5GNFjJsTD2MHwLhb4QL3n
hukZx+fW23LSg7n6Ga1OGBNLuyRtV5/INRpZGhsXJYKSao96ps6Ny0EbWgDZIvTfZj7TnoIb1/2P
/AwohZ5RNz6gNup94HksYFrMf42dSx4NQ8RwMAobNhjSR2hs5WCndYH4JzL2TWtch8EHM+1M65vi
27asy7LYFpAds1cAAAD//wMA3xmId0EDAAA=
headers:
CF-RAY:
- 93bd468ac97dcedd-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:26:58 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=RAnX9bxMu6FRFRvWLdkruoVeTpKeJSsewnbE5u1SKNc-1746584818-1.0.1.1-08O3HvJLNgXLW2GhIFer0bWIw7kc_bnco7201aq5kLNaI2.5R_LzcmmIHlEQmos6TsjWG..AYDzzeYQBts4AfDWCT__jWc1iMNREXvz_Bk4;
path=/; expires=Wed, 07-May-25 02:56:58 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=hVuA8E89306pCEvNIEtxK0bavBXUyyJLC45CNZ0NFcY-1746584818774-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '267'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '300'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999769'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_9be67025184f64bbc77df86b89c5f894
status:
code: 200
message: OK
- request:
body: '{"input": ["Brandon''s favorite color?"], "model": "text-embedding-3-small",
"encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '104'
content-type:
- application/json
cookie:
- __cf_bm=b8RyPEId4yq9HJyuFnK7KNXV1hEa38vaf3KsPaYMi6U-1746584818-1.0.1.1-D2L05owANBA1NNJNxdD5avYizVIMB0Q9M_6PgN4YJzuXkQLOyORtRMDfNCF4SCptihGS_hISsNIh4LqfOcp9pQDRlLaFsYpAvHOaWt6teXk;
_cfuvid=xH94XekAl_WXtZ8yJYk4wagWOpjufglIcgBHuIK4j5s-1746584818263-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1R6SROyTLPl/vsVT7xb+oZMUsW3YxIRkEJBxI6ODlBkEpGhqqBu3P/eoU9HDxsX
QIAkmSfPOZn/+a8/f/7p86a4z//8+88/r3qa//lv32OPbM7++fef//6vP3/+/PnP3+//d2XR5cXj
Ub/L3+W/k/X7USz//PsP/3+O/N+L/v3nHy10PsRPcxuIYz6sIG03O6RXYmZOpt0pqpupCN1ta2+K
2T5U1Gg2d8TiXAzYNTdb1V+PVxL4y4exz8UvwQOkRxSSJ4tWZ2tkKpuTicTBlRsXtnHarSCuWzyr
8Xukk9e3ahtwIUF5NIA1fk8ZqD/yndw55DQYvzVRNRuDD/Jwh/O1q+ZYwffBJUfYrWDZhO2qCmzb
4QTlN7B8vB1VcQJa5DzSyqOKN1nwJKknZL74FSx8MrfwwkdHdAzUEqzqZ+tDDT0zooO9ZK6Orqwg
AmVAvE8yRvh1ppPagBvB8527NWyD7ETNW8fGwD6xCKcF5OCsG/sAOI+7t67vA/4br9M48GCun6da
fSsdR4xB2edrsb9RVQVaRyxn64IFtEuq0mU6kui6e0VLGm0n2LVGhA6R30UCfT4MYLEpRIaVUrYc
l2cB6+QCiN9yXrPcxmKAxZO+SCjT2Vv0MS1hpRUFySFXeVQ7nGVYB+cuKB/RPl/X4pRu7waTMTjg
1VwV99jD6+2iByUHhYYdz2kPa+0+EVQYMViot6/Vw7Zw8bIUCND94d7DPHy/CNo9p4adDpWv6p/u
HbB933hT7Wxt9TpoCJ0rvsqXOWl4NUFbg6T9E+a09NEAw3QwyYH7kIbGfG+oFPY8CmUjjeh4ag3o
hZNFTjuReYJX1zx4QfpGns3zDTUymMFFjUOyy9wLmDAeJpgnsYcuo1mNrJw6ET5AdkSaC9/N/Ny8
FFh212dwPTveNx6ZDCctGYnzOB4bCZ6UBPI62QWcHzgNW5owUU1NrJCOhiOTqtVp4ZGaJbHmuzi+
04vjqq/D1ibIPlYR3atSpjYFZ6Br/Nzn5HVOVjV0XQUd7feNSSF4WoBu6j3eEv+YSwsyeCilmwzL
NyPzhOwVBZBkRYSlQmzyCb5UGUqhYpPd8X0yv/Evf89HXnIS2Rw7eQ+54dIRywzNSIKnNYbf+KNg
N6WM3yw5B5zS8P/+XywQRYbPOyjxBgNtFDPABvh2spEYM28x5lVWqr6yNkbXi+GMVFhuCaSJfSX7
s+OZS9UfeHg5rw65Fk7QYAhOoqr49Qtdju/FZJ+LVapbRxVJoah3sKgbM4DZpSfol89DZA0h5Er7
iE58OkfUJ34N59sux8vWNjxh/xRsGMVhg5KLemjoR3NLyEwFEuNYJIBPyN6BJpsM9FSAZdKzJHbw
E2MPoYIeGsE+aYW6d7kr8atqYoT6F6iiTXAmaCx0wISsl+HeWc9ffHEBlbTAAEZOFWJvDb3heSle
VfvoBOQUG5to0Z4ght7n3gWC1Nk5aWRLho8T9yC7d44i8XC3Wugn7yrgztniMa24KzDE1ZlcrTjM
cZt/NGXoUIHs3ftp4t08JX/z86hx51HaZE2tajtfJ9dCbKJfvNXv/YgpbA5MDMGiwfi1iOSuWBeP
YfvWw2x73f6+hyfJykb7/T/0zJ1rJPDJq1Nr7xYhJxGXCBstcOHUEhMFxPEZ/w64EH7xErnKxhp/
9Q648/lMdro15ROQihKS8tqQo4iShu7VTQbrVrCIb7HFW5v50CuczBNitNrgkXP6EOFhe3fxBsJb
RLR1SIFd1W9k3ayK4Zjpraqn4Ib2KJibicXnXv3ia8AP9c1cEHYUqHBYxtDqH80SnkwLLpFcooP9
0UfpW1/wXDSYOA9Q52xfJL0an62JPIcvvjWiWCid29Yo2ZtOI9WHVweHRYjJbqqEkfFmV0M7pVYg
KaIb8VTzz5BGyMJL5+ueiI63CXzxnOziyTCla252Kt/tHmgHJ3+kurQGanHwHeJowyuX3uRRwM1o
GsSysJTTxLxjmHpNTHQeBM38q4+jNpXk3JGIsbHDBrRswuFyaDW2ZvtUUeNBCIgWH/VcFEBoQb2t
ECl6aOW8cMI9tPEYBQCfVDZdYiGGEGgpcu/CJl/1QexUd3tPyQ21sbdYV1NWK+1ekKc3XjzJsEyo
lpI2B3RoNSBs7gcOVJvlgfL7bgHk9OEK2HJcgQ79o2lGon7OimKzzbcfnpn4CF0bLsfYQDegvMal
3WkdFPyPgrdC8xnJ5sQC9W4sMnL85yWnwnKKVXsTV8Ra3q5Jedze1UKuHGJvgsFk2GxXOLbmGx36
c2hSKuUywDcqkdPqtw1VroczcMfhQdwdLke+/gQxuIz1QOwmaDwpvxwMsL3eN6TIDDOfe3iy1E8Z
60RTdNMUkQ5LOCVHhbgnbHkC3LNW/eIdbrK6NakzJSt875qUGFfhDMRL1UN1e6+vyFW3hrd21SuB
xzf0AmixxSSrWrvqvg5M5JXhLhK440YBuK4y4g78sWHsE/fwx29uFYgAf8g4F+rHqCHWM3Sj5UGX
Mzic/SfJb8mbzTFvW7AXGp64G0Xypq2zCQBnD3dk8dV7pMpz2/3wCcsvuuZTorSxOixSTJwnvEYS
K6cYyrdBxJBDfcNSbjfBMb0txOvuR8b83OFAROSUhJ/IBbz2KDM1CdsTikvM5WtIc1GJEUqRkxVm
s6Kt5UAhjSiyyu0CyBXhDJqCvP++7ytipk1FNRILEWlH8WVK+WMbw6tBWxL3Vestet37cLOtPbS/
mo/mh9dqe8UMrxpYvSWjcgqtoP+QxJlCb5X0ewG+/REdnsTz+lAZOnALMgFpppSMIu/LFDrD6YXX
O9iac3avZbjKJxsZhY88Atd9q2KPg2i/0DaafOUzwVTrZqzozWwOizwkcNdMBQkFkYxLUs936Nht
jpwPeAE6PvRAJZw4Eo0e+3ztZreEW0ZNlNunKF9wWKbQn+J9cDOJZeINUWvloeANCZZYGNngxz5c
RpUPBNddc9zNbg1fC68hC+QWE2tuOINtgO/IaAQxWty+C2Eevl7EnfFofutfg6vneIHs6XouRCeP
A6/y2ZKDTm85e5hlCnmZo+iLb9FyEVcI90l7JLYmad58Hu8yvIGKIsPwHZMeMtGBab1fgx9+0Zz/
UNiLR4Pku4Zrpv7hxODHt8zJQ6YgzSUPb8gyg0tYBQ29+UcFHvTqRFIo2EzafwAPOnyaSGQQhS3c
cSMDg2kdik930mDh7bZgErITCr71x1tC4AO0ZClxz6ddxHb6FcIMIp/YUTdH81wfDOgM0QsL9LAH
4tBGtvp8UkauerYyTM4sVkXU5HhrDQmgLD4P6o/f/fCe3zWvFl619olSpo+A//EDpaEnct29umiF
hO9VoxiOxJht6jFVbSlsm/iDLlb1jfdqBrCJpTPabThpXGitywrmDgi/tugImGGZnCqFsk2MvYDy
5X1cViXmww3J9aRkQhzJIuTmYItF+NRydtaorFrb9zPgL+/GFGh9kJVXmqFAufpXwMdqmijJuktI
npwSIOI2hyDbXrbIvHtjQ0/lbYB9bBDkjrWRT+Yhp2DsZIU8jpMLpNc5obBa+wV58k4GS9FrHBT4
lidewYNxiW7wDAnHjwRVw9tjgp1n21er7Uhehq98KfHxDGXPm0ngB864Zlt7hbdXX5Hd5516bC7r
DnZbUCOX9v3Irg9aqJJpewTxug1EfR7vMNyf3iRNImouKCh8GM9ThdeN8MjJgHRL/fIPdNzLBKyJ
Lp6VrbMRkQc53RT2T9WCZ/O9x9tvPq+Jzp0Be9YLcgO+a5ju8yGshNJCD1ZOIymnjodhlXUEzU/J
XBJsDEBlO0CCV7yaU2X7PiyeuyvJrMfQLHFERTWtdyu5e3Pw49cpTN98Sg5BxHmTkfGZim+rRAL5
1jWUjuIEv/gdwJKlJhnMRoaiEIboqqu8t2puFav34jZjNVkpY8X5osHayyMsa88T6G+BQiFejAYF
0bj3/urDbz4FlMRPRp/jmihavguRMfMtGILN/hvfbY3ubT7kL2WAPNyNbEX6V+/hWf4UIOHhB7lS
vQOL9rDvQCyMlljvPh/xLd2WQDc1l+QFn4+L+ZIG9boBBab02EfsroHyL58/AP2R85Ey3uG4Ge+Y
aXjH2E5/cnAjDjI6ikgc13TIHHh7DRXa11HtkbZrz+pciDq6bDipmQYpc2C8ja8ohEAbx+/3hXz2
eAdLS68Nc3zOgJzJDnjrnz8mcdLcB9ne64KtUWCTNbKvKLMZ+MiXOz2nnaYP0FFFmzieXuWMv0ID
DI8CBPAp76J1IasGw4x30F5xhBFf6aOD6T2okcl2fkTts89BHPDo10/ZKs2BDGXODNAxQL4nlWYV
qnPgMmQN9dac194O1MSgCTm665z3lZqL4NJ/rmR/ysR8Bu02hWnyhEiv9eO44tC+w6cejZh9+bRQ
4mMIX53vojztI3Oh9UGBFPNbdDFZ4rHc5jjIl0FAPFJHYG1S3oWBnzQBj10vYqp2mtRXmiJUfPn+
mnNlDUue71Fs1EGDNbdKgH48Nd/+swPzoyY+9NIlJZpMZ3Ot328F4mu3Ek+0zEZwPNIBUQsG4juc
Nf7w8McvUbB36og2IwjgpXltvv2g9OiY3WP4qtyVfOuRkVHtMwiTQcMrvtreeqWPFn7u7EkQvfHj
rLr9/ZefxMl1i0m/fPniAXH15mjS0t8PMMvwHgv3avVw+B5r0LVahAzVjppBF/MV5pZKvveLG1rZ
fgBj8WEhm5iRJ3RG3MHJqGkAo0s5rqJrUdgc0J1YidtGsxVfNSgYaYU0NthsbS08gIPenLCQyW4j
gauwQnWqHiRpFivi1fgSwmktOWIb8sVkNtmEAGBiB6FlDDl9ZCEPo7c6o+Ag+ZEoW3oMY/FpoeAV
n80v/8igAwY92IwRzrFyGURQiFeT+PVqAOGgBAosF/VC3MwW8l7Grxg+7PiNUC5OHjW6WweV7afE
KzvLY1/zTakmk2+jXOt9b8E7z4cI+RY5Z3o+kvowt1AY+RNyK6+LcETvGIRn8CDGp3gAelhoDd54
4+CJz48Md9WcQMm0PHRw7TJfLkYawPCoHUl4E51xfDTq8KsXTENjZsumVUJ4FKhNnvNT8uiLnzKl
3Moaun/10t/3cfvFRdZ8T8a14KpMFXeh8fMHwCLfNV/VNVciewn6jG4dKVA5YXMnpnASzOnLXxUh
DO4BhceR0clhGM7kIwciC7pmDQolAd2ucoN5K/WMBjHrIFdax4CFnuMxon5C+Mq6mPzw5qdPFeua
+8ES3K6MteTeQUEnEbK+/RPDtaqhI4D8y//EkYGrQGEGjz666GvlLQFvxSpI3Q+WLkbfsOG6zZTt
66xgSZY2I50Neob9wdr/xfvv+wVqXoUMXdRMj8SvH/jLZ+Qcj4PJnz46hEnYnb76rjaHiSEMwirt
8DY9T+MCbrEPJyE9kSh6MG8KynBV7cgsvvxWHZfzGCtwkV4D8tx7xyjGNVbdfbhDN0BWb0mCJ4T8
6AhY/NbD4vLmXX02Dw6rc2yZxJuqTj0RH6Nbut+a9Oz7LqTbpUIet0lMQT5yDpTeRYPpU3qM7Omf
OrhgYU+O1AzMJb5XjvrYjbcAPrbZSNORdmp43j4IGujRW5NUHqAwiifiUw552NEVCiHJaoIK+hlX
zx0hPNIJkwNRDHMdCA3lc22F6Kye3h7zKj+DrwOwv/6mFy2fgRYqKIGPtx+tBYv52vSKvgsQMjnl
nq/mXk3gvvbNQIXdyr71k0Bx5J9E/+q51XMbDrrociM/fCf5Rdf+6ttD/ekbxi7IBSeXT0j6SHVT
ZAIXAwf0OkkMWTAXttE66A+Oga7ffMLuXrIgbZQzOUZylPOXWEigtObe14+bGWvM2oHTh+bkHtzl
aN4/BUt17C4PODlgJuGbWwAbsmyJ230ykzm+qKnK0vroDtnGXDcgw6DxLYLulCMmbq2uh+9pL+DY
SQ6juH2bdxioQo98bC3jHJ5MWzVewYiM6qR5K+HmFdj7vP/y897sm8aQwY//m9GDmdQgugxfloSR
K/KLSd3Dowblsrng5C7tzeXnZx5Zp+MTRz7R6sxz8IsvKr78Y6WX3odffR5sHcHNhWiHKfzqE+Sm
uzKaVdZpCr3kB2R3ow6o+9ECWEn4gPbFiJi0XIG79VKWIstJDg3lFX1QfUO+In+Ueybwt3Px80sC
enMYWL79FTjPWCX3XfVi86fWHLVL/JQ83uXHXI3bBSo/f8wvmWziX79JumcZvKJQjn7nwVcfBuC2
PZh8rTkUBm/uRnan1WpEWZ4sSBpfJI7XGs1S0dWFP7/7UA4qYMGhT/7i+R6LNsPPIVeU6oa0QG7a
T7OYfVvDYm5LcqzisplyNbMh3u4lFNjZPNJt/5HBT997t2QP6PbhYyguMwnWr34Sja3Wg1+9G6eP
G61VXrp/+aHBe5W5+lZZq2jjn1F04EdAxf5jgJ8etV7WK18Eusm2pLMOyBss7dcvWvidL2Dl6kts
5dKwhcoms4NqHHg24dea/vX3dSI4Ed0sOYTn4dAGbbzF+eJCsEKgBgghC47RmoW6r6KSuFi5KlPD
vvobTIX7wuI6vnIGmoyDwYvG5Iqd1RyfGp+pNA3vJBtrI2KSf52UzvhIAfvmH8ty6w4F6b7Hy6vw
o/HupxP44gVeluieT1//AYjCOURaEojRfMNmCS6bI0/8XXLOmSL1dyC97w0yNsIjouTUKjC6XyRk
BL7XjPXFpdB2uUOw7R/myN7mqEEnThN0dNdjxIeLEoIvf/j5n953njCot8P1gQzl+mleX7yHXSUG
6MgHVUOJA1twuD/lQFzUcJzSkbZq7k4X4kikbGi4rCHMlvML2Xuzb5Yz1qkqBcYaiO10NdmqqKHy
9d/xOtuhyQcxaKEZV2PA2gay7/kz+PrLxDtsX82y3a+T6pi6ggL5ZjfrqQA+OAtLhoyrsIIpfXQ1
GFv9jX75OS2KtkLj1gmYa5xPRKIGGFAPxR0ySX9sVqG9TMpPDxyATxl9v+igroV0+enniLd8p4ej
UdfE//KlhTtKMrxKvEvyg0cZuU2zDcEx0DF3iEeTlObnDIRt7yMU3+ZxahpDgew+FgHzdjqQuos3
wFXIGQlwJnv05u8U0AYwRPHPX4Vho8B3xd+xuFvXhp2EfIB0/w7IcX/B4+IEcfHjr+TXn7/40MPH
HYlIjwQ9X79+CDhYaUscctuOS+rpmWL2to90Pj3ma+qSAkrFpAWc+9JN9p0/qD9+f/j2V5Z11QAZ
QxDZm2XM1932UIKPfhjIUZ/eEeWuhqNa5ytP9s6zYXOTVAG8hK1JDOV6GJfyKGtQrUOLxGgXRaIZ
ZSXUNw+R6JWomOTLLwDYRQk5oDY2KfWqQqXnSsQdWfZMCDafGih++SLhj2/8/OoPu6rE8np+XGeB
7/7iz1OhtMGkCGu1vVOP7MRLGC0342ZDbtR4lE0PwVx+84X2POa4t8UZUEsbC3hDtvmdL1SMSP5z
gsknZiRqmztbbtisFVUWK3QIouLrh8V3db7tc/ylGc0iH0UXuoY/I4eea3Ntkn0Ibkt4JfoSwZwW
p7JUlwrnxAtDnf2d511vVx3tjbFn82lTp5D4Q0qOcLEjfvLKVt341TZQ50M48j6xapgomo6877yU
uo/GgE+L91GcVtooTZddBre9eyS7Kzebf/X1N/7IvGOfsRBcLQjCIxeI1Xobhbx3bKjvfER2UZjm
tXgWbHh82XIgBJcd4GtNW0ElTQf0zCOXSfY6OTBlRYBM80FG2uDSUtu9+UK691RyvKSbAZ6Su4+8
O+uaQSCrrGp8o+OXqhrjb97306OBRGMF4CEVFODubI3YL/Xo8bJi1PAFd1+LKzQjPlTqVu3criaH
5LMzv/rxrOYL1NHztd+Bv3yzw9GEfv7PGrgTD1fP9TD39fuWi+fUwLymXMCxSIvEdf/U4JcPIRS8
1maRw0ZTv/WJfn71Ejw5HjavskTpV+9JR8601Wve2Mjn8yOg7akO1e/9kasJaU4HKXPl6L2ZkQ72
V5OKD1NU8bVdySnaT834Ohcr3PQeIYZ3xt5X7/pQ64GHAkPFjLlvJMIp50rkHF5XtoSvtIVK6ubI
knYqoCigrvrXP5OINi6Yqj3cR1hB+5sKPJY0TIbXIr2hvGeDtwhh7YCv3sbiWi5s2fZ7CP/5bQX8
17/+/Pkfvw2Drn8Ur+9iwFws83/8n1WB/5D+Y+qy1+vvGgKesrL459//ewPhn8/Yd5/5f859W7yn
f/79Z/t31eCfuZ+z1/9z+F/fB/3Xv/4XAAAA//8DAHXQUXneIAAA
headers:
CF-RAY:
- 93bd468e08302506-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:26:59 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '140'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-678b766599-k7s96
x-envoy-upstream-service-time:
- '61'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999994'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_22e020337220a8384462c62d1e51bcc6
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Information Agent
with extensive role description that is longer than 80 characters. You have
access to specific knowledge sources.\nYour personal goal is: Provide information
based on knowledge sources\nTo give my best complete final answer to the task
respond using the exact following format:\n\nThought: I now can give a great
answer\nFinal Answer: Your final answer must be the great and the most complete
as possible, it must be outcome described.\n\nI MUST use these formats, my job
depends on it!"}, {"role": "user", "content": "\nCurrent Task: What is Brandon''s
favorite color?\n\nThis is the expected criteria for your final answer: The
answer to the question, in a format like this: `{{name: str, favorite_color:
str}}`\nyou MUST return the actual complete content as the final answer, not
a summary.Additional Information: Brandon''s favorite color is red and he likes
Mexican food.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1136'
content-type:
- application/json
cookie:
- __cf_bm=RAnX9bxMu6FRFRvWLdkruoVeTpKeJSsewnbE5u1SKNc-1746584818-1.0.1.1-08O3HvJLNgXLW2GhIFer0bWIw7kc_bnco7201aq5kLNaI2.5R_LzcmmIHlEQmos6TsjWG..AYDzzeYQBts4AfDWCT__jWc1iMNREXvz_Bk4;
_cfuvid=hVuA8E89306pCEvNIEtxK0bavBXUyyJLC45CNZ0NFcY-1746584818774-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFNNb+IwEL3nV4x8JqsQYAu50UOl7mG/JE5LFU3tSXBxPJZt6K4Q/33l
QCHtdqVeInnevOf3ZpxDBiC0EhUIucEoO2fy29W3zq2+L2dmpb4sFj+2X5dm80Td9qe8j2KUGPz4
RDK+sD5J7pyhqNmeYOkJIyXV8c3082w+nY8XPdCxIpNorYv5lPNOW52XRTnNi5t8PD+zN6wlBVHB
rwwA4NB/k0+r6LeooBi9VDoKAVsS1aUJQHg2qSIwBB0i2pPnMyjZRrK99Xuw/AwSLbR6T4DQJtuA
NjyTB1jbO23RwLI/V3A4WOyogrW49WgV27UYQYN79jpSLdmwT6AntRbH4/BOT80uYMptd8YMALSW
I6a59Wkfzsjxks9w6zw/hjdU0Wirw6b2hIFtyhIiO9GjxwzgoZ/j7tVohPPcuVhH3lJ/XTmenPTE
dX0DdHYGI0c0g/pkPnpHr1YUUZsw2ISQKDekrtTr2nCnNA+AbJD6XzfvaZ+Sa9t+RP4KSEkukqqd
J6Xl68TXNk/pdf+v7TLl3rAI5PdaUh01+bQJRQ3uzPk/CX9CpK5utG3JO69PD69xdTFZlPOyLBaF
yI7ZXwAAAP//AwCISUFdhgMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 93bd46929f55cedd-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:27:00 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '394'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '399'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999749'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_08f3bc0843f6a5d9afa8380d28251c47
status:
code: 200
message: OK
version: 1

View File

@@ -6,7 +6,7 @@ interactions:
accept:
- application/json
accept-encoding:
- gzip, deflate
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
@@ -151,7 +151,7 @@ interactions:
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
headers:
CF-RAY:
- 93bd535cca31f973-SJC
- 931fcf607c16eb34-SJC
Connection:
- keep-alive
Content-Encoding:
@@ -159,14 +159,14 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:35:43 GMT
- Thu, 17 Apr 2025 23:47:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=FaqN2sfsTata5eZF3jpzsswr9Ry6.aLOWPP..HstyKk-1746585343-1.0.1.1-9IGOA.WxYd0mtZoXXs5PV_DSi6IzwCB.H8l4mQxLdl3V1cQ9rGr5FSQPLoDVJA5uPwxduxFEbLVxJobTW2J_P0iBVcEQSvxcMnsJ8Jtnsxk;
path=/; expires=Wed, 07-May-25 03:05:43 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=CncSMPCav.9EJL3emmM0sTqugx5GN6_Oy8JPFBssVho-1744933673-1.0.1.1-Q1XMvHbQdrfEWkkCYeeNHwFdZ1NpjAGJ_0jOUYIk_APelFe7nCanjW_xlOj12b3JQql9.iWQDiHvCeYJDTWkdxnNiMQOEiFMYHX5YZXUuJs;
path=/; expires=Fri, 18-Apr-25 00:17:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=SlYSO8wQlhrJsTTYoTXd7IBl_D9ZddMlIzW1PTFiZIE-1746585343627-0.0.1.1-604800000;
- _cfuvid=unfPTYCpF5COtm5PuZDuaJqlhefP0iibfjsXHc9lKq0-1744933673515-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -185,15 +185,15 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '38'
- '75'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-6fcbcbb5fd-pxw6t
- envoy-router-8687b6cbdb-4qpmr
x-envoy-upstream-service-time:
- '41'
- '46'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -207,125 +207,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_39d01dc72178a8952d00ba36c7512521
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "Your goal is to rewrite the
user query so that it is optimized for retrieval from a vector database. Consider
how the query will be used to find relevant documents, and aim to make it more
specific and context-aware. \n\n Do not include any other text than the rewritten
query, especially any preamble or postamble and only add expected output format
if its relevant to the rewritten query. \n\n Focus on the key words of the intended
task and to retrieve the most relevant information. \n\n There will be some
extra context provided that might need to be removed such as expected_output
formats structured_outputs and other instructions."}, {"role": "user", "content":
"The original query is: What is Brandon''s favorite color?\n\nThis is the expected
criteria for your final answer: Brandon''s favorite color.\nyou MUST return
the actual complete content as the final answer, not a summary.."}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '992'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFJNa9wwFLz7V4h36WVdvF5nv46BQEsPpYWeSjCK9GwrlfVU6XlpCfvf
i+zN2klT6EUHzZvRzOg9ZUKA0XAUoDrJqvc2v/32+fTh68e7bel/UtXFR6/vKv+FP6191cMqMejh
ERU/s94r6r1FNuQmWAWUjEl1vau2N/ubTbUZgZ402kRrPecV5b1xJi+LssqLXb7eX9gdGYURjuJ7
JoQQT+OZfDqNv+AoitXzTY8xyhbheB0SAgLZdAMyRhNZOobVDCpyjG60fhuk0+TeRdHIEwXDKBRZ
CsvxgM0QZbLsBmsXgHSOWKbIo9H7C3K+WrPU+kAP8RUVGuNM7OqAMpJLNiKThxE9Z0LcjxUML1KB
D9R7rpl+4PjcereZ9GBufka3F4yJpV2SDqs35GqNLI2Niw5BSdWhnqlz4XLQhhZAtgj9t5m3tKfg
xrX/Iz8DSqFn1LUPqI16GXgeC5j28l9j15JHwxAxnIzCmg2G9BEaGznYaVsg/o6Mfd0Y12LwwUwr
0/i62BzKfVkWhwKyc/YHAAD//wMAwl9O/EADAAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 93bd535e5f0b3ad4-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:35:43 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=4ExRXOhgXGvPCnJZJFlvggG1kkRKGLpJmVtf53soQhg-1746585343-1.0.1.1-X3_EsGB.4aHojKVKihPI6WFlCtq43Qvk.iFgVlsU18nGDyeau8Mi0Y.LCQ8J8.g512gWoCQCEakoWWjNpR4G.sMDqDrKit3KUFaL71iPZXo;
path=/; expires=Wed, 07-May-25 03:05:43 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=vNgB2gnZiY_kSsrGNv.zug22PCkhqeyHmMQUQ5_FfM8-1746585343998-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '167'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '174'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999783'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_efb615e12a042605322c615ab896925c
- req_b8c884a7fe2bd4732903ecbdc632576d
status:
code: 200
message: OK
@@ -346,16 +228,13 @@ interactions:
accept:
- application/json
accept-encoding:
- gzip, deflate
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '926'
content-type:
- application/json
cookie:
- __cf_bm=4ExRXOhgXGvPCnJZJFlvggG1kkRKGLpJmVtf53soQhg-1746585343-1.0.1.1-X3_EsGB.4aHojKVKihPI6WFlCtq43Qvk.iFgVlsU18nGDyeau8Mi0Y.LCQ8J8.g512gWoCQCEakoWWjNpR4G.sMDqDrKit3KUFaL71iPZXo;
_cfuvid=vNgB2gnZiY_kSsrGNv.zug22PCkhqeyHmMQUQ5_FfM8-1746585343998-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
@@ -385,20 +264,18 @@ interactions:
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xTTU/bQBC951eM9tJLghITIORWVKFSDq0qoR5aZE12x/aW9Yy7O06IEP+9shPi
0FKpF0ueN+/tm6+nEYDxzizB2ArV1k2YXN19Xt/cVtnjh+2XbLH99W399a759HFzy8Xs0Yw7hqx+
ktUX1omVugmkXngH20io1KnOLubnZ4uz0/m8B2pxFDpa2ehkLpPas59k02w+mV5MZos9uxJvKZkl
fB8BADz1384nO3o0S5iOXyI1pYQlmeUhCcBECV3EYEo+KbKa8QBaYSXurd8AywYsMpR+TYBQdrYB
OW0oAvzga88Y4H3/v4SriOyE3yUocC3RK4GVIBF8AhaFpl0Fb8MWnNi2JlZy4Bms1LVw2AKu0Qdc
BYIHlk0gVxIkaaOldALXEgGtbSMqgedCYo1dP8fgFTbSBgcrghUlBRXA9PBiB5yPZDVsQSJY4dQG
hYZiks77Xh82FUUCrXw6Focat51sqjCSOzluU6SiTdiNitsQjgBkFu3Z/YDu98jzYSRByibKKv1B
NYVnn6o8Eibhrv1JpTE9+jwCuO9H376apmmi1I3mKg/UPzc7X+z0zLBxAzq/3IMqimGIZ7OL8Rt6
uSNFH9LR8hiLtiI3UIdNw9Z5OQJGR1X/7eYt7V3lnsv/kR8Aa6lRcnkTyXn7uuIhLVJ3kP9KO3S5
N2wSxbW3lKun2E3CUYFt2J2JSdukVOeF55JiE/3uVoomn55eZossm15Ozeh59BsAAP//AwAaTaZd
OQQAAA==
H4sIAAAAAAAAAwAAAP//jJNNi9swEIbv+RWDLr0ki/NBvm5NYUsplFK29NAuZiKNnWlkjVaSk02X
/PdiJxtn2y30YrCeecfvvCM/9QAUG7UEpTeYdOXtYPXp7vbLw7evm4ePmObvt/jrsVgVn9/5dbhj
1W8Usv5JOj2rbrRU3lJicSesA2GiputwNpksxuPpbNyCSgzZRlb6NJjIoGLHg1E2mgyy2WA4P6s3
wpqiWsL3HgDAU/tsfDpDj2oJWf/5pKIYsSS1vBQBqCC2OVEYI8eELql+B7W4RK61/gGc7EGjg5J3
BAhlYxvQxT0FgB/ulh1aeNu+L2EV0BlxbyIUuJPAiUCLlQAcwUkCX68ta3sAI7quyCUywA72bMge
AHfIFteWYOtkb8mUBFHqoCneXPsLVNQRm4xcbe0VQOckYZNxm8z9mRwvWVgpfZB1/EOqCnYcN3kg
jOKauWMSr1p67AHct5nXL2JUPkjlU55kS+3nhtP5qZ/qVt3R0fQMkyS0V6rFpP9Kv9xQQrbxamtK
o96Q6aTdirE2LFegdzX1325e632anF35P+07oDX5RCb3gQzrlxN3ZYGaP+FfZZeUW8MqUtixpjwx
hWYThgqs7el+qniIiaq8YFdS8IFPl7TweTZejOajUbbIVO/Y+w0AAP//AwA4a1/QsgMAAA==
headers:
CF-RAY:
- 93bd53604e3f3ad4-SJC
- 931fcf649bdbed40-SJC
Connection:
- keep-alive
Content-Encoding:
@@ -406,9 +283,15 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:35:45 GMT
- Thu, 17 Apr 2025 23:47:54 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=8vySwO0xqpm0u93_1_rXQwTeEIWa2ei3_CD5sAdoo3o-1744933674-1.0.1.1-iqZDpH5poOUp4Rcnhfrb0N2Z0c2662RBiPEcx_gefNW.m3tBA3qyFa8tmFv7PitH8u9vyYK7jxUwy4lPiSi830QWNbTMgCMTbrJ7iaUV7hY;
path=/; expires=Fri, 18-Apr-25 00:17:54 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=IXAyT8eWpFERM53ngcYNmaqhocfGbOHWSoe7SFNdoGI-1744933674288-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
@@ -422,13 +305,11 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '933'
- '489'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '936'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
@@ -436,13 +317,13 @@ interactions:
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999802'
- '149999801'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_0001c38df543cc383617c370087f0ee3
- req_151f9d0b786f2022f249ee20ea108b43
status:
code: 200
message: OK

View File

@@ -6,7 +6,7 @@ interactions:
accept:
- application/json
accept-encoding:
- gzip, deflate
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
@@ -150,10 +150,8 @@ interactions:
Mj9mH/7zuxXwX//68+d//G4YtN0jf30vBkz5Mv3H/7kq8B/if4xt+nr9vYZAxrTI//n3/76B8M9n
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 93bd57189acf15be-SJC
- 931fceef786ded38-SJC
Connection:
- keep-alive
Content-Encoding:
@@ -161,14 +159,14 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:38:16 GMT
- Thu, 17 Apr 2025 23:47:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=VGdrMAj2834vuX5RC6lPbHVNwWHXnBmqLb0kAhiGO4g-1746585496-1.0.1.1-kvgkEGO9fI9sasCfJjizGBG4k82_KhCRbH8CEyFrjJatzMoxhM0Z3suJO_hFFH13Wyi2wThiM9QSPvH3dddjfC7hC_tscxijZwiGqtCVnnE;
path=/; expires=Wed, 07-May-25 03:08:16 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=fj4RMXSXRDQjE2CFC6CGC3dVcJ8cl2Cbu8alijwMHA8-1744933655-1.0.1.1-M3c3AI4XQa.0GJoanNACuOm2aEL4xjqHR1grxIP3olFvq3e0eFHwQTvCF20YwR_OLiMJUH87eNUwgziawMccsxjR9OVZyDr5._5Wts6CrqA;
path=/; expires=Fri, 18-Apr-25 00:17:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=sAoMYVxAaEFBkQttcKO7GlBZ5NlUNUIaJomZ05pGlCs-1746585496569-0.0.1.1-604800000;
- _cfuvid=MSkpJsQZtdyIGvrl2mIwy0a_We8H6CIrS7etFgRBl2Y-1744933655703-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -180,20 +178,22 @@ interactions:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '69'
- '140'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-7d545f8f56-jx5wk
- envoy-router-84959bbcd5-rzqvq
x-envoy-upstream-service-time:
- '52'
- '110'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -207,125 +207,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_73f3f0d371e3c19b16c7a6d7cc45d3ee
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "Your goal is to rewrite the
user query so that it is optimized for retrieval from a vector database. Consider
how the query will be used to find relevant documents, and aim to make it more
specific and context-aware. \n\n Do not include any other text than the rewritten
query, especially any preamble or postamble and only add expected output format
if its relevant to the rewritten query. \n\n Focus on the key words of the intended
task and to retrieve the most relevant information. \n\n There will be some
extra context provided that might need to be removed such as expected_output
formats structured_outputs and other instructions."}, {"role": "user", "content":
"The original query is: What is Brandon''s favorite color?\n\nThis is the expected
criteria for your final answer: Brandon''s favorite color.\nyou MUST return
the actual complete content as the final answer, not a summary.."}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '992'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSy27bMBC86yuIvfRiFbKs+HVLUKBFL0YPRosWgcCQK5kNxSXItZEi8L8XlBxL
aVOgFx44O8OZ4T5nQoDRsBWgDpJV521+t989PX78tF/fnr5X+w/fdgv6WnxuWruzX25hlhj08BMV
v7DeK+q8RTbkBlgFlIxJdb6qljfrm2qz7IGONNpEaz3nFeWdcSYvi7LKi1U+X1/YBzIKI2zFj0wI
IZ77M/l0Gp9gK4rZy02HMcoWYXsdEgIC2XQDMkYTWTqG2Qgqcoyut34XpNPk3kXRyBMFwygUWQrT
8YDNMcpk2R2tnQDSOWKZIvdG7y/I+WrNUusDPcQ/qNAYZ+KhDigjuWQjMnno0XMmxH1fwfFVKvCB
Os810yP2z81Xi0EPxuZHdHnBmFjaKWkze0Ou1sjS2DjpEJRUB9QjdSxcHrWhCZBNQv9t5i3tIbhx
7f/Ij4BS6Bl17QNqo14HHscCpr3819i15N4wRAwno7BmgyF9hMZGHu2wLRB/RcauboxrMfhghpVp
fF0sNuW6LItNAdk5+w0AAP//AwDAmd1xQAMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 93bd571a5a7267e2-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:38:17 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=62_LRbzx15KBnTorpnulb_ZMoUJCYXHWEnTXVApNOr4-1746585497-1.0.1.1-KqnrR_1Udr1SzCiZW4umsNj1gQgcKOjAPf24HsqotTebuxO48nvo8g_X5O7Mng9tGurC0otvvkjYjsSWuRaddXculJnfdeGq5W3hJhxI21k;
path=/; expires=Wed, 07-May-25 03:08:17 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=LPWfk79PGAoGrMHseblqRazN9H8qdBY0BP50Y1Bp5wI-1746585497006-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '183'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '187'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999783'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_50fa35cb9ba592c55aacf7ddded877ac
- req_dd3ef61c4765b46ed7db80ddfe261f41
status:
code: 200
message: OK
@@ -346,16 +228,13 @@ interactions:
accept:
- application/json
accept-encoding:
- gzip, deflate
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '926'
content-type:
- application/json
cookie:
- __cf_bm=62_LRbzx15KBnTorpnulb_ZMoUJCYXHWEnTXVApNOr4-1746585497-1.0.1.1-KqnrR_1Udr1SzCiZW4umsNj1gQgcKOjAPf24HsqotTebuxO48nvo8g_X5O7Mng9tGurC0otvvkjYjsSWuRaddXculJnfdeGq5W3hJhxI21k;
_cfuvid=LPWfk79PGAoGrMHseblqRazN9H8qdBY0BP50Y1Bp5wI-1746585497006-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
@@ -385,20 +264,20 @@ interactions:
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFNNb9swDL3nVxC67JIMSZo0aW4ttmI77bIO3UdhMBLtcJVJQZKTBkX/
+2CnrdOuA3YxYD4+8lGPvB8AGHZmBcZuMNs6+NHF1Zc7Xycnp/Rhdv3x2/fL82r+9Tr/uDrxn8yw
Zej6N9n8xHpvtQ6eMqscYBsJM7VVJ4vZ6Xw5n50tOqBWR76lVSGPZjqqWXg0HU9no/FiNFk+sjfK
lpJZwc8BAMB99211iqM7s4Lx8ClSU0pYkVk9JwGYqL6NGEyJU0bJZtiDViWTdNI/g+gOLApUvCVA
qFrZgJJ2FAF+ySULejjv/ldwEVGcyrsEJW41ciaw6jUCJxDNEJq1Z+v3cCu6E9AIuEX2uPYELGC1
rlU60JOrCJI20VIaAiYIFJO2zUKkkiKJpQSeb+lVrwQYCfI+sEXv9xAibzEToLhukC3GPezYkd8D
1ioVsDjesmvQJ9hx3mhzpDRtMJIDllJjja1/74/fKlLZJGz9ksb7IwBFNHf5nUs3j8jDsy9eqxB1
nV5RTcnCaVNEwqTSepCyBtOhDwOAm87/5oWlJkStQy6y3lLXbnK6PNQz/dr16GzxCGbN6Pv4dDIf
vlGvcJSRfTraIGPRbsj11H7dsHGsR8DgaOq/1bxV+zA5S/U/5XvAWgqZXBEiObYvJ+7TIrVX+a+0
51fuBJtEccuWiswUWyccldj4w62YtE+Z6qJkqSiGyIeDKUMxPjmbLqfT8dnYDB4GfwAAAP//AwA/
0jeHPgQAAA==
H4sIAAAAAAAAAwAAAP//jFLBbtswDL37KwhdeokLx0mbOLd2WIAC63YZdthWGIpMO9xkUZDkpEWR
fx/kpLG7dUAvBszHR733yOcEQFAlViDUVgbVWp3efv66LvKPRd7Q/f57vtaf7m6ePjze774sv23E
JDJ48wtVeGFdKm6txkBsjrByKAPGqdPFfF7MZtdXVz3QcoU60hob0jmnLRlK8yyfp9kinS5P7C2T
Qi9W8CMBAHjuv1GnqfBRrCCbvFRa9F42KFbnJgDhWMeKkN6TD9IEMRlAxSag6aXfgeE9KGmgoR2C
hCbKBmn8Hh3AT7MmIzXc9P8ruHXSVGwuPNRyx44CgmLNDsjDRnd4OX7GYd15Ga2aTusRII3hIGNU
vcGHE3I4W9LcWMcb/xdV1GTIb0uH0rOJ8n1gK3r0kAA89NF1r9IQ1nFrQxn4N/bPTa+Xx3li2NgI
LU5g4CD1qL5cTN6YV1YYJGk/Cl8oqbZYDdRhU7KriEdAMnL9r5q3Zh+dk2neM34AlEIbsCqtw4rU
a8dDm8N40P9rO6fcCxYe3Y4UloHQxU1UWMtOH89M+CcfsC1rMg066+h4a7Uts1mRL/M8KzKRHJI/
AAAA//8DALRhJdF5AwAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 93bd571c9cf367e2-SJC
- 931fcef51a67f947-SJC
Connection:
- keep-alive
Content-Encoding:
@@ -406,9 +285,15 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 02:38:18 GMT
- Thu, 17 Apr 2025 23:47:36 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=7agwu5JV1OJvEFNhvfvqdgWf.HoMyIni9D85soRl3WE-1744933656-1.0.1.1-dKUwAZnjjuuiswFKWGsxpwHNBJUpjhYlZvfZpyNQIejxEJrXMCppgPvtQ9wa4SKezLmKqftvn_H.bAx_AEFJD2EWm5V6R_uK8.odneErR6A;
path=/; expires=Fri, 18-Apr-25 00:17:36 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=LdTrzwZYrB6ZyQLY7NdaaHVpDVFvIjYm3arSpNy87wU-1744933656504-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
@@ -417,18 +302,14 @@ interactions:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '785'
- '540'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '931'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
@@ -442,7 +323,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_9bf7c8e011b2b1a8e8546b68c82384a7
- req_8837be6510731522fd5ac4b75c11d486
status:
code: 200
message: OK

View File

@@ -1,333 +0,0 @@
interactions:
- request:
body: '{"input": ["Capital of France"], "model": "text-embedding-3-small", "encoding_format":
"base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '96'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1R6Ww+yyrbl+/4VK+uV3pGLUFXrjbsISKEgYqfTAUEEBeRWQJ2c/97Br/t094uJ
WIZUzVljjjHm/I9//fXX321a5Y/x73/++vtTDuPf/217liVj8vc/f/33f/31119//cfv8/9bmddp
nmVlU/yW/34smyxf/v7nL/a/nvzfRf/89XfIFDaOu8OtX0ttiOESiQE+pXaRrnxot/BYFAy5kEUP
ONOCJaymU0mOVijT9bPeH/BZXg4kYptvv4S7/QQztojx/ay+Ul5kFR8F+9XEUeizziKFvQ+uIr4T
E107h77ZSw1r9zPiUCqSYL4SxUbv5usRy5ZLusgkecBnVXUYOzGbrs25voiDG1c4vEhCTxHz8GCW
BAdvL9+sngv6iwv3uxDj7HWgPUkXUIBvFy3YnkpJo7AQIvipwoIEgINatQMnCZbG54EP4NtQGgm1
j84HJBDbhxeNI+7DBsboBuS5G27VeLUbGV4TmSEyXDVNYO2qg963GbCJrrazpK3iIk9XzQn1/aGa
HvHpDatPvZDj9a5UvL67PpApmxGO6maite0dfTjujIfHLN4F0Dun8IiTkpSc0vyuLRl9hRAz+on4
10ruubG6mchYruLEKW2rfYOdyUKFSzKs4XeRsjXNYsmeAMVeXOk9n7hjDu9xnGNTZ57p3DVSDWr5
fMXpOdNo4zDGG1m7/QtHFVHoPOe4gw9gDliBlhTQ8z7hRY3aLjneZ4Uu8u3TAfe1ODjU9lrFPfww
hvez5uNTtZcBx1Zhglg5CrF231NAhv5ewND/BCQ5FX2/CJdjAW8sKYkyly7lHFe2AdR9HZ9Jc6EC
8O8qelzOmcdoegrac2xJ0KLvI8be8UXp28I+fJ/uiCQH4ews2KhKpFTMwxPPo5S+M+Oxh4esr7Hb
TDigi6VIaPf8aMS6GVol/PItMgbfA8z3HghaTguYSWJFrmr1AovFCxN6lWOCH8Zy0Cj3tG24rysZ
n9r61c8ncz9A32GbqY46xeETxDNIpUswCWHyqRbOfMVIaG0bG4uy9Et2iGSYhTeJqEEyARpwbgEt
fzpMxzJXU9bImBBMrPAicq0fwCyP9wiGaWsRR481jeNRCZHD8pG348i7IurdyaE9iRTHha1X0yeE
NbicCPCK19j063LnYvSxK5+oxyWshHE3y2h/CD/kfrjXdEaVFIL3y9fwbQWHgApiPMGnM2U4gotf
rZ/1nKNkskqcWfRTrXUytNBlfAWr5VD1rak4ETSWm4iVsLxt+8tVwDlaOfWqg7XlfHY6cDjfPGIy
4EFJveg+1JsHxmnzzgDl35IHDwafeKabnYLRmXoXamJ1m+ZoDTQara0OOTVZsHr21YoviMHCV9w2
Xhyo0Fma0VZh/dXfON9HRTV/7vMAFpFeJxbnZ20Rs28L+x0YiFcKNzDj21P95SexzvRYcVejYNHL
3p9w6l15ZzaGYQZWc7gQo/rMKQEr1P+s12KUOD+8gU/9ir193EcVh3mkwnF3eGDjcbS1LV9l8aiE
lbfU4hqMtzd6S8X3POFnBStnNPtbjcDbqLGm0n06n77BCkrCD948wBDMDjY8+L7vWRztmSPoLobs
/fCAaLN67eeMnxKYHFWJHOpFTvlQEfcwPpWLR7+T6nBPHQ9S9MI6ceprH0yrM/OI8IDBTs6jdNu/
DN9WTr3yzmgaX5ATD3fr0/R4tjlWbCmuMlpPOSTqnak0sgPGHo6vm7bhs619j3KhIsKLzDT/8EeV
1hpe4nkhF+HtaLPNtTnkXo2LtUA2KUfDD4SscAUY21nSc339iVEYXiSiAuAEfKu9BnSa9hl+HG4i
WMTs1aGT3p/w7z7wL7dc0RLgAeMrbKtB08gM7kdXw9nexVrfiZGOPs9uT3R6PmksFWUT1dXxhuWD
++l7duVnMGVqiPVdGjpcC+YHhF+uw9bK7bWFrcIYct54xGcd2g6b9t7lT71zGdpRKl2frhTXkeJ9
TEnVKDafDxh0yteD3rOp1rgp3uiZ2Tnxjq2vsVdrlKRLvC5Y9vRLT+tFv6DrpZ+JWoHIWVRpfcMt
HsSNn2oq3OsgRiqlAbaSl+1wVXSckWv6N5I43Q3wrfdkwHzTHyRWXyxYSb/av/wg3m6t6cJPsJTU
lDFxoE2rs6T3ywxD+rxO+8ukpUIowTesh9kh0fhM6dqy1IXtM9OxfBtdwKUjTuA8ePrEcrmiCTwT
STCTvSvOZOPjbPjeIvdFHeKJitivfqDP6G7CjBzN/KSNuyV4w21/WAuA3As69GPIvT4uPsx2oq1R
zQ+wegGbGLWJtCH83k0QNKHhzVnUpHzifh4ofz1PHmdKpdOuB30AeUY/3jc+Tz2NKrH9xYPkr7Oh
Cca7f0PXOb2wdXocA5Zw7AOBV6wSP9bHtPM5wkrIHQdispqv0fNBidD3XJUT3zHYoW7nQyh809Hb
H42vI2QobaEa+0+c4XB11putTSA5yhLxrDFKZ41QHd4WDXit8S2rqTo+HvA1zphoPP4GRHncV4jl
YzR9g2oE81l+mGLTgpToVcqAWc7LPUpQhDzY2FfA8rkO4Wjw2OOy+FEtmjBc4GMML0S1StXhbOH1
RkrTavhxApPGqcevDleddYhy+egBOUmCj2q3GT3JSz/BELGi+wd/5LfHBvRY7WTYIQETXDSPlN3w
AgyJhXDEAEgH7RTLcOc5KjlcTueeFtkl+lMPsEISh33czh26z7gnp8K9g29/kic0WAMkh8WgQBh0
5gKKy74ix0vLassrjy5o6d4SkfkP0eawuEyQu5V3YufKBcyXbIboy6bvaTl1di+8IzRImSWY09kl
PhAI/A6QdbuBKLcG9QMPvrzQTt8nztr6VbHzwJhSFKcslmP9lAp5UXQIjFPiOUTrwWqpX+tP/mtc
iPvJCZZQcly+J9rXtirhtexZIJhVTzb8C9aIGy1YfS8x8T7tJ2gMl1/h/az4ntBbRb9AtS7gEoEA
O80bgd/9hnyvCuQQJp+eWnp5kXhcatOeQwZdNHy2YWsZPLbXz6tarvfrAz1hKGKv65j+veEV5BhT
nahzKpxZqqMVupNRYVvMvJ7/Sv0FbnhHrLpfHHrX5hB+PE8gWos9sAiP3Sy1RnzBxzQK+3l9v2oQ
0uyKQ3RuANH8kw6/kxzjbOOXQmSGK/RVTCdwMqtq3Yn6/MtPbL1fOhBOr0uNBll38F2LngHHFZ4O
F/4wEcv4qpUQa14Ob2eh8Fgdvx1ivKs3xPWww7kIg2BelEcLLr4tT8vjyqRrVS/RH34h2+2gLexq
s7DjvHxipUJK6cangcZxM06uwSWgOHcvcMoDwXvkDy7o+ciZ4XqpP8RRHezwQ/JiID/bxSS6vaJ9
X/arQyKBPTGGbglG4SGscPceBhzQQ9ovtzf3hvU1QRPParO27KWwg/zJ2OHj1B6D9ZuZMVQiV8VB
9DgFg8a8CpAnB8YTZlvSfvkNtXlf4xv4HgArpW0BNzwm6l37BlRT8g4wnxudqslQg8mLAhOZgoG8
9cdHEsRDIKwumnapmPbzQJMJHoKaxcrGX3/4iMDpBYnmfc1KcLsYwvq4PxLtNA2UCF3ng+h10qdd
nRFap50aojJPeqx+TnPaSb26R8wcf7G/C0j1zbuOgXs3IB6bjExA90dBB32sF95Mj1YvbN/Rxk9I
Yow64HOQFKChzJngJv72q6r2rOSwbIQV5VgFg4dWH5k3scTu+q0Ambs3A1fR8vBt/I7BPBwOCdyJ
fEiMV6pro+d9JVCY94FYXzcDvaWXPkpFGeN4qY2eqyJlRnBfH7yPfmTpwOc6I92mMMJX5MQVbYyO
lcbbxd/yray2/V/g5/o2cZ7qi7a691eC1FoVf3yhmp+q+YbvCjUkQ/lXowa9qHA9PSBR7WFK5/KR
WX/qXYfYCfQ5m13gGLy/2FuPQz/o+rlD6mwM5NZbcs9zXlLD4/1IyHH+wIrGZ1aCA7l88AlWJFjk
OjQRMydfD5qDDFojYyL44+PO+lyDRa1OM7zlzh5rytugQh2QGEz6ycAGcvbV+sZ6iXKDuU/rwW/o
d+MbwJyHFWdfjnVmfgAJ+O4PDnHzB5cSK/AHwJeJOQmrPFakbNccJRwi2OqRo/G96Tykl1buvPHw
clK2ZVAJxITTSJ4chWAKqBPDTb8RV3zNdEnvyQq9em9h88GE/WrRtQU/fXXAg06Xj36SITeKb2Ie
P0Ww6GaygmOUO8RwzmUwd/xLRdM9GsnJrJd+0x8h2PALO1nTASp/Yhvm/JvDxySVwAgnkUcbvyWG
SC4O6198HlbOtcAGhiCYHkzBI2ZiGqw97RP9jNXThGlkn7APhKPDO8ESAeMqGMTK0Fwt+nOXgy6S
NSID4aiRXVyyUGVYi4RZ7lA+oE4CZRrJRNvPvEM13zChJr5u3szEVzoPV8SDa6Iy+CSpaiXgQZog
aOgJK2fz7FBWGX3QS7DE2vHm9iy+3VT49mJ54saPowmPq1xA4SSJZLu/gGbJ/EblmwrePn1UYCK7
eoYnK2KJ9uNb2KgKVLZZThyLVMEsj+fwd37TaxrELV8T9Yd3WJ51tV+b8+RDJN/DTZ/LgQAPlxYq
OU+J+Rk4rVX2hxruibNMHGNeAqGLqIys0rLIvTqCYH6xCgPZ83rFch+9wNAyXAmsTH16/J75gtWF
7gPmXmOTw+GLUrryJxPeB93A12BXgpmmvQtxpcX48OHNfnnqeIK/88eTH/XLcjR4KJbu5efngHm3
93kwOe+AeJirwLpOwR5BE7XE84vM4YmX+fAwV19vJ5lvSpMskyHoZIpl6B37ae/DCGqWBIjiiZlG
iywJYQpG6rH53ajmvBRXsHfPBGNmycDS5EHNFfYhmCStmOliGZMLW2Xq8UkJb9Wq3ezo5/8Qyw7T
fmzBPof5x2L/3C9CSz9H3rMQt/yWA77eKwWsDl+RHGuurKZz0/vQct45OZiaXa0JtSZo4zEmj/Rt
pbN/vOSIUZ8L0eXqQckS7UNkZfKThPdD3xP541uIpHebeIOwVpueV9FTv2FsVB8/ZfscenCPUYST
p2AGwtwNDEj1ycWHwOpTOuj8Bb7qu4LtyXsFlG2tEpYscyVHxaXOUvrnAaaXR01wRnC62rtklTa+
hR1hGdI581Ifci2TTNLkKcH8uJ1bBKvHk+SPqA8oLeMc6L6h4OTerRUlju8iIn0+5KkQyVm3+waR
LVv48PMn9kfNhsGS7DF2A+/P+6Dn6fuJ9xqxmupvasKyKsNpOZtnjQaXVw43vj7B6kaccfN34GGI
NXw5LmxFJLDm6E7BBePzxwros/Yn1LRiSix/jyk7MRIPzVGzscmVWspl+6aAnkk6rL9V2elEET+A
wWkFUTAbAbI2UQdTLmwnfsOrmR9oAn5+1MZXNYrIZwbnqSqwBaq4GqD8SX7xxNiBI/3hF6yk4U4y
G52BUAdNjN479eEtHybu1+SdXODiFxYxjF2bzk3UxaAy44hc47HqOU6Q9rC7hA/y03NLt+/sP3h0
P9xNSoB/lmHER8MEnsvNmdeDPv38S2IEO5XysGFruOHHJLEO2087f+mQJl9sfCx0Lp2E9mVLDfz8
H3z9+Y+92T5J0N1TjRejdwKb2e+wB0eZCoo+MPBz3j9JnD4qOvO2HyI218epZ773dGaZvSVKTvbE
zmRdnMU2YxNsfi22FokJ6DU/1jDx4yOJDFF0Vv0hJbBdbwdyXIoo5cluWiHiISLKs/s4czHd9/B0
OjPeromPlaDloICptzJTtfEvfqsvUHbCK85pvNPWTI8hXKAMiBJBvRLq3TpA53jUsXWm34rW5ZWF
9sFsiCwUFqDrTsvBhic/Pzcgh1PRIl5/etNSlhdnLcx1QMHBabB5CWQgiPpeguvudfeks9CDGV01
CRJhrxDsxGGwgK8yo9N95TwmONvpli8xOuFBxUotrikVTkYsbviDbZsw2urm5A0vjnojxqbvOYja
C9TOk7Pxt5VOh+fDhrITXYlyMPlgIl52gT7lLwTv/XMw9zl0oV0HKsafMtJ6rRH3YNPHHkpz0SEO
c6rBFl8PHm4inU971UM/v3njc/1YKr0PkuQBMN78ofVjehbIh0KdOss79/RCwgeaUWdgIypwRa/D
Xv35H8T/HCetu+elBze/yqNB+OxX7aaG6B4OAlZrq3MWOC0s2vQrPi4Fn071/ljCULAhdkKxC4Tj
3Vp//yd4ifV0OUlaKL5cZ/akMI77OWsECAfxGk/DdQrB/LAqV9rwHpt6V1fLK899wIZPcfps/IGA
t9FC/eN+SSSKVT/sv6MJvOvjgI/bfRluJcx/+OsJwuVQkW1/8OcnRZ/hqpEhGAawOMmTyGe3SJfY
st9wjdGM5UEBThMOng02PMHpeIrTdZ1SCT7vuYCtpf5Um1/jwWU3lfiw3yf9eH5KJuSr73d68c8d
HZ5CfIGJ2EdeluC6H+7+bpB0r9thNb9qwdLg0Yab3+S9SLMC2j3F5Fff8LFdOUCYizTDyzv4bHpP
6ac6eXfwgFh92vCkWhRNVCWPNyti2/rH+cMHNj1JsNoFlHuN7gB/eu6w6c25a9Y3PPUePzEgrau5
H8sWfs+vkiiS0zgr2U3zD3+J+wVawLMiq4p5qitb/jt0nC+6h+BYnT1287/G5XhioWDjCza+Xgfo
O+ImOEGXkOweukDoxFwH0XNRsU+Pbb/A8wMC9TsU5K64VBuDR79CY/QCchIPLaBRtbTIL+Mz9oJz
l87v0OrgtH9Zf/g0a+CDBJ4NPhNN2ec9laPXgPKhVLE5JdeKX0x5RUNfM8S5srrDea1Xw+FCCXFD
CrRe5NUYNpa/89Ys+VYziBMXuJ5oEWVcdsEqioccNP1RmVB9P1V00BkfIvKZsDKadUA2fQ/XhcrE
PJNDwNvzOwTIJQO2ETvR9YVYH8nXu0WwA0+UZRIA4bQcpenr9i9tuJvHHAYnk8XWk+oOnYzWRvVR
OmI1HL7OWl4SHsTCqnprBXhtBfPK/PETVLK8gxXoTQt/9ce1rUOwvtxuhTrfdjgUOSXd+jUR4n1o
kNvVYPqhOQAIG9ioRFWYkfbV1JVQNdl8i9/ozBoBJpS6XvLKPnlr9BBHe9hO/dMT/UNTzbS7t+Cn
f8yBVakwn+UL0voZYVUyBNpMT52Hm77CrtAjbfzohiotXS0RXSqkYNz8Ydg4TE0co7lWa/SwHrAX
fEDOF8WmA2aphGqp/GAVnYyK/eknTG4zdrf+Bx837RvWt64izidZKpKhtIOWjn1vTU9cvw4OKP74
G27y9SqujqvyT3ycnM+C+aenLkXDTEAzW0B//mckqQGxxPuQbniXw7NPD9P+ZlT93BwoA5Rl4rFN
Mgp+fgtqC2AS75kuwXjlpQhKB7qfaIwkrb37wgCUnKWbuXTUtn5OCX563rt7WbVemb6Al0cee4Ji
zXRuojL+g/+O63zBuvVv4MjnPTGKwk6XJCgH4BhZNoHhwlS02KEYZucu8sQDklN+5y8t2urptLjE
p+MLExaQUztjL3T1YN09CxMpjOYRZ/emv35E9/N7sBeNM/0mQTdB8WR+seXga7pgoy/h7/w3Pkzn
PNQiKO/PNn7gRAl+/BSuzXTC0fgEdG57aw+as4mx7J/NXiidaw09XTbxTx99Gc/rwOPb9eTg3phf
P6aFy90WPenD3qtl67cARTR1bJH7uZ/rZ7GiD7u+iMWhD5imW9BB/l7zxFJq4AxN1CVQi/rrNOtp
nC7P1/AGSuFzePOHqzk9f99w81vxIXqfKf1IdvvTo+QEKxyspfSB0OpmER8SsDrLvm5l2Do0m3a+
eXXW2pxldLw7hDjbeQ9bPwKW7TPHbrC8Kirydgw7WUqJcUw9IJCR+n/8Gquw39Xy45PdyaNbf0yv
eNeWINSDucUGIUK6/PrV/bd4TWCrH6yUFgXqtM8O6+cdAtMtsHW4a5OAWN6t0pYqPUAoNUzk9bJS
/vRVDiRDD6dff41w4/wGh3Z4ELvwTMpLvSr9/GmibPySpgstkYZcxoNKYlLaRUCFW78HywNk6fLh
8lL6Puob1hP1E7TNATDQGTST2JE3ONRIPy6MmF3mwQ0vh7h9F1B+zBGJk5et8Y3bmpDgffTzryjX
vp0WTiz3msRVJ2CuVWcPHGXhyeH2dCrOlDwVdIXJEwdNu75Tzxcf7YziTDJ2bqu1Fx4Q/v2bCvjP
f/311//4TRjUbZZ/tsGAMV/Gf//XqMC/hX8PdfL5/BlDmIakyP/+539PIPz97dv6O/7PsX3nzfD3
P38Jf0YN/h7bMfn8P4//tb3oP//1vwAAAP//AwDPjjDU3iAAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 93c2407849cb943e-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 16:56:39 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=iRcThWdZ.NO0HPhZg.pUV1AiG0u.0Dkd58N9HGucKdQ-1746636999-1.0.1.1-Cswtia9bUNC0npExHV2GcZLT2MVo6tEQbFU_dsKpjNN5R3s37B6JGWTE1IIZV9V0UGLhiy04og474anpJW4c6yLw0.9q5F4MPcxtAOjwBvo;
path=/; expires=Wed, 07-May-25 17:26:39 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=rvDDZbBWaissP0luvtyuyyAWcPx3AiaoZS9LkAuK4sM-1746636999152-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '116'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-6b78fbf94c-z6prb
x-envoy-upstream-service-time:
- '123'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999996'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_3f67e7a1b90d845c25e9cef31147aba0
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
I have access to knowledge sources\nYour personal goal is: Provide information
based on knowledge sources\nTo give my best complete final answer to the task
respond using the exact following format:\n\nThought: I now can give a great
answer\nFinal Answer: Your final answer must be the great and the most complete
as possible, it must be outcome described.\n\nI MUST use these formats, my job
depends on it!"}, {"role": "user", "content": "\nCurrent Task: What is the capital
of France?\n\nThis is the expected criteria for your final answer: The capital
of France is Paris.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '911'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSwW7bMAy9+ysIXXpJgmbN0iW3DEPQAtswDBt2WAuDlWhbjSxpEt00KPLvg+Uk
drYeetGBj3x6fHwvGYDQSixByApZ1t6MP/6UX75vV82n7Yf17uvKX4dVuXn/Sxc3n2+uxKidcA+P
JPk4NZGu9oZYO9vBMhAytazT69l8fjVfLBYJqJ0i046Vnsez8eV8eiCUldOSoljC7wwA4CW9rTar
6Fks4XJ0rNQUI5YklqcmABGcaSsCY9SR0bIY9aB0lskmubcg0VrH4IN70ooA7Q4cVxRA28KFGtst
ACNwRcAYNyANYTA7iIxMXZ2ePUkmBYW2aABt3FIAtAqUo2gvGAL9aXQgQKV0y4hmyD+BW4iVa4w6
6ehoUfKR7cCgJnf2zq7TP6uELOFHRSDRa0YDroB1QCsJdIRvGHScDFcPVDQRW8ttY8wASC4kMcn0
+wOyP9lsXOmDe4j/jIpCWx2rPBBGZ1tLIzsvErrPAO7TOZuzCwkfXO05Z7eh9N10vuj4RJ+cHp1N
DyA7RtPX300PITjnyxUxahMHgRASZUWqH+3Tg43SbgBkg63/V/Mad7e5tuVb6HtASvJMKveBlJbn
G/dtgR5Tsl5vO7mcBItI4UlLyllTaC+hqMDGdNEXcReZ6rzQtqTgg075by+Z7bO/AAAA//8DAI9H
rN32AwAA
headers:
CF-RAY:
- 93c2407d5cfaface-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 07 May 2025 16:56:41 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=g.ZMxB8fB7ZSkwD6w5ws93pGGw6nEi3uFVh.JDp2OOU-1746637001-1.0.1.1-59mPPW0bDWyD6ngFx6m9LdHurrdN9Kaem.eFcKAwWp_H_4kabp2CzCRiEaW2QhRYYPWE6fZPgqWU8amQtZqpRZHtEjTyoL8t6UtyzyTCoAQ;
path=/; expires=Wed, 07-May-25 17:26:41 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=WOtfHIloFTPkupN1gC2z.3cExzObgfz.p4fXYpCK0aI-1746637001631-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2084'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '2086'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '1000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '999805'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 11ms
x-request-id:
- req_1ff0f0c079f8e7f5feb17fe762b5e40a
status:
code: 200
message: OK
version: 1

View File

@@ -16,7 +16,7 @@ interactions:
answer MUST contain all the information requested in the following format: {\n \"summary\":
str,\n \"confidence\": int\n}\n\nIMPORTANT: Ensure the final output does not
include any code block markers like ```json or ```python."}, {"role": "user",
"content": "What is the population of Tokyo? Return your structured output in
"content": "What is the population of Tokyo? Return your strucutred output in
JSON format with the following fields: summary, confidence"}], "model": "gpt-4o-mini",
"stop": []}'
headers:

View File

@@ -162,18 +162,8 @@ def test_reset_knowledge(mock_get_crews, runner):
    assert call_count == 1, "reset_memories should have been called once"


def test_reset_agent_knowledge(mock_get_crews, runner):
    result = runner.invoke(reset_memories, ["--agent-knowledge"])
    call_count = 0
    for crew in mock_get_crews.return_value:
        crew.reset_memories.assert_called_once_with(command_type="agent_knowledge")
        assert f"[Crew ({crew.name})] Agents knowledge has been reset." in result.output
        call_count += 1
    assert call_count == 1, "reset_memories should have been called once"


def test_reset_memory_from_many_crews(mock_get_crews, runner):
    crews = []
    for crew_id in ["id-1234", "id-5678"]:
mock_crew = mock.Mock(spec=Crew)

View File

@@ -2,19 +2,22 @@
import hashlib
import json
import os
import tempfile
from concurrent.futures import Future
from unittest import mock
from unittest.mock import ANY, MagicMock, patch
from unittest.mock import MagicMock, patch
import pydantic_core
import pytest
from crewai.agent import Agent
from crewai.agents import CacheHandler
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.flow import Flow, start
from crewai.knowledge.knowledge import Knowledge
from crewai.flow import Flow, listen, start
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
@@ -3138,30 +3141,6 @@ def test_replay_with_context():
    assert crew.tasks[1].context[0].output.raw == "context raw output"

def test_replay_with_context_set_to_nullable():
    agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
    task1 = Task(
        description="Context Task", expected_output="Say Task Output", agent=agent
    )
    task2 = Task(
        description="Test Task", expected_output="Say Hi", agent=agent, context=[]
    )
    task3 = Task(
        description="Test Task 3", expected_output="Say Hi", agent=agent, context=None
    )
    crew = Crew(agents=[agent], tasks=[task1, task2, task3], process=Process.sequential)

    with patch("crewai.task.Task.execute_sync") as mock_execute_task:
        mock_execute_task.return_value = TaskOutput(
            description="Test Task Output",
            raw="test raw output",
            agent="test_agent",
        )
        crew.kickoff()
        mock_execute_task.assert_called_with(agent=ANY, context="", tools=ANY)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_replay_with_invalid_task_id():
    agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
@@ -4404,165 +4383,3 @@ def test_sets_parent_flow_when_inside_flow(researcher, writer):
    flow = MyFlow()
    result = flow.kickoff()
    assert result.parent_flow is flow

def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):
    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ],
    )
    with pytest.raises(RuntimeError) as excinfo:
        crew.reset_memories(command_type='knowledge')
    assert "Crew Knowledge and Agent Knowledge memory system is not initialized" in str(excinfo.value)

def test_reset_knowledge_with_only_crew_knowledge(researcher, writer):
    mock_ks = MagicMock(spec=Knowledge)
    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
            knowledge=mock_ks,
        )
        crew.reset_memories(command_type='knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks])

def test_reset_knowledge_with_crew_and_agent_knowledge(researcher, writer):
    mock_ks_crew = MagicMock(spec=Knowledge)
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)
    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer
    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
            knowledge=mock_ks_crew,
        )
        crew.reset_memories(command_type='knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_crew, mock_ks_research, mock_ks_writer])

def test_reset_knowledge_with_only_agent_knowledge(researcher, writer):
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)
    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer
    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
        )
        crew.reset_memories(command_type='knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research, mock_ks_writer])

def test_reset_agent_knowledge_with_no_agent_knowledge(researcher, writer):
    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ],
    )
    with pytest.raises(RuntimeError) as excinfo:
        crew.reset_memories(command_type='agent_knowledge')
    assert "Agent Knowledge memory system is not initialized" in str(excinfo.value)

def test_reset_agent_knowledge_with_only_crew_knowledge(researcher, writer):
    mock_ks = MagicMock(spec=Knowledge)
    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ],
        knowledge=mock_ks,
    )
    with pytest.raises(RuntimeError) as excinfo:
        crew.reset_memories(command_type='agent_knowledge')
    assert "Agent Knowledge memory system is not initialized" in str(excinfo.value)

def test_reset_agent_knowledge_with_crew_and_agent_knowledge(researcher, writer):
    mock_ks_crew = MagicMock(spec=Knowledge)
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)
    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer
    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
            knowledge=mock_ks_crew,
        )
        crew.reset_memories(command_type='agent_knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research, mock_ks_writer])

def test_reset_agent_knowledge_with_only_agent_knowledge(researcher, writer):
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)
    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer
    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
        )
        crew.reset_memories(command_type='agent_knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research, mock_ks_writer])
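
Note: the block above removes the knowledge-reset tests. For context, the behavior they pinned, as a hedged sketch; the helper name is hypothetical, while reset_memories and its command_type values come straight from the tests:

from crewai.crew import Crew

def reset_knowledge_stores(crew: Crew) -> None:
    # Hypothetical helper. command_type='knowledge' resets crew-level and
    # agent-level knowledge together; 'agent_knowledge' resets only the
    # agents' stores. Both raise RuntimeError if nothing was initialized.
    try:
        crew.reset_memories(command_type='knowledge')
    except RuntimeError:
        pass  # no crew- or agent-level knowledge configured; nothing to reset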

View File

@@ -0,0 +1,155 @@
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor
from types import SimpleNamespace
from unittest.mock import MagicMock, patch

import pytest

from crewai.llm import LLM
from crewai.memory.storage.llm_response_cache_storage import LLMResponseCacheStorage
from crewai.utilities.llm_response_cache_handler import LLMResponseCacheHandler

@pytest.fixture
def handler():
    handler = LLMResponseCacheHandler()
    handler.storage.add = MagicMock()
    handler.storage.get = MagicMock()
    return handler

def create_mock_response(content):
    """Create a properly structured mock response object for litellm.completion"""
    message = SimpleNamespace(content=content)
    choice = SimpleNamespace(message=message)
    response = SimpleNamespace(choices=[choice])
    return response

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_recording(handler):
    handler.start_recording()
    llm = LLM(model="gpt-4o-mini")
    llm.set_response_cache_handler(handler)
    messages = [{"role": "user", "content": "Hello, world!"}]
    with patch('litellm.completion') as mock_completion:
        mock_completion.return_value = create_mock_response("Hello, human!")
        response = llm.call(messages)
        assert response == "Hello, human!"
        handler.storage.add.assert_called_once_with(
            "gpt-4o-mini", messages, "Hello, human!"
        )

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_replaying(handler):
    handler.start_replaying()
    handler.storage.get.return_value = "Cached response"
    llm = LLM(model="gpt-4o-mini")
    llm.set_response_cache_handler(handler)
    messages = [{"role": "user", "content": "Hello, world!"}]
    with patch('litellm.completion') as mock_completion:
        response = llm.call(messages)
        assert response == "Cached response"
        mock_completion.assert_not_called()
        handler.storage.get.assert_called_once_with("gpt-4o-mini", messages)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_replay_fallback(handler):
    handler.start_replaying()
    handler.storage.get.return_value = None
    llm = LLM(model="gpt-4o-mini")
    llm.set_response_cache_handler(handler)
    messages = [{"role": "user", "content": "Hello, world!"}]
    with patch('litellm.completion') as mock_completion:
        mock_completion.return_value = create_mock_response("Hello, human!")
        response = llm.call(messages)
        assert response == "Hello, human!"
        mock_completion.assert_called_once()

@pytest.mark.vcr(filter_headers=["authorization"])
def test_cache_error_handling():
    """Test that errors during cache operations are handled gracefully."""
    handler = LLMResponseCacheHandler()
    handler.storage.add = MagicMock(side_effect=sqlite3.Error("Mock DB error"))
    handler.storage.get = MagicMock(side_effect=sqlite3.Error("Mock DB error"))
    handler.start_recording()
    handler.cache_response("model", [{"role": "user", "content": "test"}], "response")
    handler.start_replaying()
    assert handler.get_cached_response("model", [{"role": "user", "content": "test"}]) is None

@pytest.mark.vcr(filter_headers=["authorization"])
def test_cache_expiration():
    """Test that cache expiration works correctly."""
    conn = sqlite3.connect(":memory:")
    cursor = conn.cursor()
    cursor.execute(
        """
        CREATE TABLE IF NOT EXISTS llm_response_cache (
            request_hash TEXT PRIMARY KEY,
            model TEXT,
            messages TEXT,
            response TEXT,
            timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
        )
        """
    )
    conn.commit()
    storage = LLMResponseCacheStorage(":memory:")
    original_get_connection = storage._get_connection
    storage._get_connection = lambda: conn
    try:
        model = "test-model"
        messages = [{"role": "user", "content": "test"}]
        response = "test response"
        storage.add(model, messages, response)
        assert storage.get(model, messages) == response
        storage.cleanup_expired_cache(max_age_days=0)
        assert storage.get(model, messages) is None
    finally:
        storage._get_connection = original_get_connection
        conn.close()

@pytest.mark.vcr(filter_headers=["authorization"])
def test_concurrent_cache_access():
    """Test that concurrent cache access works correctly."""
    pytest.skip("SQLite in-memory databases are not shared between threads")
    # storage = LLMResponseCacheStorage(temp_db.name)
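
Note: test_concurrent_cache_access is skipped because a ":memory:" SQLite database is private to the connection that opened it. A hedged sketch of how the test could run against a file-backed database instead, reusing only the storage API shown above (path constructor, add, get); the temp-file handling and thread counts are illustrative, and it assumes the storage opens a connection per thread:

import tempfile
from concurrent.futures import ThreadPoolExecutor

from crewai.memory.storage.llm_response_cache_storage import LLMResponseCacheStorage

def test_concurrent_cache_access_file_backed():
    # A file-backed database is visible to every thread, unlike ":memory:".
    with tempfile.NamedTemporaryFile(suffix=".db") as temp_db:
        storage = LLMResponseCacheStorage(temp_db.name)

        def write_and_read(i):
            messages = [{"role": "user", "content": f"question {i}"}]
            storage.add("test-model", messages, f"answer {i}")
            return storage.get("test-model", messages)

        with ThreadPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(write_and_read, range(8)))

    assert results == [f"answer {i}" for i in range(8)]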

View File

@@ -0,0 +1,93 @@
from unittest.mock import MagicMock, patch

import pytest

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.process import Process
from crewai.task import Task

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_recording_mode():
    agent = Agent(
        role="Test Agent",
        goal="Test the recording functionality",
        backstory="A test agent for recording LLM responses",
    )
    task = Task(
        description="Return a simple response",
        expected_output="A simple response",
        agent=agent,
    )
    crew = Crew(
        agents=[agent],
        tasks=[task],
        process=Process.sequential,
        record_mode=True,
    )
    mock_handler = MagicMock()
    crew._llm_response_cache_handler = mock_handler
    mock_llm = MagicMock()
    agent.llm = mock_llm
    with patch('crewai.agent.Agent.execute_task', return_value="Test response"):
        with patch('crewai.utilities.llm_response_cache_handler.LLMResponseCacheHandler', return_value=mock_handler):
            crew.kickoff()
            mock_handler.start_recording.assert_called_once()
            mock_llm.set_response_cache_handler.assert_called_once_with(mock_handler)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_replay_mode():
    agent = Agent(
        role="Test Agent",
        goal="Test the replay functionality",
        backstory="A test agent for replaying LLM responses",
    )
    task = Task(
        description="Return a simple response",
        expected_output="A simple response",
        agent=agent,
    )
    crew = Crew(
        agents=[agent],
        tasks=[task],
        process=Process.sequential,
        replay_mode=True,
    )
    mock_handler = MagicMock()
    crew._llm_response_cache_handler = mock_handler
    mock_llm = MagicMock()
    agent.llm = mock_llm
    with patch('crewai.agent.Agent.execute_task', return_value="Test response"):
        with patch('crewai.utilities.llm_response_cache_handler.LLMResponseCacheHandler', return_value=mock_handler):
            crew.kickoff()
            mock_handler.start_replaying.assert_called_once()
            mock_llm.set_response_cache_handler.assert_called_once_with(mock_handler)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_record_replay_flags_conflict():
    with pytest.raises(ValueError):
        crew = Crew(
            agents=[],
            tasks=[],
            process=Process.sequential,
            record_mode=True,
            replay_mode=True,
        )
        crew.kickoff()
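
Note: the file above pins the feature's public surface: record_mode captures live LLM responses, replay_mode serves them back offline (falling back to a live call on a cache miss, per test_llm_replay_fallback), and enabling both raises ValueError. A minimal end-to-end sketch, assuming the same constructor flags; the agent and task values are illustrative:

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.process import Process
from crewai.task import Task

agent = Agent(role="Demo Agent", goal="Demonstrate record/replay", backstory="A demo agent")
task = Task(description="Return a simple response", expected_output="A simple response", agent=agent)

# First run (online): LLM responses are written to the local cache.
Crew(agents=[agent], tasks=[task], process=Process.sequential, record_mode=True).kickoff()

# Later run (offline): identical requests are answered from the cache.
Crew(agents=[agent], tasks=[task], process=Process.sequential, replay_mode=True).kickoff()

# record_mode=True together with replay_mode=True is rejected with ValueError.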

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -357,7 +357,14 @@ def test_convert_with_instructions():
    assert output.age == 30

@pytest.mark.vcr(filter_headers=["authorization"])
# Skip tests that call external APIs when running in CI/CD
skip_external_api = pytest.mark.skipif(
    os.getenv("CI") is not None, reason="Skipping tests that call external API in CI/CD"
)

@skip_external_api
@pytest.mark.vcr(filter_headers=["authorization"], record_mode="once")
def test_converter_with_llama3_2_model():
    llm = LLM(model="ollama/llama3.2:3b", base_url="http://localhost:11434")
    sample_text = "Name: Alice Llama, Age: 30"
@@ -374,7 +381,8 @@ def test_converter_with_llama3_2_model():
    assert output.age == 30

@pytest.mark.vcr(filter_headers=["authorization"])
@skip_external_api
@pytest.mark.vcr(filter_headers=["authorization"], record_mode="once")
def test_converter_with_llama3_1_model():
    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
    sample_text = "Name: Alice Llama, Age: 30"
@@ -391,6 +399,13 @@ def test_converter_with_llama3_1_model():
    assert output.age == 30

# Skip tests that call external APIs when running in CI/CD
skip_external_api = pytest.mark.skipif(
    os.getenv("CI") is not None, reason="Skipping tests that call external API in CI/CD"
)

@skip_external_api
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_nested_model():
    llm = LLM(model="gpt-4o-mini")
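
Note: the hunks above introduce the same skip_external_api definition twice, so the merged module assigns an identical marker in two places; harmless, but it could be declared once near the imports. The pattern itself, as a reusable sketch:

import os

import pytest

# Skip live-endpoint tests in CI, where the local Ollama server
# (http://localhost:11434) used by the llama tests is unavailable.
skip_external_api = pytest.mark.skipif(
    os.getenv("CI") is not None,
    reason="Skipping tests that call external API in CI/CD",
)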

View File

@@ -1,80 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock

from crewai.utilities.embedding_configurator import EmbeddingConfigurator

class TestEmbeddingConfigurator:
    def test_configure_voyageai_embedder(self):
        """Test that the VoyageAI embedder is configured correctly."""
        with patch(
            "crewai.utilities.embedding_configurator.VoyageAIEmbeddingFunction"
        ) as mock_voyageai:
            mock_instance = MagicMock()
            mock_voyageai.return_value = mock_instance
            config = {"api_key": "test-key"}
            model_name = "voyage-3"
            configurator = EmbeddingConfigurator()
            embedder = configurator._configure_voyageai(config, model_name)
            mock_voyageai.assert_called_once_with(
                model_name=model_name, api_key="test-key"
            )
            assert embedder == mock_instance

    def test_configure_embedder_with_voyageai(self):
        """Test that the embedder configurator correctly handles VoyageAI provider."""
        with patch(
            "crewai.utilities.embedding_configurator.VoyageAIEmbeddingFunction"
        ) as mock_voyageai:
            mock_instance = MagicMock()
            mock_voyageai.return_value = mock_instance
            embedder_config = {
                "provider": "voyageai",
                "config": {"api_key": "test-key", "model": "voyage-3"},
            }
            configurator = EmbeddingConfigurator()
            embedder = configurator.configure_embedder(embedder_config)
            mock_voyageai.assert_called_once_with(
                model_name="voyage-3", api_key="test-key"
            )
            assert embedder == mock_instance

    def test_configure_voyageai_embedder_missing_api_key(self):
        """Test that the VoyageAI embedder raises an error when API key is missing."""
        with patch(
            "crewai.utilities.embedding_configurator.VoyageAIEmbeddingFunction"
        ) as mock_voyageai:
            mock_voyageai.side_effect = ValueError("API key is required for VoyageAI embeddings")
            config = {}  # Empty config without API key
            model_name = "voyage-3"
            configurator = EmbeddingConfigurator()
            with pytest.raises(ValueError, match="API key is required"):
                configurator._configure_voyageai(config, model_name)

    def test_configure_voyageai_embedder_custom_model(self):
        """Test that the VoyageAI embedder works with different model names."""
        with patch(
            "crewai.utilities.embedding_configurator.VoyageAIEmbeddingFunction"
        ) as mock_voyageai:
            mock_instance = MagicMock()
            mock_voyageai.return_value = mock_instance
            config = {"api_key": "test-key"}
            model_name = "voyage-3.5-lite"  # Using a different model
            configurator = EmbeddingConfigurator()
            embedder = configurator._configure_voyageai(config, model_name)
            mock_voyageai.assert_called_once_with(
                model_name=model_name, api_key="test-key"
            )
            assert embedder == mock_instance
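
Note: the deleted file above covered the VoyageAI path of EmbeddingConfigurator. For reference, a hedged sketch of the configuration shape those tests exercised; the api_key value is a placeholder:

from crewai.utilities.embedding_configurator import EmbeddingConfigurator

embedder = EmbeddingConfigurator().configure_embedder(
    {
        "provider": "voyageai",
        "config": {"api_key": "YOUR-VOYAGE-KEY", "model": "voyage-3"},
    }
)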

10
uv.lock generated
View File

@@ -738,7 +738,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.119.0"
version = "0.118.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
@@ -828,7 +828,7 @@ requires-dist = [
{ name = "blinker", specifier = ">=1.9.0" },
{ name = "chromadb", specifier = ">=0.5.23" },
{ name = "click", specifier = ">=8.1.7" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.44.0" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.42.2" },
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" },
@@ -879,7 +879,7 @@ dev = [
[[package]]
name = "crewai-tools"
version = "0.44.0"
version = "0.42.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "chromadb" },
@@ -894,9 +894,9 @@ dependencies = [
{ name = "pytube" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b8/1f/2977dc72628c1225bf5788ae22a65e5a53df384d19b197646d2c4760684e/crewai_tools-0.44.0.tar.gz", hash = "sha256:44e0c26079396503a326efdd9ff34bf369d410cbf95c362cc523db65b18f3c3a", size = 892004 }
sdist = { url = "https://files.pythonhosted.org/packages/17/34/9e63e2db53d8f5c30353f271a3240687a48e55204bbd176a057c0b7658c8/crewai_tools-0.42.2.tar.gz", hash = "sha256:69365ffb168cccfea970e09b308905aa5007cfec60024d731ffac1362a0153c0", size = 754967 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ba/80/b91aa837d06edbb472445ea3c92d7619518894fd3049d480e5fffbf0c21b/crewai_tools-0.44.0-py3-none-any.whl", hash = "sha256:119e2365fe66ee16e18a5e8e222994b19f76bafcc8c1bb87f61609c1e39b2463", size = 583462 },
{ url = "https://files.pythonhosted.org/packages/4e/43/0f70b95350084e5cb1e1d74e9acb9e18a89ba675b1d579c787c2662baba7/crewai_tools-0.42.2-py3-none-any.whl", hash = "sha256:13727fb68f0efefd21edeb281be3d66ff2f5a3b5029d4e6adef388b11fd5846a", size = 583933 },
]
[[package]]