Compare commits
31 Commits
lg-pytest-...lorenze/os

| SHA1 |
|---|
| b8c766c3be |
| dce74cbf25 |
| b4dfb19a3a |
| 30ef8ed70b |
| e1541b2619 |
| 7c4889f5c9 |
| c403497cf4 |
| fed397f745 |
| d55e596800 |
| f700e014c9 |
| 4e496d7a20 |
| 8663c7e1c2 |
| cb1a98cabf |
| 369e6d109c |
| 2c011631f9 |
| d3fc2b4477 |
| 516d45deaa |
| 7ad51d9d05 |
| e3887ae36e |
| e23bc2aaa7 |
| 7fc405408e |
| cac06adc6c |
| c8ec03424a |
| bfea85d22c |
| 836e9fc545 |
| c3726092fd |
| dabf02a90d |
| 2912c93d77 |
| 17474a3a0c |
| f89c2bfb7e |
| 2902201bfa |
38  .github/security.md  vendored
@@ -1,19 +1,27 @@
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
## CrewAI Security Vulnerability Reporting Policy

## Reporting a Vulnerability
Please do not report security vulnerabilities through public GitHub issues.
To report a vulnerability, please email us at security@crewai.com.
Please include the requested information listed below so that we can triage your report more quickly
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:

- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.

Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the issue.
Email all vulnerability reports directly to:
**security@crewai.com**

At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.
### Required Information
To help us quickly validate and remediate the issue, your report must include:

- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.

### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.

### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
25  .github/workflows/linter.yml  vendored
@@ -5,12 +5,29 @@ on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    env:
      TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install Requirements
      - name: Fetch Target Branch
        run: git fetch origin $TARGET_BRANCH --depth=1

      - name: Install Ruff
        run: pip install ruff

      - name: Get Changed Python Files
        id: changed-files
        run: |
          pip install ruff
          merge_base=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
          changed_files=$(git diff --name-only --diff-filter=ACMRTUB "$merge_base" | grep '\.py$' || true)
          echo "files<<EOF" >> $GITHUB_OUTPUT
          echo "$changed_files" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Run Ruff Linter
        run: ruff check
      - name: Run Ruff on Changed Files
        if: ${{ steps.changed-files.outputs.files != '' }}
        run: |
          echo "${{ steps.changed-files.outputs.files }}" | tr " " "\n" | xargs -I{} ruff check "{}"
2  .github/workflows/tests.yml  vendored
@@ -31,4 +31,4 @@ jobs:
        run: uv sync --dev --all-extras

      - name: Run tests
        run: uv run pytest tests -vv
        run: uv run pytest --block-network --timeout=60 -vv
@@ -2,8 +2,3 @@ exclude = [
    "templates",
    "__init__.py",
]

[lint]
select = [
    "I",  # isort rules
]
@@ -504,7 +504,7 @@ This example demonstrates how to:

CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.

Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.

## How CrewAI Compares
@@ -110,6 +110,8 @@ crewai reset-memories [OPTIONS]
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-kn, --knowledge`: Reset KNOWLEDGE storage
- `-akn, --agent-knowledge`: Reset AGENT KNOWLEDGE storage
- `-a, --all`: Reset ALL memories

Example:
@@ -27,7 +27,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defautls to `None`. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
@@ -246,7 +246,7 @@ print(f"Token Usage: {crew_output.token_usage}")
You can see real time log of the crew execution, by setting `output_log_file` as a `True(Boolean)` or a `file_name(str)`. Supports logging of events as both `file_name.txt` and `file_name.json`.
In case of `True(Boolean)` will save as `logs.txt`.

In case of `output_log_file` is set as `False(Booelan)` or `None`, the logs will not be populated.
In case of `output_log_file` is set as `False(Boolean)` or `None`, the logs will not be populated.

```python Code
# Save crew logs
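# A minimal, illustrative sketch of the `output_log_file` setting described in
# the table above; the agent and task values here are placeholders, not taken
# from this page.
from crewai import Agent, Crew, Task

writer = Agent(role="Writer", goal="Summarize findings", backstory="A concise technical writer")
summary = Task(description="Summarize the research", expected_output="A short summary", agent=writer)

crew = Crew(
    agents=[writer],
    tasks=[summary],
    output_log_file="crew_execution.json",  # True would write logs.txt; a .json name gives JSON logs
)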
@@ -75,11 +75,12 @@ class ExampleFlow(Flow):


flow = ExampleFlow()
flow.plot()
result = flow.kickoff()

print(f"Generated fun fact: {result}")
```

In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.

Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
@@ -146,6 +147,7 @@ class OutputExampleFlow(Flow):


flow = OutputExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()

print("---- Final Output ----")
@@ -158,9 +160,10 @@ Second method received: Output from first_method
```

</CodeGroup>

In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console.
The `kickoff()` method will return the final output, which is then printed to the console. The `plot()` method will generate the HTML file, which will help you understand the flow.

#### Accessing and Updating State
@@ -192,6 +195,7 @@ class StateExampleFlow(Flow[ExampleState]):
        return self.state.message

flow = StateExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print(f"Final Output: {final_output}")
print("Final State:")
@@ -206,6 +210,8 @@ counter=2 message='Hello from first_method - updated by second_method'

</CodeGroup>

In this example, the state is updated by both `first_method` and `second_method`.
After the Flow has run, you can access the final state to see the updates made by these methods.
@@ -249,9 +255,12 @@ class UnstructuredExampleFlow(Flow):


flow = UnstructuredExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```

**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.

**Key Points:**
@@ -302,6 +311,8 @@ flow = StructuredExampleFlow()
flow.kickoff()
```

**Key Points:**

- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
@@ -436,6 +447,7 @@ class OrExampleFlow(Flow):


flow = OrExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
@@ -446,6 +458,8 @@ Logger: Hello from the second method

</CodeGroup>

When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
@@ -474,6 +488,7 @@ class AndExampleFlow(Flow):
        print(self.state)

flow = AndExampleFlow()
flow.plot()
flow.kickoff()
```
@@ -484,6 +499,8 @@ flow.kickoff()

</CodeGroup>

When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
@@ -527,6 +544,7 @@ class RouterFlow(Flow[ExampleState]):


flow = RouterFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```
@@ -538,6 +556,8 @@ Fourth method running

</CodeGroup>

In the above example, the `start_method` generates a random boolean value and sets it in the state.
The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
@@ -641,6 +661,7 @@ class MarketResearchFlow(Flow[MarketResearchState]):
# Usage example
async def run_flow():
    flow = MarketResearchFlow()
    flow.plot("MarketResearchFlowPlot")
    result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
    return result
@@ -650,6 +671,8 @@ if __name__ == "__main__":
    asyncio.run(run_flow())
```

This example demonstrates several key features of using Agents in flows:

1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
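As a rough illustration of the structured-output idea above, a Pydantic model along these lines could back the flow; the field names here are assumptions, not the exact `MarketAnalysis` schema used in the example.

```python Code
from pydantic import BaseModel, Field

# Illustrative only: the field names are assumptions, not the doc's exact schema.
class MarketAnalysis(BaseModel):
    key_trends: list[str] = Field(description="Major trends observed in the market")
    market_size: str = Field(description="Estimated market size")
    competitors: list[str] = Field(description="Main competitors in the space")
```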
@@ -746,13 +769,16 @@ def kickoff():

def plot():
    poem_flow = PoemFlow()
    poem_flow.plot()
    poem_flow.plot("PoemFlowPlot")

if __name__ == "__main__":
    kickoff()
    plot()
```

In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.
In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method. The PoemFlowPlot will be generated by `plot()` method.

### Running the Flow
@@ -397,6 +397,53 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
John is 30 years old and lives in San Francisco.
```
</CodeGroup>

## Query Rewriting

CrewAI implements an intelligent query rewriting mechanism to optimize knowledge retrieval. When an agent needs to search through knowledge sources, the raw task prompt is automatically transformed into a more effective search query.

### How Query Rewriting Works

1. When an agent executes a task with knowledge sources available, the `_get_knowledge_search_query` method is triggered
2. The agent's LLM is used to transform the original task prompt into an optimized search query
3. This optimized query is then used to retrieve relevant information from knowledge sources

### Benefits of Query Rewriting

<CardGroup cols={2}>
<Card title="Improved Retrieval Accuracy" icon="bullseye-arrow">
By focusing on key concepts and removing irrelevant content, query rewriting helps retrieve more relevant information.
</Card>
<Card title="Context Awareness" icon="brain">
The rewritten queries are designed to be more specific and context-aware for vector database retrieval.
</Card>
</CardGroup>

### Implementation Details

Query rewriting happens transparently using a system prompt that instructs the LLM to:

- Focus on key words of the intended task
- Make the query more specific and context-aware
- Remove irrelevant content like output format instructions
- Generate only the rewritten query without preamble or postamble

<Tip>
This mechanism is fully automatic and requires no configuration from users. The agent's LLM is used to perform the query rewriting, so using a more capable LLM can improve the quality of rewritten queries.
</Tip>

### Example

```python
# Original task prompt
task_prompt = "Answer the following questions about the user's favorite movies: What movie did John watch last week? Format your answer in JSON."

# Behind the scenes, this might be rewritten as:
rewritten_query = "What movies did John watch last week?"
```

The rewritten query is more focused on the core information need and removes irrelevant instructions about output formatting.

## Clearing Knowledge

If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.
@@ -450,6 +497,13 @@ crew = Crew(
result = crew.kickoff(
    inputs={"question": "What is the storage capacity of the XPS 13?"}
)

# Resetting the agent specific knowledge via crew object
crew.reset_memories(command_type = 'agent_knowledge')

# Resetting the agent specific knowledge via CLI
crewai reset-memories --agent-knowledge
crewai reset-memories -akn
```

<Info>
@@ -653,4 +707,11 @@ recent_news = SpaceNewsKnowledgeSource(
- Configure appropriate embedding models
- Consider using local embedding providers for faster processing
</Accordion>

<Accordion title="One Time Knowledge">
- With the typical file structure provided by CrewAI, knowledge sources are embedded every time the kickoff is triggered.
- If the knowledge sources are large, this leads to inefficiency and increased latency, as the same data is embedded each time.
- To resolve this, directly initialize the knowledge parameter instead of the knowledge_sources parameter.
- Link to the issue to get complete idea [Github Issue](https://github.com/crewAIInc/crewAI/issues/2755)
</Accordion>
</AccordionGroup>
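Building on the "One Time Knowledge" note above, the following is a minimal sketch of that workaround. It assumes the `Knowledge` class accepts `collection_name` and `sources`, and that an agent accepts a pre-built object via a `knowledge` parameter, as suggested in the linked issue; check the issue and the API reference before relying on it.

```python Code
from crewai import Agent
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Assumption: building Knowledge once (with a fixed collection_name) lets the
# stored embeddings be reused across kickoffs instead of re-embedding each time.
source = StringKnowledgeSource(content="Users like John live in San Francisco.")
knowledge = Knowledge(collection_name="user_facts", sources=[source])

agent = Agent(
    role="User Assistant",
    goal="Answer questions about users",
    backstory="Knows the stored user facts",
    knowledge=knowledge,  # instead of knowledge_sources=[source]
)
```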
@@ -27,23 +27,19 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
</Card>
</CardGroup>

## Setting Up Your LLM
## Setting up your LLM

There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
There are different places in CrewAI code where you can specify the model to use. Once you specify the model you are using, you will need to provide the configuration (like an API key) for each of the model providers you use. See the [provider configuration examples](#provider-configuration-examples) section for your provider.

<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set these variables in your environment:
The simplest way to get started. Set the model in your environment directly, through an `.env` file or in your app code. If you used `crewai create` to bootstrap your project, it will be set already.

```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>
```bash .env
MODEL=model-id # e.g. gpt-4o, gemini-2.0-flash, claude-3-sonnet-...

# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini # Default if not set

# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
# Be sure to set your API keys here too. See the Provider
# section below.
```

<Warning>
@@ -53,13 +49,13 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:

```yaml
```yaml agents.yaml {6}
researcher:
  role: Research Specialist
  goal: Conduct comprehensive research and analysis
  backstory: A dedicated research professional with years of experience
  verbose: true
  llm: openai/gpt-4o-mini # your model here
  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
  # (see provider configuration examples below for more)
```
@@ -74,23 +70,23 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:

```python
```python {4,8}
from crewai import LLM

# Basic configuration
llm = LLM(model="gpt-4")
llm = LLM(model="model-id-here")  # gpt-4o, gemini-2.0-flash, anthropic/claude...

# Advanced configuration with detailed parameters
llm = LLM(
    model="gpt-4o-mini",
    model="model-id-here",  # gpt-4o, gemini-2.0-flash, anthropic/claude...
    temperature=0.7,        # Higher for more creative outputs
    timeout=120,            # Seconds to wait for response
    max_tokens=4000,        # Maximum length of response
    top_p=0.9,              # Nucleus sampling parameter
    frequency_penalty=0.1,  # Reduce repetition
    presence_penalty=0.1,   # Encourage topic diversity
    timeout=120,            # Seconds to wait for response
    max_tokens=4000,        # Maximum length of response
    top_p=0.9,              # Nucleus sampling parameter
    frequency_penalty=0.1 , # Reduce repetition
    presence_penalty=0.1,   # Encourage topic diversity
    response_format={"type": "json"},  # For structured outputs
    seed=42                 # For reproducible results
    seed=42                 # For reproducible results
)
```
@@ -110,7 +106,6 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi

## Provider Configuration Examples


CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
@@ -174,19 +169,55 @@ In this section, you'll find detailed examples that help you select, configure,
```
</Accordion>

<Accordion title="Google">
Set the following environment variables in your `.env` file:
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).

```toml Code
# Option 1: Gemini accessed with an API key.
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>

# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```

Get credentials from your Google Cloud Console and save it to a JSON file with the following code:
Example usage in your CrewAI project:
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-2.0-flash",
    temperature=0.7,
)
```

### Gemini models

Google offers a range of powerful models optimized for different use cases.

| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |

The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).

### Gemma

The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.

| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |

</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save it to a JSON file, then load it with the following code:
```python Code
import json
@@ -210,14 +241,18 @@ In this section, you'll find detailed examples that help you select, configure,
    vertex_credentials=vertex_credentials_json
)
```

Google offers a range of powerful models optimized for different use cases:

| Model | Context Window | Best For |
|-----------------------|----------------|------------------------------------------------------------------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>

<Accordion title="Azure">
@@ -383,7 +418,7 @@ In this section, you'll find detailed examples that help you select, configure,
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecure to deliver compute efficient content generation |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model exceling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model exceling in high-quality reasoning from images. |
@@ -407,19 +442,19 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>

<Accordion title="Local NVIDIA NIM Deployed using WSL2">

NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.

NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.

Here is a step-by-step guide to setting up a local NVIDIA NIM model:

1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)

2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)

3. Configure your crewai local models:

```python Code
from crewai.llm import LLM
@@ -441,7 +476,7 @@ In this section, you'll find detailed examples that help you select, configure,
            config=self.agents_config['researcher'], # type: ignore[index]
            llm=local_nvidia_nim_llm
        )

        # ...
```
</Accordion>
@@ -637,19 +672,19 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece

When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
</Tab>

<Tab title="Event Handling">
CrewAI emits events for each chunk received during streaming:

```python
from crewai import LLM
from crewai.utilities.events import EventHandler, LLMStreamChunkEvent


class MyEventHandler(EventHandler):
    def on_llm_stream_chunk(self, event: LLMStreamChunkEvent):
        # Process each chunk as it arrives
        print(f"Received chunk: {event.chunk}")


# Register the event handler
from crewai.utilities.events import crewai_event_bus
crewai_event_bus.register_handler(MyEventHandler())
@@ -785,7 +820,7 @@ Learn how to get the most out of your LLM configuration:
<Tip>
Use larger context models for extensive tasks
</Tip>

```python
# Large context model
llm = LLM(model="openai/gpt-4o")  # 128K tokens
@@ -679,6 +679,7 @@ crewai reset-memories [OPTIONS]
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-kn`, `--knowledge` | Reset KNOWLEDEGE storage | Flag (boolean) | False |
| `-akn`, `--agent-knowledge` | Reset AGENT KNOWLEDGE storage | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |

Note: To use the cli command you need to have your crew in a file called crew.py in the same directory.
@@ -716,9 +717,11 @@ my_crew.reset_memories(command_type = 'all') # Resets all the memory
| `entities` | Reset ENTITIES memory. |
| `kickoff_outputs` | Reset LATEST KICKOFF TASK OUTPUTS. |
| `knowledge` | Reset KNOWLEDGE memory. |
| `agent_knowledge` | Reset AGENT KNOWLEDGE memory. |
| `all` | Reset ALL memories. |

## Benefits of Using CrewAI's Memory System

- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
@@ -35,7 +35,8 @@ Let's get started building your first crew!
Before starting, make sure you have:

1. Installed CrewAI following the [installation guide](/installation)
2. Set up your OpenAI API key in your environment variables
2. Set up your LLM API key in your environment, following the [LLM setup
   guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python

## Step 1: Create a New CrewAI Project
@@ -92,7 +93,8 @@ For our research crew, we'll create two agents:
1. A **researcher** who excels at finding and organizing information
2. An **analyst** who can interpret research findings and create insightful reports

Let's modify the `agents.yaml` file to define these specialized agents:
Let's modify the `agents.yaml` file to define these specialized agents. Be sure
to set `llm` to the provider you are using.

```yaml
# src/research_crew/config/agents.yaml
@@ -107,7 +109,7 @@ researcher:
    finding relevant information from various sources. You excel at
    organizing information in a clear and structured manner, making
    complex topics accessible to others.
  llm: openai/gpt-4o-mini
  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...

analyst:
  role: >
@@ -120,7 +122,7 @@ analyst:
    and technical writing. You have a talent for identifying patterns
    and extracting meaningful insights from research data, then
    communicating those insights effectively through well-crafted reports.
  llm: openai/gpt-4o-mini
  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
```

Notice how each agent has a distinct role, goal, and backstory. These elements aren't just descriptive - they actively shape how the agent approaches its tasks. By crafting these carefully, you can create agents with specialized skills and perspectives that complement each other.
@@ -282,12 +284,12 @@ This script prepares the environment, specifies our research topic, and kicks of

Create a `.env` file in your project root with your API keys:

```
OPENAI_API_KEY=your_openai_api_key
```sh
SERPER_API_KEY=your_serper_api_key
# Add your provider's API key here too.
```

You can get a Serper API key from [Serper.dev](https://serper.dev/).
See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).

## Step 8: Install Dependencies
@@ -45,7 +45,8 @@ Let's dive in and build your first flow!
Before starting, make sure you have:

1. Installed CrewAI following the [installation guide](/installation)
2. Set up your OpenAI API key in your environment variables
2. Set up your LLM API key in your environment, following the [LLM setup
   guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python

## Step 1: Create a New CrewAI Flow Project
@@ -107,6 +108,8 @@ Now, let's modify the generated files for the content writer crew. We'll set up

1. First, update the agents configuration file to define our content creation team:

Remember to set `llm` to the provider you are using.

```yaml
# src/guide_creator_flow/crews/content_crew/config/agents.yaml
content_writer:
@@ -119,7 +122,7 @@ content_writer:
    You are a talented educational writer with expertise in creating clear, engaging
    content. You have a gift for explaining complex concepts in accessible language
    and organizing information in a way that helps readers build their understanding.
  llm: openai/gpt-4o-mini
  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...

content_reviewer:
  role: >
@@ -132,7 +135,7 @@ content_reviewer:
    content. You have an eye for detail, clarity, and coherence. You excel at
    improving content while maintaining the original author's voice and ensuring
    consistent quality across multiple sections.
  llm: openai/gpt-4o-mini
  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
```

These agent definitions establish the specialized roles and perspectives that will shape how our AI agents approach content creation. Notice how each agent has a distinct purpose and expertise.
@@ -441,10 +444,15 @@ This is the power of flows - combining different types of processing (user inter

## Step 6: Set Up Your Environment Variables

Create a `.env` file in your project root with your API keys:
Create a `.env` file in your project root with your API keys. See the [LLM setup
guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.

```
```sh .env
OPENAI_API_KEY=your_openai_api_key
# or
GEMINI_API_KEY=your_gemini_api_key
# or
ANTHROPIC_API_KEY=your_anthropic_api_key
```

## Step 7: Install Dependencies
@@ -547,7 +555,10 @@ Let's break down the key components of flows to help you understand how to build
Flows allow you to make direct calls to language models when you need simple, structured responses:

```python
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
llm = LLM(
    model="model-id-here",  # gpt-4o, gemini-2.0-flash, anthropic/claude...
    response_format=GuideOutline
)
response = llm.call(messages=messages)
```
@@ -68,7 +68,13 @@ We'll create a CrewAI application where two agents collaborate to research and w
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool
from openinference.instrumentation.crewai import CrewAIInstrumentor
from phoenix.otel import register

# setup monitoring for your crew
tracer_provider = register(
    endpoint="http://localhost:6006/v1/traces")
CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
search_tool = SerperDevTool()

# Define your agents with roles and goals
BIN  docs/images/crewai-flow-1.png  (new file, 44 KiB)
BIN  docs/images/crewai-flow-2.png  (new file, 43 KiB)
BIN  docs/images/crewai-flow-3.png  (new file, 45 KiB)
BIN  docs/images/crewai-flow-4.png  (new file, 57 KiB)
BIN  docs/images/crewai-flow-5.png  (new file, 48 KiB)
BIN  docs/images/crewai-flow-6.png  (new file, 57 KiB)
BIN  docs/images/crewai-flow-7.png  (new file, 60 KiB)
BIN  docs/images/crewai-flow-8.png  (new file, 48 KiB)
@@ -71,6 +71,10 @@ If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on
```
</Warning>

<Warning>
If you encounter the `chroma-hnswlib==0.7.6` build error (`fatal error C1083: Cannot open include file: 'float.h'`) on Windows, install (Visual Studio Build Tools)[https://visualstudio.microsoft.com/downloads/] with *Desktop development with C++*.
</Warning>

- To verify that `crewai` is installed, run:
```shell
uv tool list
@@ -180,8 +180,9 @@ Follow the steps below to get Crewing! 🚣♂️
</Step>
<Step title="Set your environment variables">
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
- The configuration for your choice of model, such as an API key. See the
  [LLM setup guide](/concepts/llms#setting-up-your-llm) to learn how to configure models from any provider.
</Step>
<Step title="Lock and install the dependencies">
- Lock the dependencies and install them by using the CLI command:
@@ -317,7 +318,7 @@ email_summarizer:
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
  llm: openai/gpt-4o
  llm: provider/model-id # Add your choice of model here
```

<Tip>
@@ -22,7 +22,7 @@ streamlining the process of finding specific information within large document c
Install the crewai_tools package by running the following command in your terminal:

```shell
pip install 'crewai[tools]'
uv pip install docx2txt 'crewai[tools]'
```

## Example
@@ -76,4 +76,4 @@ tool = DOCXSearchTool(
    ),
)
)
```
```
@@ -8,10 +8,10 @@ icon: language

## Description

This tool is used to convert natural language to SQL queries. When passsed to the agent it will generate queries and then use them to interact with the database.
This tool is used to convert natural language to SQL queries. When passed to the agent it will generate queries and then use them to interact with the database.

This enables multiple workflows like having an Agent to access the database fetch information based on the goal and then use the information to generate a response, report or any other output.
Along with that proivdes the ability for the Agent to update the database based on its goal.
Along with that provides the ability for the Agent to update the database based on its goal.

**Attention**: Make sure that the Agent has access to a Read-Replica or that is okay for the Agent to run insert/update queries on the database.
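For orientation, a minimal usage sketch of the tool follows; the connection string is a placeholder and the agent fields are illustrative only.

```python Code
from crewai import Agent
from crewai_tools import NL2SQLTool

# Placeholder URI; point this at a read replica if the agent should not write.
nl2sql = NL2SQLTool(db_uri="postgresql://user:password@localhost:5432/mydb")

analyst = Agent(
    role="Data Analyst",
    goal="Answer questions by querying the database",
    backstory="An analyst who translates questions into SQL",
    tools=[nl2sql],
)
```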
@@ -81,4 +81,4 @@ The Tool provides endless possibilities on the logic of the Agent and how it can

```md
DB -> Agent -> ... -> Agent -> DB
```
```
@@ -143,12 +143,30 @@ config = {
        "config": {
            "model": "text-embedding-ada-002"
        }
    },
    "vectordb": {
        "provider": "elasticsearch",
        "config": {
            "collection_name": "my-collection",
            "cloud_id": "deployment-name:xxxx",
            "api_key": "your-key",
            "verify_certs": False
        }
    },
    "chunker": {
        "chunk_size": 400,
        "chunk_overlap": 100,
        "length_function": "len",
        "min_chunk_size": 0
    }
}

rag_tool = RagTool(config=config, summarize=True)
```

## Conclusion
The internal RAG tool utilizes the Embedchain adapter, allowing you to pass any configuration options that are supported by Embedchain.
You can refer to the [Embedchain documentation](https://docs.embedchain.ai/components/introduction) for details.
Make sure to review the configuration options available in the .yaml file.

## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.
@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.118.0"
version = "0.120.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -11,7 +11,7 @@ dependencies = [
    # Core Dependencies
    "pydantic>=2.4.2",
    "openai>=1.13.3",
    "litellm==1.67.1",
    "litellm==1.68.0",
    "instructor>=1.3.3",
    # Text Processing
    "pdfplumber>=0.11.4",
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = ["crewai-tools~=0.42.2"]
tools = ["crewai-tools~=0.45.0"]
embeddings = [
    "tiktoken~=0.7.0"
]
@@ -85,6 +85,8 @@ dev-dependencies = [
    "pytest-asyncio>=0.23.7",
    "pytest-subprocess>=1.5.2",
    "pytest-recording>=0.13.2",
    "pytest-randomly>=3.16.0",
    "pytest-timeout>=2.3.1",
]

[project.scripts]
@@ -17,7 +17,7 @@ warnings.filterwarnings(
    category=UserWarning,
    module="pydantic.main",
)
__version__ = "0.118.0"
__version__ = "0.120.0"
__all__ = [
    "Agent",
    "Crew",
@@ -20,6 +20,7 @@ from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.agent_utils import (
    get_tool_names,
    load_agent_from_repository,
    parse_tools,
    render_text_description_and_args,
)
@@ -31,6 +32,14 @@ from crewai.utilities.events.agent_events import (
    AgentExecutionStartedEvent,
)
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.events.knowledge_events import (
    KnowledgeQueryCompletedEvent,
    KnowledgeQueryFailedEvent,
    KnowledgeQueryStartedEvent,
    KnowledgeRetrievalCompletedEvent,
    KnowledgeRetrievalStartedEvent,
    KnowledgeSearchQueryFailedEvent,
)
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
@@ -122,6 +131,20 @@ class Agent(BaseAgent):
        default=None,
        description="Knowledge context for the crew.",
    )
    knowledge_search_query: Optional[str] = Field(
        default=None,
        description="Knowledge search query for the agent dynamically generated by the agent.",
    )
    from_repository: Optional[str] = Field(
        default=None,
        description="The Agent's role to be used from your repository.",
    )

    @model_validator(mode="before")
    def validate_from_repository(cls, v):
        if v is not None and (from_repository := v.get("from_repository")):
            return load_agent_from_repository(from_repository) | v
        return v

    @model_validator(mode="after")
    def post_init_setup(self):
@@ -185,7 +208,7 @@ class Agent(BaseAgent):
        self,
        task: Task,
        context: Optional[str] = None,
        tools: Optional[List[BaseTool]] = None
        tools: Optional[List[BaseTool]] = None,
    ) -> str:
        """Execute a task with the agent.
@@ -245,27 +268,65 @@ class Agent(BaseAgent):
knowledge_config = (
self.knowledge_config.model_dump() if self.knowledge_config else {}
)
if self.knowledge:
agent_knowledge_snippets = self.knowledge.query(
[task.prompt()], **knowledge_config
)
if agent_knowledge_snippets:
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context

if self.crew:
knowledge_snippets = self.crew.query_knowledge(
[task.prompt()], **knowledge_config
if self.knowledge:
crewai_event_bus.emit(
self,
event=KnowledgeRetrievalStartedEvent(
agent=self,
),
)
if knowledge_snippets:
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
try:
self.knowledge_search_query = self._get_knowledge_search_query(
task_prompt
)
if self.knowledge_search_query:
agent_knowledge_snippets = self.knowledge.query(
[self.knowledge_search_query], **knowledge_config
)
if agent_knowledge_snippets:
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context
if self.crew:
knowledge_snippets = self.crew.query_knowledge(
[self.knowledge_search_query], **knowledge_config
)
if knowledge_snippets:
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context

crewai_event_bus.emit(
self,
event=KnowledgeRetrievalCompletedEvent(
query=self.knowledge_search_query,
agent=self,
retrieved_knowledge=(
(self.agent_knowledge_context or "")
+ (
"\n"
if self.agent_knowledge_context
and self.crew_knowledge_context
else ""
)
+ (self.crew_knowledge_context or "")
),
),
)
except Exception as e:
crewai_event_bus.emit(
self,
event=KnowledgeSearchQueryFailedEvent(
query=self.knowledge_search_query or "",
agent=self,
error=str(e),
),
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context

tools = tools or self.tools or []
self.create_agent_executor(tools=tools, task=task)
@@ -288,12 +349,19 @@ class Agent(BaseAgent):

# Determine execution method based on timeout setting
if self.max_execution_time is not None:
if not isinstance(self.max_execution_time, int) or self.max_execution_time <= 0:
raise ValueError("Max Execution time must be a positive integer greater than zero")
result = self._execute_with_timeout(task_prompt, task, self.max_execution_time)
if (
not isinstance(self.max_execution_time, int)
or self.max_execution_time <= 0
):
raise ValueError(
"Max Execution time must be a positive integer greater than zero"
)
result = self._execute_with_timeout(
task_prompt, task, self.max_execution_time
)
else:
result = self._execute_without_timeout(task_prompt, task)

except TimeoutError as e:
# Propagate TimeoutError without retry
crewai_event_bus.emit(
@@ -345,54 +413,46 @@ class Agent(BaseAgent):
)
return result

def _execute_with_timeout(
self,
task_prompt: str,
task: Task,
timeout: int
) -> str:
def _execute_with_timeout(self, task_prompt: str, task: Task, timeout: int) -> str:
"""Execute a task with a timeout.

Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
timeout: Maximum execution time in seconds.

Returns:
The output of the agent.

Raises:
TimeoutError: If execution exceeds the timeout.
RuntimeError: If execution fails for other reasons.
"""
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
self._execute_without_timeout,
task_prompt=task_prompt,
task=task
self._execute_without_timeout, task_prompt=task_prompt, task=task
)

try:
return future.result(timeout=timeout)
except concurrent.futures.TimeoutError:
future.cancel()
raise TimeoutError(f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task.")
raise TimeoutError(
f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task."
)
except Exception as e:
future.cancel()
raise RuntimeError(f"Task execution failed: {str(e)}")

def _execute_without_timeout(
self,
task_prompt: str,
task: Task
) -> str:
def _execute_without_timeout(self, task_prompt: str, task: Task) -> str:
"""Execute a task without a timeout.

Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.

Returns:
The output of the agent.
"""
@@ -560,6 +620,61 @@ class Agent(BaseAgent):
def set_fingerprint(self, fingerprint: Fingerprint):
self.security_config.fingerprint = fingerprint

def _get_knowledge_search_query(self, task_prompt: str) -> str | None:
"""Generate a search query for the knowledge base based on the task description."""
crewai_event_bus.emit(
self,
event=KnowledgeQueryStartedEvent(
task_prompt=task_prompt,
agent=self,
),
)
query = self.i18n.slice("knowledge_search_query").format(
task_prompt=task_prompt
)
rewriter_prompt = self.i18n.slice("knowledge_search_query_system_prompt")
if not isinstance(self.llm, BaseLLM):
self._logger.log(
"warning",
f"Knowledge search query failed: LLM for agent '{self.role}' is not an instance of BaseLLM",
)
crewai_event_bus.emit(
self,
event=KnowledgeQueryFailedEvent(
agent=self,
error="LLM is not compatible with knowledge search queries",
),
)
return None

try:
rewritten_query = self.llm.call(
[
{
"role": "system",
"content": rewriter_prompt,
},
{"role": "user", "content": query},
]
)
crewai_event_bus.emit(
self,
event=KnowledgeQueryCompletedEvent(
query=query,
agent=self,
),
)
return rewritten_query
except Exception as e:
crewai_event_bus.emit(
self,
event=KnowledgeQueryFailedEvent(
agent=self,
error=str(e),
),
)
return None

def kickoff(
self,
messages: Union[str, List[Dict[str, str]]],
@@ -5,5 +5,5 @@ def get_auth_token() -> str:
    """Get the authentication token."""
    access_token = TokenManager().get_token()
    if not access_token:
        raise Exception()
        raise Exception("No token found, make sure you are logged in")
    return access_token
@@ -1,6 +1,5 @@
|
||||
import os
|
||||
from importlib.metadata import version as get_version
|
||||
from typing import Optional, Tuple
|
||||
from typing import Optional
|
||||
|
||||
import click
|
||||
|
||||
@@ -138,12 +137,8 @@ def log_tasks_outputs() -> None:
|
||||
@click.option("-s", "--short", is_flag=True, help="Reset SHORT TERM memory")
|
||||
@click.option("-e", "--entities", is_flag=True, help="Reset ENTITIES memory")
|
||||
@click.option("-kn", "--knowledge", is_flag=True, help="Reset KNOWLEDGE storage")
|
||||
@click.option(
|
||||
"-k",
|
||||
"--kickoff-outputs",
|
||||
is_flag=True,
|
||||
help="Reset LATEST KICKOFF TASK OUTPUTS",
|
||||
)
|
||||
@click.option("-akn", "--agent-knowledge", is_flag=True, help="Reset AGENT KNOWLEDGE storage")
|
||||
@click.option("-k","--kickoff-outputs",is_flag=True,help="Reset LATEST KICKOFF TASK OUTPUTS")
|
||||
@click.option("-a", "--all", is_flag=True, help="Reset ALL memories")
|
||||
def reset_memories(
|
||||
long: bool,
|
||||
@@ -151,18 +146,20 @@ def reset_memories(
|
||||
entities: bool,
|
||||
knowledge: bool,
|
||||
kickoff_outputs: bool,
|
||||
agent_knowledge: bool,
|
||||
all: bool,
|
||||
) -> None:
|
||||
"""
|
||||
Reset the crew memories (long, short, entity, latest_crew_kickoff_ouputs). This will delete all the data saved.
|
||||
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs, knowledge, agent_knowledge). This will delete all the data saved.
|
||||
"""
|
||||
try:
|
||||
if not all and not (long or short or entities or knowledge or kickoff_outputs):
|
||||
memory_types = [long, short, entities, knowledge, agent_knowledge, kickoff_outputs, all]
|
||||
if not any(memory_types):
|
||||
click.echo(
|
||||
"Please specify at least one memory type to reset using the appropriate flags."
|
||||
)
|
||||
return
|
||||
reset_memories_command(long, short, entities, knowledge, kickoff_outputs, all)
|
||||
reset_memories_command(long, short, entities, knowledge, agent_knowledge, kickoff_outputs, all)
|
||||
except Exception as e:
|
||||
click.echo(f"An error occurred while resetting memories: {e}", err=True)
|
||||
|
||||
|
||||
@@ -13,7 +13,7 @@ ENV_VARS = {
|
||||
],
|
||||
"gemini": [
|
||||
{
|
||||
"prompt": "Enter your GEMINI API key (press Enter to skip)",
|
||||
"prompt": "Enter your GEMINI API key from https://ai.dev/apikey (press Enter to skip)",
|
||||
"key_name": "GEMINI_API_KEY",
|
||||
}
|
||||
],
|
||||
|
||||
@@ -4,7 +4,7 @@ import click
|
||||
|
||||
|
||||
# Be mindful about changing this.
|
||||
# on some enviorments we don't use this command but instead uv sync directly
|
||||
# on some environments we don't use this command but instead uv sync directly
|
||||
# so if you expect this to support more things you will need to replicate it there
|
||||
# ask @joaomdmoura if you are unsure
|
||||
def install_crew(proxy_options: list[str]) -> None:
|
||||
|
||||
@@ -14,6 +14,7 @@ class PlusAPI:
|
||||
|
||||
TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
|
||||
CREWS_RESOURCE = "/crewai_plus/api/v1/crews"
|
||||
AGENTS_RESOURCE = "/crewai_plus/api/v1/agents"
|
||||
|
||||
def __init__(self, api_key: str) -> None:
|
||||
self.api_key = api_key
|
||||
@@ -37,6 +38,9 @@ class PlusAPI:
|
||||
def get_tool(self, handle: str):
|
||||
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
|
||||
|
||||
def get_agent(self, handle: str):
|
||||
return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
|
||||
|
||||
def publish_tool(
|
||||
self,
|
||||
handle: str,
|
||||
|
||||
@@ -2,7 +2,7 @@ import subprocess
|
||||
|
||||
import click
|
||||
|
||||
from crewai.cli.utils import get_crew
|
||||
from crewai.cli.utils import get_crews
|
||||
|
||||
|
||||
def reset_memories_command(
|
||||
@@ -10,6 +10,7 @@ def reset_memories_command(
|
||||
short,
|
||||
entity,
|
||||
knowledge,
|
||||
agent_knowledge,
|
||||
kickoff_outputs,
|
||||
all,
|
||||
) -> None:
|
||||
@@ -23,38 +24,56 @@ def reset_memories_command(
|
||||
kickoff_outputs (bool): Whether to reset the latest kickoff task outputs.
|
||||
all (bool): Whether to reset all memories.
|
||||
knowledge (bool): Whether to reset the knowledge.
|
||||
agent_knowledge (bool): Whether to reset the agents' knowledge.
|
||||
"""
|
||||
|
||||
try:
|
||||
crew = get_crew()
|
||||
if not crew:
|
||||
raise ValueError("No crew found.")
|
||||
if all:
|
||||
crew.reset_memories(command_type="all")
|
||||
click.echo("All memories have been reset.")
|
||||
return
|
||||
|
||||
if not any([long, short, entity, kickoff_outputs, knowledge]):
|
||||
if not any([long, short, entity, kickoff_outputs, knowledge, agent_knowledge, all]):
|
||||
click.echo(
|
||||
"No memory type specified. Please specify at least one type to reset."
|
||||
)
|
||||
return
|
||||
|
||||
if long:
|
||||
crew.reset_memories(command_type="long")
|
||||
click.echo("Long term memory has been reset.")
|
||||
if short:
|
||||
crew.reset_memories(command_type="short")
|
||||
click.echo("Short term memory has been reset.")
|
||||
if entity:
|
||||
crew.reset_memories(command_type="entity")
|
||||
click.echo("Entity memory has been reset.")
|
||||
if kickoff_outputs:
|
||||
crew.reset_memories(command_type="kickoff_outputs")
|
||||
click.echo("Latest Kickoff outputs stored has been reset.")
|
||||
if knowledge:
|
||||
crew.reset_memories(command_type="knowledge")
|
||||
click.echo("Knowledge has been reset.")
|
||||
crews = get_crews()
|
||||
if not crews:
|
||||
raise ValueError("No crew found.")
|
||||
for crew in crews:
|
||||
if all:
|
||||
crew.reset_memories(command_type="all")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Reset memories command has been completed."
|
||||
)
|
||||
continue
|
||||
if long:
|
||||
crew.reset_memories(command_type="long")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Long term memory has been reset."
|
||||
)
|
||||
if short:
|
||||
crew.reset_memories(command_type="short")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Short term memory has been reset."
|
||||
)
|
||||
if entity:
|
||||
crew.reset_memories(command_type="entity")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Entity memory has been reset."
|
||||
)
|
||||
if kickoff_outputs:
|
||||
crew.reset_memories(command_type="kickoff_outputs")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Latest Kickoff outputs stored has been reset."
|
||||
)
|
||||
if knowledge:
|
||||
crew.reset_memories(command_type="knowledge")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Knowledge has been reset."
|
||||
)
|
||||
if agent_knowledge:
|
||||
crew.reset_memories(command_type="agent_knowledge")
|
||||
click.echo(
|
||||
f"[Crew ({crew.name if crew.name else crew.id})] Agents knowledge has been reset."
|
||||
)
|
||||
|
||||
except subprocess.CalledProcessError as e:
|
||||
click.echo(f"An error occurred while resetting the memories: {e}", err=True)
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
|
||||
authors = [{ name = "Your Name", email = "you@example.com" }]
|
||||
requires-python = ">=3.10,<3.13"
|
||||
dependencies = [
|
||||
"crewai[tools]>=0.118.0,<1.0.0"
|
||||
"crewai[tools]>=0.120.0,<1.0.0"
|
||||
]
|
||||
|
||||
[project.scripts]
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
|
||||
authors = [{ name = "Your Name", email = "you@example.com" }]
|
||||
requires-python = ">=3.10,<3.13"
|
||||
dependencies = [
|
||||
"crewai[tools]>=0.118.0,<1.0.0",
|
||||
"crewai[tools]>=0.120.0,<1.0.0",
|
||||
]
|
||||
|
||||
[project.scripts]
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
|
||||
readme = "README.md"
|
||||
requires-python = ">=3.10,<3.13"
|
||||
dependencies = [
|
||||
"crewai[tools]>=0.118.0"
|
||||
"crewai[tools]>=0.120.0"
|
||||
]
|
||||
|
||||
[tool.crewai]
|
||||
|
||||
@@ -2,7 +2,8 @@ import os
|
||||
import shutil
|
||||
import sys
|
||||
from functools import reduce
|
||||
from typing import Any, Dict, List
|
||||
from inspect import isfunction, ismethod
|
||||
from typing import Any, Dict, List, get_type_hints
|
||||
|
||||
import click
|
||||
import tomli
|
||||
@@ -10,6 +11,7 @@ from rich.console import Console
|
||||
|
||||
from crewai.cli.constants import ENV_VARS
|
||||
from crewai.crew import Crew
|
||||
from crewai.flow import Flow
|
||||
|
||||
if sys.version_info >= (3, 11):
|
||||
import tomllib
|
||||
@@ -250,11 +252,11 @@ def write_env_file(folder_path, env_vars):
|
||||
file.write(f"{key}={value}\n")
|
||||
|
||||
|
||||
def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
|
||||
"""Get the crew instance from the crew.py file."""
|
||||
def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
|
||||
"""Get the crew instances from the a file."""
|
||||
crew_instances = []
|
||||
try:
|
||||
import importlib.util
|
||||
import os
|
||||
|
||||
for root, _, files in os.walk("."):
|
||||
if crew_path in files:
|
||||
@@ -271,12 +273,10 @@ def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
|
||||
spec.loader.exec_module(module)
|
||||
|
||||
for attr_name in dir(module):
|
||||
attr = getattr(module, attr_name)
|
||||
try:
|
||||
if callable(attr) and hasattr(attr, "crew"):
|
||||
crew_instance = attr().crew()
|
||||
return crew_instance
|
||||
module_attr = getattr(module, attr_name)
|
||||
|
||||
try:
|
||||
crew_instances.extend(fetch_crews(module_attr))
|
||||
except Exception as e:
|
||||
print(f"Error processing attribute {attr_name}: {e}")
|
||||
continue
|
||||
@@ -286,7 +286,6 @@ def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
|
||||
import traceback
|
||||
|
||||
print(f"Traceback: {traceback.format_exc()}")
|
||||
|
||||
except (ImportError, AttributeError) as e:
|
||||
if require:
|
||||
console.print(
|
||||
@@ -300,7 +299,6 @@ def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
|
||||
if require:
|
||||
console.print("No valid Crew instance found in crew.py", style="bold red")
|
||||
raise SystemExit
|
||||
return None
|
||||
|
||||
except Exception as e:
|
||||
if require:
|
||||
@@ -308,4 +306,36 @@ def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
|
||||
f"Unexpected error while loading crew: {str(e)}", style="bold red"
|
||||
)
|
||||
raise SystemExit
|
||||
return crew_instances
|
||||
|
||||
|
||||
def get_crew_instance(module_attr) -> Crew | None:
|
||||
if (
|
||||
callable(module_attr)
|
||||
and hasattr(module_attr, "is_crew_class")
|
||||
and module_attr.is_crew_class
|
||||
):
|
||||
return module_attr().crew()
|
||||
if (ismethod(module_attr) or isfunction(module_attr)) and get_type_hints(
|
||||
module_attr
|
||||
).get("return") is Crew:
|
||||
return module_attr()
|
||||
elif isinstance(module_attr, Crew):
|
||||
return module_attr
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
def fetch_crews(module_attr) -> list[Crew]:
|
||||
crew_instances: list[Crew] = []
|
||||
|
||||
if crew_instance := get_crew_instance(module_attr):
|
||||
crew_instances.append(crew_instance)
|
||||
|
||||
if isinstance(module_attr, type) and issubclass(module_attr, Flow):
|
||||
instance = module_attr()
|
||||
for attr_name in dir(instance):
|
||||
attr = getattr(instance, attr_name)
|
||||
if crew_instance := get_crew_instance(attr):
|
||||
crew_instances.append(crew_instance)
|
||||
return crew_instances
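A condensed sketch of the discovery logic above: an attribute counts as a crew if it is a decorated crew class, a function annotated to return `Crew`, or a bare `Crew` instance. The `Crew` class and module-level attributes below are toy stand-ins, not the crewai objects.

```python
# Hypothetical, self-contained version of get_crew_instance for illustration.
from inspect import isfunction, ismethod
from typing import Optional, get_type_hints


class Crew:
    pass


def get_crew_instance(module_attr) -> Optional[Crew]:
    # Decorated crew classes expose an is_crew_class marker and a .crew() factory.
    if callable(module_attr) and getattr(module_attr, "is_crew_class", False):
        return module_attr().crew()
    # Plain functions/methods qualify if their return annotation is Crew.
    if (ismethod(module_attr) or isfunction(module_attr)) and get_type_hints(
        module_attr
    ).get("return") is Crew:
        return module_attr()
    # Bare Crew instances defined at module level are picked up directly.
    if isinstance(module_attr, Crew):
        return module_attr
    return None


my_crew = Crew()                        # found as a bare instance


def build_crew() -> Crew:               # found via its return annotation
    return Crew()


print(get_crew_instance(my_crew))       # a Crew object
print(get_crew_instance(build_crew))    # a Crew object
print(get_crew_instance("not a crew"))  # None
```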
|
||||
|
||||
@@ -6,7 +6,17 @@ import warnings
|
||||
from concurrent.futures import Future
|
||||
from copy import copy as shallow_copy
|
||||
from hashlib import md5
|
||||
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union, cast
|
||||
from typing import (
|
||||
Any,
|
||||
Callable,
|
||||
Dict,
|
||||
List,
|
||||
Optional,
|
||||
Set,
|
||||
Tuple,
|
||||
Union,
|
||||
cast,
|
||||
)
|
||||
|
||||
from pydantic import (
|
||||
UUID4,
|
||||
@@ -24,6 +34,7 @@ from crewai.agent import Agent
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from crewai.agents.cache import CacheHandler
|
||||
from crewai.crews.crew_output import CrewOutput
|
||||
from crewai.flow.flow_trackable import FlowTrackable
|
||||
from crewai.knowledge.knowledge import Knowledge
|
||||
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
|
||||
from crewai.llm import LLM, BaseLLM
|
||||
@@ -41,7 +52,7 @@ from crewai.tools.agent_tools.agent_tools import AgentTools
|
||||
from crewai.tools.base_tool import BaseTool, Tool
|
||||
from crewai.types.usage_metrics import UsageMetrics
|
||||
from crewai.utilities import I18N, FileHandler, Logger, RPMController
|
||||
from crewai.utilities.constants import TRAINING_DATA_FILE
|
||||
from crewai.utilities.constants import NOT_SPECIFIED, TRAINING_DATA_FILE
|
||||
from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator
|
||||
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
|
||||
from crewai.utilities.events.crew_events import (
|
||||
@@ -69,7 +80,7 @@ from crewai.utilities.training_handler import CrewTrainingHandler
|
||||
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
|
||||
|
||||
|
||||
class Crew(BaseModel):
|
||||
class Crew(FlowTrackable, BaseModel):
|
||||
"""
|
||||
Represents a group of agents, defining how they should collaborate and the tasks they should perform.
|
||||
|
||||
@@ -304,7 +315,9 @@ class Crew(BaseModel):
|
||||
"""Initialize private memory attributes."""
|
||||
self._external_memory = (
|
||||
# External memory doesn’t support a default value since it was designed to be managed entirely externally
|
||||
self.external_memory.set_crew(self) if self.external_memory else None
|
||||
self.external_memory.set_crew(self)
|
||||
if self.external_memory
|
||||
else None
|
||||
)
|
||||
|
||||
self._long_term_memory = self.long_term_memory
|
||||
@@ -333,6 +346,7 @@ class Crew(BaseModel):
|
||||
embedder=self.embedder,
|
||||
collection_name="crew",
|
||||
)
|
||||
self.knowledge.add_sources()
|
||||
|
||||
except Exception as e:
|
||||
self._logger.log(
|
||||
@@ -464,7 +478,7 @@ class Crew(BaseModel):
|
||||
separated by a synchronous task.
|
||||
"""
|
||||
for i, task in enumerate(self.tasks):
|
||||
if task.async_execution and task.context:
|
||||
if task.async_execution and isinstance(task.context, list):
|
||||
for context_task in task.context:
|
||||
if context_task.async_execution:
|
||||
for j in range(i - 1, -1, -1):
|
||||
@@ -482,7 +496,7 @@ class Crew(BaseModel):
|
||||
task_indices = {id(task): i for i, task in enumerate(self.tasks)}
|
||||
|
||||
for task in self.tasks:
|
||||
if task.context:
|
||||
if isinstance(task.context, list):
|
||||
for context_task in task.context:
|
||||
if id(context_task) not in task_indices:
|
||||
continue # Skip context tasks not in the main tasks list
|
||||
@@ -1020,11 +1034,14 @@ class Crew(BaseModel):
|
||||
)
|
||||
return cast(List[BaseTool], tools)
|
||||
|
||||
def _get_context(self, task: Task, task_outputs: List[TaskOutput]):
|
||||
def _get_context(self, task: Task, task_outputs: List[TaskOutput]) -> str:
|
||||
if not task.context:
|
||||
return ""
|
||||
|
||||
context = (
|
||||
aggregate_raw_outputs_from_tasks(task.context)
|
||||
if task.context
|
||||
else aggregate_raw_outputs_from_task_outputs(task_outputs)
|
||||
aggregate_raw_outputs_from_task_outputs(task_outputs)
|
||||
if task.context is NOT_SPECIFIED
|
||||
else aggregate_raw_outputs_from_tasks(task.context)
|
||||
)
|
||||
return context
|
||||
|
||||
@@ -1212,7 +1229,7 @@ class Crew(BaseModel):
|
||||
task_mapping[task.key] = cloned_task
|
||||
|
||||
for cloned_task, original_task in zip(cloned_tasks, self.tasks):
|
||||
if original_task.context:
|
||||
if isinstance(original_task.context, list):
|
||||
cloned_context = [
|
||||
task_mapping[context_task.key]
|
||||
for context_task in original_task.context
|
||||
@@ -1339,7 +1356,7 @@ class Crew(BaseModel):
|
||||
|
||||
Args:
|
||||
command_type: Type of memory to reset.
|
||||
Valid options: 'long', 'short', 'entity', 'knowledge',
|
||||
Valid options: 'long', 'short', 'entity', 'knowledge', 'agent_knowledge'
|
||||
'kickoff_outputs', or 'all'
|
||||
|
||||
Raises:
|
||||
@@ -1352,6 +1369,7 @@ class Crew(BaseModel):
|
||||
"short",
|
||||
"entity",
|
||||
"knowledge",
|
||||
"agent_knowledge",
|
||||
"kickoff_outputs",
|
||||
"all",
|
||||
"external",
|
||||
@@ -1369,8 +1387,6 @@ class Crew(BaseModel):
|
||||
else:
|
||||
self._reset_specific_memory(command_type)
|
||||
|
||||
self._logger.log("info", f"{command_type} memory has been reset")
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to reset {command_type} memory: {str(e)}"
|
||||
self._logger.log("error", error_msg)
|
||||
@@ -1378,21 +1394,22 @@ class Crew(BaseModel):
|
||||
|
||||
def _reset_all_memories(self) -> None:
|
||||
"""Reset all available memory systems."""
|
||||
memory_systems = [
|
||||
("short term", getattr(self, "_short_term_memory", None)),
|
||||
("entity", getattr(self, "_entity_memory", None)),
|
||||
("external", getattr(self, "_external_memory", None)),
|
||||
("long term", getattr(self, "_long_term_memory", None)),
|
||||
("task output", getattr(self, "_task_output_handler", None)),
|
||||
("knowledge", getattr(self, "knowledge", None)),
|
||||
]
|
||||
memory_systems = self._get_memory_systems()
|
||||
|
||||
for name, system in memory_systems:
|
||||
if system is not None:
|
||||
for memory_type, config in memory_systems.items():
|
||||
if (system := config.get('system')) is not None:
|
||||
name = config.get('name')
|
||||
try:
|
||||
system.reset()
|
||||
reset_fn: Callable = cast(Callable, config.get('reset'))
|
||||
reset_fn(system)
|
||||
self._logger.log(
|
||||
"info",
|
||||
f"[Crew ({self.name if self.name else self.id})] {name} memory has been reset",
|
||||
)
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Failed to reset {name} memory") from e
|
||||
raise RuntimeError(
|
||||
f"[Crew ({self.name if self.name else self.id})] Failed to reset {name} memory: {str(e)}"
|
||||
) from e
|
||||
|
||||
def _reset_specific_memory(self, memory_type: str) -> None:
|
||||
"""Reset a specific memory system.
|
||||
@@ -1403,23 +1420,83 @@ class Crew(BaseModel):
|
||||
Raises:
|
||||
RuntimeError: If the specified memory system fails to reset
|
||||
"""
|
||||
reset_functions = {
|
||||
"long": (getattr(self, "_long_term_memory", None), "long term"),
|
||||
"short": (getattr(self, "_short_term_memory", None), "short term"),
|
||||
"entity": (getattr(self, "_entity_memory", None), "entity"),
|
||||
"knowledge": (getattr(self, "knowledge", None), "knowledge"),
|
||||
"kickoff_outputs": (
|
||||
getattr(self, "_task_output_handler", None),
|
||||
"task output",
|
||||
),
|
||||
"external": (getattr(self, "_external_memory", None), "external"),
|
||||
memory_systems = self._get_memory_systems()
|
||||
config = memory_systems[memory_type]
|
||||
system = config.get('system')
|
||||
name = config.get('name')
|
||||
|
||||
if system is None:
|
||||
raise RuntimeError(f"{name} memory system is not initialized")
|
||||
|
||||
try:
|
||||
reset_fn: Callable = cast(Callable, config.get('reset'))
|
||||
reset_fn(system)
|
||||
self._logger.log(
|
||||
"info",
|
||||
f"[Crew ({self.name if self.name else self.id})] {name} memory has been reset",
|
||||
)
|
||||
except Exception as e:
|
||||
raise RuntimeError(
|
||||
f"[Crew ({self.name if self.name else self.id})] Failed to reset {name} memory: {str(e)}"
|
||||
) from e
|
||||
|
||||
def _get_memory_systems(self):
|
||||
"""Get all available memory systems with their configuration.
|
||||
|
||||
Returns:
|
||||
Dict containing all memory systems with their reset functions and display names.
|
||||
"""
|
||||
def default_reset(memory):
|
||||
return memory.reset()
|
||||
def knowledge_reset(memory):
|
||||
return self.reset_knowledge(memory)
|
||||
|
||||
# Get knowledge for agents
|
||||
agent_knowledges = [getattr(agent, "knowledge", None) for agent in self.agents
|
||||
if getattr(agent, "knowledge", None) is not None]
|
||||
# Get knowledge for crew and agents
|
||||
crew_knowledge = getattr(self, "knowledge", None)
|
||||
crew_and_agent_knowledges = ([crew_knowledge] if crew_knowledge is not None else []) + agent_knowledges
|
||||
|
||||
return {
|
||||
'short': {
|
||||
'system': getattr(self, "_short_term_memory", None),
|
||||
'reset': default_reset,
|
||||
'name': 'Short Term'
|
||||
},
|
||||
'entity': {
|
||||
'system': getattr(self, "_entity_memory", None),
|
||||
'reset': default_reset,
|
||||
'name': 'Entity'
|
||||
},
|
||||
'external': {
|
||||
'system': getattr(self, "_external_memory", None),
|
||||
'reset': default_reset,
|
||||
'name': 'External'
|
||||
},
|
||||
'long': {
|
||||
'system': getattr(self, "_long_term_memory", None),
|
||||
'reset': default_reset,
|
||||
'name': 'Long Term'
|
||||
},
|
||||
'kickoff_outputs': {
|
||||
'system': getattr(self, "_task_output_handler", None),
|
||||
'reset': default_reset,
|
||||
'name': 'Task Output'
|
||||
},
|
||||
'knowledge': {
|
||||
'system': crew_and_agent_knowledges if crew_and_agent_knowledges else None,
|
||||
'reset': knowledge_reset,
|
||||
'name': 'Crew Knowledge and Agent Knowledge'
|
||||
},
|
||||
'agent_knowledge': {
|
||||
'system': agent_knowledges if agent_knowledges else None,
|
||||
'reset': knowledge_reset,
|
||||
'name': 'Agent Knowledge'
|
||||
}
|
||||
}
|
||||
|
||||
memory_system, name = reset_functions[memory_type]
|
||||
if memory_system is None:
|
||||
raise RuntimeError(f"{name} memory system is not initialized")
|
||||
|
||||
try:
|
||||
memory_system.reset()
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Failed to reset {name} memory") from e
|
||||
def reset_knowledge(self, knowledges: List[Knowledge]) -> None:
|
||||
"""Reset crew and agent knowledge storage."""
|
||||
for ks in knowledges:
|
||||
ks.reset()
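The refactor above centralizes memory handling in a registry: each memory type maps to a system, a reset callable, and a display name, so "reset all" and "reset one" share the same lookup. Below is a toy version of that pattern; `ToyMemory` and the two entries are illustrative, not the crew's actual memory systems.

```python
# Minimal sketch of the registry pattern behind _get_memory_systems.
from typing import Any, Callable, Dict, Optional


class ToyMemory:
    def __init__(self, name: str) -> None:
        self.name = name

    def reset(self) -> None:
        print(f"{self.name} cleared")


def get_memory_systems(short: Optional[ToyMemory], long: Optional[ToyMemory]):
    def default_reset(memory: ToyMemory) -> None:
        memory.reset()

    return {
        "short": {"system": short, "reset": default_reset, "name": "Short Term"},
        "long": {"system": long, "reset": default_reset, "name": "Long Term"},
    }


def reset_specific(systems: Dict[str, Dict[str, Any]], memory_type: str) -> None:
    config = systems[memory_type]
    system = config["system"]
    if system is None:
        raise RuntimeError(f"{config['name']} memory system is not initialized")
    reset_fn: Callable[[ToyMemory], None] = config["reset"]
    reset_fn(system)


systems = get_memory_systems(short=ToyMemory("short term"), long=None)
reset_specific(systems, "short")   # prints: short term cleared
# reset_specific(systems, "long")  # would raise: not initialized
```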
|
||||
|
||||
44
src/crewai/flow/flow_trackable.py
Normal file
@@ -0,0 +1,44 @@
|
||||
import inspect
|
||||
from typing import Optional
|
||||
|
||||
from pydantic import BaseModel, Field, InstanceOf, model_validator
|
||||
|
||||
from crewai.flow import Flow
|
||||
|
||||
|
||||
class FlowTrackable(BaseModel):
|
||||
"""Mixin that tracks the Flow instance that instantiated the object, e.g. a
|
||||
Flow instance that created a Crew or Agent.
|
||||
|
||||
Automatically finds and stores a reference to the parent Flow instance by
|
||||
inspecting the call stack.
|
||||
"""
|
||||
|
||||
parent_flow: Optional[InstanceOf[Flow]] = Field(
|
||||
default=None,
|
||||
description="The parent flow of the instance, if it was created inside a flow.",
|
||||
)
|
||||
|
||||
@model_validator(mode="after")
|
||||
def _set_parent_flow(self, max_depth: int = 5) -> "FlowTrackable":
|
||||
frame = inspect.currentframe()
|
||||
|
||||
try:
|
||||
if frame is None:
|
||||
return self
|
||||
|
||||
frame = frame.f_back
|
||||
for _ in range(max_depth):
|
||||
if frame is None:
|
||||
break
|
||||
|
||||
candidate = frame.f_locals.get("self")
|
||||
if isinstance(candidate, Flow):
|
||||
self.parent_flow = candidate
|
||||
break
|
||||
|
||||
frame = frame.f_back
|
||||
finally:
|
||||
del frame
|
||||
|
||||
return self
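The new `FlowTrackable` mixin detects its parent flow by walking a few frames up the call stack and looking for a `self` local that is a `Flow` instance. The sketch below shows the same frame-walk in isolation; `Flow`, `Crew`, and `MyFlow` here are toy classes, not the crewai ones.

```python
# Illustrative frame-walk, assuming the object is constructed inside a Flow method.
import inspect
from typing import Optional


class Flow:
    pass


class Crew:
    def __init__(self) -> None:
        self.parent_flow: Optional[Flow] = self._find_parent_flow()

    @staticmethod
    def _find_parent_flow(max_depth: int = 5) -> Optional[Flow]:
        frame = inspect.currentframe()
        try:
            for _ in range(max_depth):
                if frame is None:
                    return None
                candidate = frame.f_locals.get("self")
                if isinstance(candidate, Flow):
                    return candidate
                frame = frame.f_back
        finally:
            del frame  # avoid reference cycles from holding frame objects
        return None


class MyFlow(Flow):
    def build(self) -> Crew:
        return Crew()  # created inside a Flow method, so the walk finds MyFlow


print(MyFlow().build().parent_flow)  # a MyFlow instance
print(Crew().parent_flow)            # None: not created inside a Flow
```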
|
||||
@@ -41,7 +41,6 @@ class Knowledge(BaseModel):
|
||||
)
|
||||
self.sources = sources
|
||||
self.storage.initialize_knowledge_storage()
|
||||
self._add_sources()
|
||||
|
||||
def query(
|
||||
self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
|
||||
@@ -63,7 +62,7 @@ class Knowledge(BaseModel):
|
||||
)
|
||||
return results
|
||||
|
||||
def _add_sources(self):
|
||||
def add_sources(self):
|
||||
try:
|
||||
for source in self.sources:
|
||||
source.storage = self.storage
|
||||
|
||||
@@ -13,6 +13,7 @@ from crewai.agents.parser import (
|
||||
AgentFinish,
|
||||
OutputParserException,
|
||||
)
|
||||
from crewai.flow.flow_trackable import FlowTrackable
|
||||
from crewai.llm import LLM
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
from crewai.tools.structured_tool import CrewStructuredTool
|
||||
@@ -80,7 +81,7 @@ class LiteAgentOutput(BaseModel):
|
||||
return self.raw
|
||||
|
||||
|
||||
class LiteAgent(BaseModel):
|
||||
class LiteAgent(FlowTrackable, BaseModel):
|
||||
"""
|
||||
A lightweight agent that can process messages and use tools.
|
||||
|
||||
@@ -162,7 +163,7 @@ class LiteAgent(BaseModel):
|
||||
_messages: List[Dict[str, str]] = PrivateAttr(default_factory=list)
|
||||
_iterations: int = PrivateAttr(default=0)
|
||||
_printer: Printer = PrivateAttr(default_factory=Printer)
|
||||
|
||||
|
||||
@model_validator(mode="after")
|
||||
def setup_llm(self):
|
||||
"""Set up the LLM and other components after initialization."""
|
||||
|
||||
@@ -5,8 +5,7 @@ import sys
|
||||
import threading
|
||||
import warnings
|
||||
from collections import defaultdict
|
||||
from contextlib import contextmanager
|
||||
from types import SimpleNamespace
|
||||
from contextlib import contextmanager, redirect_stderr, redirect_stdout
|
||||
from typing import (
|
||||
Any,
|
||||
DefaultDict,
|
||||
@@ -31,7 +30,6 @@ from crewai.utilities.events.llm_events import (
|
||||
LLMCallType,
|
||||
LLMStreamChunkEvent,
|
||||
)
|
||||
from crewai.utilities.events.tool_usage_events import ToolExecutionErrorEvent
|
||||
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore", UserWarning)
|
||||
@@ -45,6 +43,9 @@ with warnings.catch_warnings():
|
||||
from litellm.utils import supports_response_schema
|
||||
|
||||
|
||||
import io
|
||||
from typing import TextIO
|
||||
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.utilities.events import crewai_event_bus
|
||||
from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
@@ -54,12 +55,17 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
load_dotenv()
|
||||
|
||||
|
||||
class FilteredStream:
|
||||
def __init__(self, original_stream):
|
||||
class FilteredStream(io.TextIOBase):
|
||||
_lock = None
|
||||
|
||||
def __init__(self, original_stream: TextIO):
|
||||
self._original_stream = original_stream
|
||||
self._lock = threading.Lock()
|
||||
|
||||
def write(self, s) -> int:
|
||||
def write(self, s: str) -> int:
|
||||
if not self._lock:
|
||||
self._lock = threading.Lock()
|
||||
|
||||
with self._lock:
|
||||
# Filter out extraneous messages from LiteLLM
|
||||
if (
|
||||
@@ -214,15 +220,11 @@ def suppress_warnings():
|
||||
)
|
||||
|
||||
# Redirect stdout and stderr
|
||||
old_stdout = sys.stdout
|
||||
old_stderr = sys.stderr
|
||||
sys.stdout = FilteredStream(old_stdout)
|
||||
sys.stderr = FilteredStream(old_stderr)
|
||||
try:
|
||||
with (
|
||||
redirect_stdout(FilteredStream(sys.stdout)),
|
||||
redirect_stderr(FilteredStream(sys.stderr)),
|
||||
):
|
||||
yield
|
||||
finally:
|
||||
sys.stdout = old_stdout
|
||||
sys.stderr = old_stderr
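The change above swaps manual stream swapping for `redirect_stdout`/`redirect_stderr`, so the original streams are restored even if the body raises. A small self-contained sketch of the same idea follows; the filter phrase and `quiet_output` name are illustrative, not the library's.

```python
# Sketch: a TextIOBase wrapper that drops noisy lines, installed temporarily.
import io
import sys
from contextlib import contextmanager, redirect_stderr, redirect_stdout
from typing import TextIO


class FilteredStream(io.TextIOBase):
    def __init__(self, original_stream: TextIO) -> None:
        self._original_stream = original_stream

    def write(self, s: str) -> int:
        if "give feedback" in s.lower():  # drop unwanted provider chatter
            return 0
        return self._original_stream.write(s)

    def flush(self) -> None:
        self._original_stream.flush()


@contextmanager
def quiet_output():
    with (
        redirect_stdout(FilteredStream(sys.stdout)),
        redirect_stderr(FilteredStream(sys.stderr)),
    ):
        yield


with quiet_output():
    print("useful result")              # printed
    print("Please Give Feedback here")  # filtered out
```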
|
||||
|
||||
|
||||
class Delta(TypedDict):
|
||||
|
||||
@@ -2,7 +2,6 @@ import datetime
|
||||
import inspect
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
import threading
|
||||
import uuid
|
||||
from concurrent.futures import Future
|
||||
@@ -41,6 +40,7 @@ from crewai.tasks.output_format import OutputFormat
|
||||
from crewai.tasks.task_output import TaskOutput
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
from crewai.utilities.config import process_config
|
||||
from crewai.utilities.constants import NOT_SPECIFIED
|
||||
from crewai.utilities.converter import Converter, convert_to_model
|
||||
from crewai.utilities.events import (
|
||||
TaskCompletedEvent,
|
||||
@@ -97,7 +97,7 @@ class Task(BaseModel):
|
||||
)
|
||||
context: Optional[List["Task"]] = Field(
|
||||
description="Other tasks that will have their output used as context for this task.",
|
||||
default=None,
|
||||
default=NOT_SPECIFIED,
|
||||
)
|
||||
async_execution: Optional[bool] = Field(
|
||||
description="Whether the task should be executed asynchronously or not.",
|
||||
@@ -643,7 +643,7 @@ class Task(BaseModel):
|
||||
|
||||
cloned_context = (
|
||||
[task_mapping[context_task.key] for context_task in self.context]
|
||||
if self.context
|
||||
if isinstance(self.context, list)
|
||||
else None
|
||||
)
|
||||
|
||||
|
||||
@@ -2,6 +2,7 @@ from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import platform
|
||||
import warnings
|
||||
@@ -9,11 +10,25 @@ from contextlib import contextmanager
|
||||
from importlib.metadata import version
|
||||
from typing import TYPE_CHECKING, Any, Optional
|
||||
|
||||
from opentelemetry import trace
|
||||
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
|
||||
OTLPSpanExporter,
|
||||
)
|
||||
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
|
||||
from opentelemetry.sdk.trace import TracerProvider
|
||||
from opentelemetry.sdk.trace.export import (
|
||||
BatchSpanProcessor,
|
||||
SpanExportResult,
|
||||
)
|
||||
from opentelemetry.trace import Span, Status, StatusCode
|
||||
|
||||
from crewai.telemetry.constants import (
|
||||
CREWAI_TELEMETRY_BASE_URL,
|
||||
CREWAI_TELEMETRY_SERVICE_NAME,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@contextmanager
|
||||
def suppress_warnings():
|
||||
@@ -22,20 +37,20 @@ def suppress_warnings():
|
||||
yield
|
||||
|
||||
|
||||
from opentelemetry import trace # noqa: E402
|
||||
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
|
||||
OTLPSpanExporter, # noqa: E402
|
||||
)
|
||||
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
|
||||
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
|
||||
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
|
||||
from opentelemetry.trace import Span, Status, StatusCode # noqa: E402
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.crew import Crew
|
||||
from crewai.task import Task
|
||||
|
||||
|
||||
class SafeOTLPSpanExporter(OTLPSpanExporter):
|
||||
def export(self, spans) -> SpanExportResult:
|
||||
try:
|
||||
return super().export(spans)
|
||||
except Exception as e:
|
||||
logger.error(e)
|
||||
return SpanExportResult.FAILURE
|
||||
|
||||
|
||||
class Telemetry:
|
||||
"""A class to handle anonymous telemetry for the crewai package.
|
||||
|
||||
@@ -64,7 +79,7 @@ class Telemetry:
|
||||
self.provider = TracerProvider(resource=self.resource)
|
||||
|
||||
processor = BatchSpanProcessor(
|
||||
OTLPSpanExporter(
|
||||
SafeOTLPSpanExporter(
|
||||
endpoint=f"{CREWAI_TELEMETRY_BASE_URL}/v1/traces",
|
||||
timeout=30,
|
||||
)
|
||||
@@ -217,7 +232,7 @@ class Telemetry:
|
||||
"agent_key": task.agent.key if task.agent else None,
|
||||
"context": (
|
||||
[task.description for task in task.context]
|
||||
if task.context
|
||||
if isinstance(task.context, list)
|
||||
else None
|
||||
),
|
||||
"tools_names": [
|
||||
@@ -733,7 +748,7 @@ class Telemetry:
|
||||
"agent_key": task.agent.key if task.agent else None,
|
||||
"context": (
|
||||
[task.description for task in task.context]
|
||||
if task.context
|
||||
if isinstance(task.context, list)
|
||||
else None
|
||||
),
|
||||
"tools_names": [
|
||||
|
||||
@@ -27,7 +27,9 @@
|
||||
"feedback_instructions": "User feedback: {feedback}\nInstructions: Use this feedback to enhance the next output iteration.\nNote: Do not respond or add commentary.",
|
||||
"lite_agent_system_prompt_with_tools": "You are {role}. {backstory}\nYour personal goal is: {goal}\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
|
||||
"lite_agent_system_prompt_without_tools": "You are {role}. {backstory}\nYour personal goal is: {goal}\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
|
||||
"lite_agent_response_format": "\nIMPORTANT: Your final answer MUST contain all the information requested in the following format: {response_format}\n\nIMPORTANT: Ensure the final output does not include any code block markers like ```json or ```python."
|
||||
"lite_agent_response_format": "\nIMPORTANT: Your final answer MUST contain all the information requested in the following format: {response_format}\n\nIMPORTANT: Ensure the final output does not include any code block markers like ```json or ```python.",
|
||||
"knowledge_search_query": "The original query is: {task_prompt}.",
|
||||
"knowledge_search_query_system_prompt": "Your goal is to rewrite the user query so that it is optimized for retrieval from a vector database. Consider how the query will be used to find relevant documents, and aim to make it more specific and context-aware. \n\n Do not include any other text than the rewritten query, especially any preamble or postamble and only add expected output format if its relevant to the rewritten query. \n\n Focus on the key words of the intended task and to retrieve the most relevant information. \n\n There will be some extra context provided that might need to be removed such as expected_output formats structured_outputs and other instructions."
|
||||
},
|
||||
"errors": {
|
||||
"force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}",
|
||||
|
||||
@@ -16,6 +16,7 @@ from crewai.tools.base_tool import BaseTool
|
||||
from crewai.tools.structured_tool import CrewStructuredTool
|
||||
from crewai.tools.tool_types import ToolResult
|
||||
from crewai.utilities import I18N, Printer
|
||||
from crewai.utilities.errors import AgentRepositoryError
|
||||
from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
LLMContextLengthExceededException,
|
||||
)
|
||||
@@ -428,3 +429,41 @@ def show_agent_logs(
|
||||
printer.print(
|
||||
content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m\n\n"
|
||||
)
|
||||
|
||||
|
||||
def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
|
||||
attributes: Dict[str, Any] = {}
|
||||
if from_repository:
|
||||
import importlib
|
||||
|
||||
from crewai.cli.authentication.token import get_auth_token
|
||||
from crewai.cli.plus_api import PlusAPI
|
||||
|
||||
client = PlusAPI(api_key=get_auth_token())
|
||||
response = client.get_agent(from_repository)
|
||||
if response.status_code == 404:
|
||||
raise AgentRepositoryError(
|
||||
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization"
|
||||
)
|
||||
|
||||
if response.status_code != 200:
|
||||
raise AgentRepositoryError(
|
||||
f"Agent {from_repository} could not be loaded: {response.text}"
|
||||
)
|
||||
|
||||
agent = response.json()
|
||||
for key, value in agent.items():
|
||||
if key == "tools":
|
||||
attributes[key] = []
|
||||
for tool in value:
|
||||
try:
|
||||
module = importlib.import_module("crewai_tools")
|
||||
tool_class = getattr(module, tool["name"])
|
||||
attributes[key].append(tool_class())
|
||||
except Exception as e:
|
||||
raise AgentRepositoryError(
|
||||
f"Tool {tool['name']} could not be loaded: {e}"
|
||||
) from e
|
||||
else:
|
||||
attributes[key] = value
|
||||
return attributes
|
||||
|
||||
@@ -5,3 +5,14 @@ KNOWLEDGE_DIRECTORY = "knowledge"
|
||||
MAX_LLM_RETRY = 3
|
||||
MAX_FILE_NAME_LENGTH = 255
|
||||
EMITTER_COLOR = "bold_blue"
|
||||
|
||||
|
||||
class _NotSpecified:
|
||||
def __repr__(self):
|
||||
return "NOT_SPECIFIED"
|
||||
|
||||
|
||||
# Sentinel value used to detect when no value has been explicitly provided.
|
||||
# Unlike `None`, which might be a valid value from the user, `NOT_SPECIFIED` allows
|
||||
# us to distinguish between "not passed at all" and "explicitly passed None" or "[]".
|
||||
NOT_SPECIFIED = _NotSpecified()
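The diff uses this sentinel as the default for `Task.context` and checks `task.context is NOT_SPECIFIED` in `_get_context`. A minimal sketch of why the sentinel is needed is below; `build_context` is a hypothetical helper, only the `_NotSpecified` idea comes from the diff.

```python
# Sketch: distinguishing "argument not passed" from "explicitly passed None/[]".
class _NotSpecified:
    def __repr__(self) -> str:
        return "NOT_SPECIFIED"


NOT_SPECIFIED = _NotSpecified()


def build_context(context=NOT_SPECIFIED) -> str:
    if context is NOT_SPECIFIED:
        return "fall back to previous task outputs"
    if context is None or context == []:
        return "caller explicitly asked for no context"
    return f"use the {len(context)} tasks the caller provided"


print(build_context())               # fall back to previous task outputs
print(build_context(context=None))   # caller explicitly asked for no context
print(build_context(context=["t1"])) # use the 1 tasks the caller provided
```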
|
||||
|
||||
@@ -1,4 +1,5 @@
|
||||
"""Error message definitions for CrewAI database operations."""
|
||||
|
||||
from typing import Optional
|
||||
|
||||
|
||||
@@ -37,3 +38,9 @@ class DatabaseError:
|
||||
The formatted error message
|
||||
"""
|
||||
return template.format(str(error))
|
||||
|
||||
|
||||
class AgentRepositoryError(Exception):
|
||||
"""Exception raised when an agent repository is not found."""
|
||||
|
||||
...
|
||||
|
||||
@@ -70,7 +70,12 @@ class CrewAIEventsBus:
|
||||
for event_type, handlers in self._handlers.items():
|
||||
if isinstance(event, event_type):
|
||||
for handler in handlers:
|
||||
handler(source, event)
|
||||
try:
|
||||
handler(source, event)
|
||||
except Exception as e:
|
||||
print(
|
||||
f"[EventBus Error] Handler '{handler.__name__}' failed for event '{event_type.__name__}': {e}"
|
||||
)
|
||||
|
||||
self._signal.send(source, event=event)
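The hunk above isolates handler failures so one faulty subscriber cannot block the rest. The toy event bus below reproduces that behaviour in isolation; it omits the signal send and uses made-up event and handler names.

```python
# Sketch: per-handler try/except keeps later handlers running after a failure.
from collections import defaultdict
from typing import Callable, DefaultDict, List, Type


class ToyEventBus:
    def __init__(self) -> None:
        self._handlers: DefaultDict[Type, List[Callable]] = defaultdict(list)

    def on(self, event_type: Type, handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def emit(self, source, event) -> None:
        for event_type, handlers in self._handlers.items():
            if isinstance(event, event_type):
                for handler in handlers:
                    try:
                        handler(source, event)
                    except Exception as e:
                        print(
                            f"[EventBus Error] Handler '{handler.__name__}' "
                            f"failed for event '{event_type.__name__}': {e}"
                        )


class TaskDone:
    pass


def failing_handler(source, event):
    raise RuntimeError("boom")


def logging_handler(source, event):
    print("second handler still ran")


bus = ToyEventBus()
bus.on(TaskDone, failing_handler)
bus.on(TaskDone, logging_handler)
bus.emit("crew", TaskDone())
# [EventBus Error] Handler 'failing_handler' failed for event 'TaskDone': boom
# second handler still ran
```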
|
||||
|
||||
|
||||
@@ -8,6 +8,14 @@ from crewai.telemetry.telemetry import Telemetry
|
||||
from crewai.utilities import Logger
|
||||
from crewai.utilities.constants import EMITTER_COLOR
|
||||
from crewai.utilities.events.base_event_listener import BaseEventListener
|
||||
from crewai.utilities.events.knowledge_events import (
|
||||
KnowledgeQueryCompletedEvent,
|
||||
KnowledgeQueryFailedEvent,
|
||||
KnowledgeQueryStartedEvent,
|
||||
KnowledgeRetrievalCompletedEvent,
|
||||
KnowledgeRetrievalStartedEvent,
|
||||
KnowledgeSearchQueryFailedEvent,
|
||||
)
|
||||
from crewai.utilities.events.llm_events import (
|
||||
LLMCallCompletedEvent,
|
||||
LLMCallFailedEvent,
|
||||
@@ -57,6 +65,8 @@ class EventListener(BaseEventListener):
|
||||
execution_spans: Dict[Task, Any] = Field(default_factory=dict)
|
||||
next_chunk = 0
|
||||
text_stream = StringIO()
|
||||
knowledge_retrieval_in_progress = False
|
||||
knowledge_query_in_progress = False
|
||||
|
||||
def __new__(cls):
|
||||
if cls._instance is None:
|
||||
@@ -342,5 +352,59 @@ class EventListener(BaseEventListener):
|
||||
def on_crew_test_failed(source, event: CrewTestFailedEvent):
|
||||
self.formatter.handle_crew_test_failed(event.crew_name or "Crew")
|
||||
|
||||
@crewai_event_bus.on(KnowledgeRetrievalStartedEvent)
|
||||
def on_knowledge_retrieval_started(
|
||||
source, event: KnowledgeRetrievalStartedEvent
|
||||
):
|
||||
if self.knowledge_retrieval_in_progress:
|
||||
return
|
||||
|
||||
self.knowledge_retrieval_in_progress = True
|
||||
|
||||
self.formatter.handle_knowledge_retrieval_started(
|
||||
self.formatter.current_agent_branch,
|
||||
self.formatter.current_crew_tree,
|
||||
)
|
||||
|
||||
@crewai_event_bus.on(KnowledgeRetrievalCompletedEvent)
|
||||
def on_knowledge_retrieval_completed(
|
||||
source, event: KnowledgeRetrievalCompletedEvent
|
||||
):
|
||||
if not self.knowledge_retrieval_in_progress:
|
||||
return
|
||||
|
||||
self.knowledge_retrieval_in_progress = False
|
||||
self.formatter.handle_knowledge_retrieval_completed(
|
||||
self.formatter.current_agent_branch,
|
||||
self.formatter.current_crew_tree,
|
||||
event.retrieved_knowledge,
|
||||
)
|
||||
|
||||
@crewai_event_bus.on(KnowledgeQueryStartedEvent)
|
||||
def on_knowledge_query_started(source, event: KnowledgeQueryStartedEvent):
|
||||
pass
|
||||
|
||||
@crewai_event_bus.on(KnowledgeQueryFailedEvent)
|
||||
def on_knowledge_query_failed(source, event: KnowledgeQueryFailedEvent):
|
||||
self.formatter.handle_knowledge_query_failed(
|
||||
self.formatter.current_agent_branch,
|
||||
event.error,
|
||||
self.formatter.current_crew_tree,
|
||||
)
|
||||
|
||||
@crewai_event_bus.on(KnowledgeQueryCompletedEvent)
|
||||
def on_knowledge_query_completed(source, event: KnowledgeQueryCompletedEvent):
|
||||
pass
|
||||
|
||||
@crewai_event_bus.on(KnowledgeSearchQueryFailedEvent)
|
||||
def on_knowledge_search_query_failed(
|
||||
source, event: KnowledgeSearchQueryFailedEvent
|
||||
):
|
||||
self.formatter.handle_knowledge_search_query_failed(
|
||||
self.formatter.current_agent_branch,
|
||||
event.error,
|
||||
self.formatter.current_crew_tree,
|
||||
)
|
||||
|
||||
|
||||
event_listener = EventListener()
|
||||
|
||||
56
src/crewai/utilities/events/knowledge_events.py
Normal file
@@ -0,0 +1,56 @@
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from crewai.utilities.events.base_events import BaseEvent
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
|
||||
|
||||
class KnowledgeRetrievalStartedEvent(BaseEvent):
|
||||
"""Event emitted when a knowledge retrieval is started."""
|
||||
|
||||
type: str = "knowledge_search_query_started"
|
||||
agent: BaseAgent
|
||||
|
||||
|
||||
class KnowledgeRetrievalCompletedEvent(BaseEvent):
|
||||
"""Event emitted when a knowledge retrieval is completed."""
|
||||
|
||||
query: str
|
||||
type: str = "knowledge_search_query_completed"
|
||||
agent: BaseAgent
|
||||
retrieved_knowledge: Any
|
||||
|
||||
|
||||
class KnowledgeQueryStartedEvent(BaseEvent):
|
||||
"""Event emitted when a knowledge query is started."""
|
||||
|
||||
task_prompt: str
|
||||
type: str = "knowledge_query_started"
|
||||
agent: BaseAgent
|
||||
|
||||
|
||||
class KnowledgeQueryFailedEvent(BaseEvent):
|
||||
"""Event emitted when a knowledge query fails."""
|
||||
|
||||
type: str = "knowledge_query_failed"
|
||||
agent: BaseAgent
|
||||
error: str
|
||||
|
||||
|
||||
class KnowledgeQueryCompletedEvent(BaseEvent):
|
||||
"""Event emitted when a knowledge query is completed."""
|
||||
|
||||
query: str
|
||||
type: str = "knowledge_query_completed"
|
||||
agent: BaseAgent
|
||||
|
||||
|
||||
class KnowledgeSearchQueryFailedEvent(BaseEvent):
|
||||
"""Event emitted when a knowledge search query fails."""
|
||||
|
||||
query: str
|
||||
type: str = "knowledge_search_query_failed"
|
||||
agent: BaseAgent
|
||||
error: str
|
||||
@@ -783,3 +783,202 @@ class ConsoleFormatter:
|
||||
self.update_lite_agent_status(
|
||||
self.current_lite_agent_branch, lite_agent_role, status, **fields
|
||||
)
|
||||
|
||||
def handle_knowledge_retrieval_started(
|
||||
self,
|
||||
agent_branch: Optional[Tree],
|
||||
crew_tree: Optional[Tree],
|
||||
) -> Optional[Tree]:
|
||||
"""Handle knowledge retrieval started event."""
|
||||
if not self.verbose:
|
||||
return None
|
||||
|
||||
branch_to_use = agent_branch or self.current_lite_agent_branch
|
||||
tree_to_use = branch_to_use or crew_tree
|
||||
|
||||
if branch_to_use is None or tree_to_use is None:
|
||||
# If we don't have a valid branch, use crew_tree as the branch if available
|
||||
if crew_tree is not None:
|
||||
branch_to_use = tree_to_use = crew_tree
|
||||
else:
|
||||
return None
|
||||
|
||||
knowledge_branch = branch_to_use.add("")
|
||||
self.update_tree_label(
|
||||
knowledge_branch, "🔍", "Knowledge Retrieval Started", "blue"
|
||||
)
|
||||
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
return knowledge_branch
|
||||
|
||||
def handle_knowledge_retrieval_completed(
|
||||
self,
|
||||
agent_branch: Optional[Tree],
|
||||
crew_tree: Optional[Tree],
|
||||
retrieved_knowledge: Any,
|
||||
) -> None:
|
||||
"""Handle knowledge retrieval completed event."""
|
||||
if not self.verbose:
|
||||
return None
|
||||
|
||||
branch_to_use = self.current_lite_agent_branch or agent_branch
|
||||
tree_to_use = branch_to_use or crew_tree
|
||||
|
||||
if branch_to_use is None and tree_to_use is not None:
|
||||
branch_to_use = tree_to_use
|
||||
|
||||
if branch_to_use is None or tree_to_use is None:
|
||||
if retrieved_knowledge:
|
||||
knowledge_text = str(retrieved_knowledge)
|
||||
if len(knowledge_text) > 500:
|
||||
knowledge_text = knowledge_text[:497] + "..."
|
||||
|
||||
knowledge_panel = Panel(
|
||||
Text(knowledge_text, style="white"),
|
||||
title="📚 Retrieved Knowledge",
|
||||
border_style="green",
|
||||
padding=(1, 2),
|
||||
)
|
||||
self.print(knowledge_panel)
|
||||
self.print()
|
||||
return None
|
||||
|
||||
knowledge_branch_found = False
|
||||
for child in branch_to_use.children:
|
||||
if "Knowledge Retrieval Started" in str(child.label):
|
||||
self.update_tree_label(
|
||||
child, "✅", "Knowledge Retrieval Completed", "green"
|
||||
)
|
||||
knowledge_branch_found = True
|
||||
break
|
||||
|
||||
if not knowledge_branch_found:
|
||||
for child in branch_to_use.children:
|
||||
if (
|
||||
"Knowledge Retrieval" in str(child.label)
|
||||
and "Started" not in str(child.label)
|
||||
and "Completed" not in str(child.label)
|
||||
):
|
||||
self.update_tree_label(
|
||||
child, "✅", "Knowledge Retrieval Completed", "green"
|
||||
)
|
||||
knowledge_branch_found = True
|
||||
break
|
||||
|
||||
if not knowledge_branch_found:
|
||||
knowledge_branch = branch_to_use.add("")
|
||||
self.update_tree_label(
|
||||
knowledge_branch, "✅", "Knowledge Retrieval Completed", "green"
|
||||
)
|
||||
|
||||
self.print(tree_to_use)
|
||||
|
||||
if retrieved_knowledge:
|
||||
knowledge_text = str(retrieved_knowledge)
|
||||
if len(knowledge_text) > 500:
|
||||
knowledge_text = knowledge_text[:497] + "..."
|
||||
|
||||
knowledge_panel = Panel(
|
||||
Text(knowledge_text, style="white"),
|
||||
title="📚 Retrieved Knowledge",
|
||||
border_style="green",
|
||||
padding=(1, 2),
|
||||
)
|
||||
self.print(knowledge_panel)
|
||||
|
||||
self.print()
|
||||
|
||||
def handle_knowledge_query_started(
|
||||
self,
|
||||
agent_branch: Optional[Tree],
|
||||
task_prompt: str,
|
||||
crew_tree: Optional[Tree],
|
||||
) -> None:
|
||||
"""Handle knowledge query generated event."""
|
||||
if not self.verbose:
|
||||
return None
|
||||
|
||||
branch_to_use = self.current_lite_agent_branch or agent_branch
|
||||
tree_to_use = branch_to_use or crew_tree
|
||||
if branch_to_use is None or tree_to_use is None:
|
||||
return None
|
||||
|
||||
query_branch = branch_to_use.add("")
|
||||
self.update_tree_label(
|
||||
query_branch, "🔎", f"Query: {task_prompt[:50]}...", "yellow"
|
||||
)
|
||||
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
|
||||
def handle_knowledge_query_failed(
|
||||
self,
|
||||
agent_branch: Optional[Tree],
|
||||
error: str,
|
||||
crew_tree: Optional[Tree],
|
||||
) -> None:
|
||||
"""Handle knowledge query failed event."""
|
||||
if not self.verbose:
|
||||
return
|
||||
|
||||
tree_to_use = self.current_lite_agent_branch or crew_tree
|
||||
branch_to_use = self.current_lite_agent_branch or agent_branch
|
||||
|
||||
if branch_to_use and tree_to_use:
|
||||
query_branch = branch_to_use.add("")
|
||||
self.update_tree_label(query_branch, "❌", "Knowledge Query Failed", "red")
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
|
||||
# Show error panel
|
||||
error_content = self.create_status_content(
|
||||
"Knowledge Query Failed", "Query Error", "red", Error=error
|
||||
)
|
||||
self.print_panel(error_content, "Knowledge Error", "red")
|
||||
|
||||
def handle_knowledge_query_completed(
|
||||
self,
|
||||
agent_branch: Optional[Tree],
|
||||
crew_tree: Optional[Tree],
|
||||
) -> None:
|
||||
"""Handle knowledge query completed event."""
|
||||
if not self.verbose:
|
||||
return None
|
||||
|
||||
branch_to_use = self.current_lite_agent_branch or agent_branch
|
||||
tree_to_use = branch_to_use or crew_tree
|
||||
|
||||
if branch_to_use is None or tree_to_use is None:
|
||||
return None
|
||||
|
||||
query_branch = branch_to_use.add("")
|
||||
self.update_tree_label(query_branch, "✅", "Knowledge Query Completed", "green")
|
||||
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
|
||||
def handle_knowledge_search_query_failed(
|
||||
self,
|
||||
agent_branch: Optional[Tree],
|
||||
error: str,
|
||||
crew_tree: Optional[Tree],
|
||||
) -> None:
|
||||
"""Handle knowledge search query failed event."""
|
||||
if not self.verbose:
|
||||
return
|
||||
|
||||
tree_to_use = self.current_lite_agent_branch or crew_tree
|
||||
branch_to_use = self.current_lite_agent_branch or agent_branch
|
||||
|
||||
if branch_to_use and tree_to_use:
|
||||
query_branch = branch_to_use.add("")
|
||||
self.update_tree_label(query_branch, "❌", "Knowledge Search Failed", "red")
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
|
||||
# Show error panel
|
||||
error_content = self.create_status_content(
|
||||
"Knowledge Search Failed", "Search Error", "red", Error=error
|
||||
)
|
||||
self.print_panel(error_content, "Search Error", "red")
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
import re
|
||||
from typing import TYPE_CHECKING, List
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.task import Task
|
||||
from crewai.tasks.task_output import TaskOutput
|
||||
@@ -17,6 +17,11 @@ def aggregate_raw_outputs_from_task_outputs(task_outputs: List["TaskOutput"]) ->
|
||||
|
||||
def aggregate_raw_outputs_from_tasks(tasks: List["Task"]) -> str:
|
||||
"""Generate string context from the tasks."""
|
||||
task_outputs = [task.output for task in tasks if task.output is not None]
|
||||
|
||||
task_outputs = (
|
||||
[task.output for task in tasks if task.output is not None]
|
||||
if isinstance(tasks, list)
|
||||
else []
|
||||
)
|
||||
|
||||
return aggregate_raw_outputs_from_task_outputs(task_outputs)
|
||||
|
||||
@@ -59,7 +59,7 @@ def interpolate_only(
|
||||
# The regex pattern to find valid variable placeholders
|
||||
# Matches {variable_name} where variable_name starts with a letter/underscore
|
||||
# and contains only letters, numbers, and underscores
|
||||
pattern = r"\{([A-Za-z_][A-Za-z0-9_]*)\}"
|
||||
pattern = r"\{([A-Za-z_][A-Za-z0-9_\-]*)\}"
|
||||
|
||||
# Find all matching variables in the input string
|
||||
variables = re.findall(pattern, input_string)
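A quick check of what the widened placeholder pattern above accepts: names may now contain hyphens in addition to letters, digits, and underscores. The template string is illustrative.

```python
# Compare the old and new placeholder regexes from the hunk above.
import re

old_pattern = r"\{([A-Za-z_][A-Za-z0-9_]*)\}"
new_pattern = r"\{([A-Za-z_][A-Za-z0-9_\-]*)\}"

template = "Report on {topic} for {account-id} in {fiscal_year}"

print(re.findall(old_pattern, template))  # ['topic', 'fiscal_year']
print(re.findall(new_pattern, template))  # ['topic', 'account-id', 'fiscal_year']
```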
|
||||
|
||||
@@ -2,14 +2,13 @@
|
||||
|
||||
import os
|
||||
from unittest import mock
|
||||
from unittest.mock import patch
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
from crewai import Agent, Crew, Task
|
||||
from crewai.agents.cache import CacheHandler
|
||||
from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor
|
||||
from crewai.agents.parser import CrewAgentParser, OutputParserException
|
||||
from crewai.knowledge.knowledge import Knowledge
|
||||
from crewai.knowledge.knowledge_config import KnowledgeConfig
|
||||
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
|
||||
@@ -19,6 +18,7 @@ from crewai.tools import tool
|
||||
from crewai.tools.tool_calling import InstructorToolCalling
|
||||
from crewai.tools.tool_usage import ToolUsage
|
||||
from crewai.utilities import RPMController
|
||||
from crewai.utilities.errors import AgentRepositoryError
|
||||
from crewai.utilities.events import crewai_event_bus
|
||||
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent
|
||||
|
||||
@@ -73,6 +73,7 @@ def test_agent_creation():
|
||||
assert agent.goal == "test goal"
|
||||
assert agent.backstory == "test backstory"
|
||||
|
||||
|
||||
def test_agent_with_only_system_template():
|
||||
"""Test that an agent with only system_template works without errors."""
|
||||
agent = Agent(
|
||||
@@ -88,6 +89,7 @@ def test_agent_with_only_system_template():
|
||||
assert agent.goal == "Test Goal"
|
||||
assert agent.backstory == "Test Backstory"
|
||||
|
||||
|
||||
def test_agent_with_only_prompt_template():
|
||||
"""Test that an agent with only system_template works without errors."""
|
||||
agent = Agent(
|
||||
@@ -119,7 +121,8 @@ def test_agent_with_missing_response_template():
|
||||
assert agent.role == "Test Role"
|
||||
assert agent.goal == "Test Goal"
|
||||
assert agent.backstory == "Test Backstory"
|
||||
|
||||
|
||||
|
||||
def test_agent_default_values():
|
||||
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
|
||||
assert agent.llm.model == "gpt-4o-mini"
|
||||
@@ -306,9 +309,7 @@ def test_cache_hitting():
|
||||
def handle_tool_end(source, event):
|
||||
received_events.append(event)
|
||||
|
||||
with (
|
||||
patch.object(CacheHandler, "read") as read,
|
||||
):
|
||||
with (patch.object(CacheHandler, "read") as read,):
|
||||
read.return_value = "0"
|
||||
task = Task(
|
||||
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
|
||||
@@ -1038,7 +1039,7 @@ def test_agent_human_input():
|
||||
CrewAgentExecutor,
|
||||
"_invoke_loop",
|
||||
return_value=AgentFinish(output="Hello", thought="", text=""),
|
||||
) as mock_invoke_loop,
|
||||
),
|
||||
):
|
||||
# Execute the task
|
||||
output = agent.execute_task(task)
|
||||
@@ -1630,13 +1631,10 @@ def test_agent_with_knowledge_sources():
|
||||
# Create a knowledge source with some content
|
||||
content = "Brandon's favorite color is red and he likes Mexican food."
|
||||
string_source = StringKnowledgeSource(content=content)
|
||||
|
||||
with patch(
|
||||
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
|
||||
) as MockKnowledge:
|
||||
with patch("crewai.knowledge") as MockKnowledge:
|
||||
mock_knowledge_instance = MockKnowledge.return_value
|
||||
mock_knowledge_instance.sources = [string_source]
|
||||
mock_knowledge_instance.query.return_value = [{"content": content}]
|
||||
mock_knowledge_instance.search.return_value = [{"content": content}]
|
||||
|
||||
agent = Agent(
|
||||
role="Information Agent",
|
||||
@@ -1690,7 +1688,7 @@ def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold():
|
||||
|
||||
assert agent.knowledge is not None
|
||||
mock_knowledge_query.assert_called_once_with(
|
||||
[task.prompt()],
|
||||
["Brandon's favorite color"],
|
||||
**knowledge_config.model_dump(),
|
||||
)
|
||||
|
||||
@@ -1727,7 +1725,7 @@ def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_defau
|
||||
|
||||
assert agent.knowledge is not None
|
||||
mock_knowledge_query.assert_called_once_with(
|
||||
[task.prompt()],
|
||||
["Brandon's favorite color"],
|
||||
**knowledge_config.model_dump(),
|
||||
)
|
||||
|
||||
@@ -1737,9 +1735,7 @@ def test_agent_with_knowledge_sources_extensive_role():
|
||||
content = "Brandon's favorite color is red and he likes Mexican food."
|
||||
string_source = StringKnowledgeSource(content=content)
|
||||
|
||||
with patch(
|
||||
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
|
||||
) as MockKnowledge:
|
||||
with patch("crewai.knowledge") as MockKnowledge:
|
||||
mock_knowledge_instance = MockKnowledge.return_value
|
||||
mock_knowledge_instance.sources = [string_source]
|
||||
mock_knowledge_instance.query.return_value = [{"content": content}]
|
||||
@@ -1803,6 +1799,40 @@ def test_agent_with_knowledge_sources_works_with_copy():
    assert isinstance(agent_copy.llm, LLM)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_generate_search_query():
    content = "Brandon's favorite color is red and he likes Mexican food."
    string_source = StringKnowledgeSource(content=content)

    with patch("crewai.knowledge") as MockKnowledge:
        mock_knowledge_instance = MockKnowledge.return_value
        mock_knowledge_instance.sources = [string_source]
        mock_knowledge_instance.query.return_value = [{"content": content}]

        agent = Agent(
            role="Information Agent with extensive role description that is longer than 80 characters",
            goal="Provide information based on knowledge sources",
            backstory="You have access to specific knowledge sources.",
            llm=LLM(model="gpt-4o-mini"),
            knowledge_sources=[string_source],
        )

        task = Task(
            description="What is Brandon's favorite color?",
            expected_output="The answer to the question, in a format like this: `{{name: str, favorite_color: str}}`",
            agent=agent,
        )

        crew = Crew(agents=[agent], tasks=[task])
        result = crew.kickoff()

        # Updated assertion to check the JSON content
        assert "Brandon" in str(agent.knowledge_search_query)
        assert "favorite color" in str(agent.knowledge_search_query)

        assert "red" in result.raw.lower()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_litellm_auth_error_handling():
    """Test that LiteLLM authentication errors are handled correctly and not retried."""
@@ -1940,3 +1970,153 @@ def test_litellm_anthropic_error_handling():

    # Verify the LLM call was only made once (no retries)
    mock_llm_call.assert_called_once()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_get_knowledge_search_query():
    """Test that _get_knowledge_search_query calls the LLM with the correct prompts."""
    from crewai.utilities.i18n import I18N

    content = "The capital of France is Paris."
    string_source = StringKnowledgeSource(content=content)

    agent = Agent(
        role="Information Agent",
        goal="Provide information based on knowledge sources",
        backstory="I have access to knowledge sources",
        llm=LLM(model="gpt-4"),
        knowledge_sources=[string_source],
    )

    task = Task(
        description="What is the capital of France?",
        expected_output="The capital of France is Paris.",
        agent=agent,
    )

    i18n = I18N()
    task_prompt = task.prompt()

    with patch.object(agent, "_get_knowledge_search_query") as mock_get_query:
        mock_get_query.return_value = "Capital of France"

        crew = Crew(agents=[agent], tasks=[task])
        crew.kickoff()

        mock_get_query.assert_called_once_with(task_prompt)

    with patch.object(agent.llm, "call") as mock_llm_call:
        agent._get_knowledge_search_query(task_prompt)

        mock_llm_call.assert_called_once_with(
            [
                {
                    "role": "system",
                    "content": i18n.slice(
                        "knowledge_search_query_system_prompt"
                    ).format(task_prompt=task.description),
                },
                {
                    "role": "user",
                    "content": i18n.slice("knowledge_search_query").format(
                        task_prompt=task_prompt
                    ),
                },
            ]
        )

@pytest.fixture
def mock_get_auth_token():
    with patch(
        "crewai.cli.authentication.token.get_auth_token", return_value="test_token"
    ):
        yield


@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
    from crewai_tools import SerperDevTool

    mock_get_response = MagicMock()
    mock_get_response.status_code = 200
    mock_get_response.json.return_value = {
        "role": "test role",
        "goal": "test goal",
        "backstory": "test backstory",
        "tools": [{"name": "SerperDevTool"}],
    }
    mock_get_agent.return_value = mock_get_response
    agent = Agent(from_repository="test_agent")

    assert agent.role == "test role"
    assert agent.goal == "test goal"
    assert agent.backstory == "test backstory"
    assert len(agent.tools) == 1
    assert isinstance(agent.tools[0], SerperDevTool)


@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth_token):
    from crewai_tools import SerperDevTool

    mock_get_response = MagicMock()
    mock_get_response.status_code = 200
    mock_get_response.json.return_value = {
        "role": "test role",
        "goal": "test goal",
        "backstory": "test backstory",
        "tools": [{"name": "SerperDevTool"}],
    }
    mock_get_agent.return_value = mock_get_response
    agent = Agent(from_repository="test_agent", role="Custom Role")

    assert agent.role == "Custom Role"
    assert agent.goal == "test goal"
    assert agent.backstory == "test backstory"
    assert len(agent.tools) == 1
    assert isinstance(agent.tools[0], SerperDevTool)


@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_with_invalid_tools(mock_get_agent, mock_get_auth_token):
    mock_get_response = MagicMock()
    mock_get_response.status_code = 200
    mock_get_response.json.return_value = {
        "role": "test role",
        "goal": "test goal",
        "backstory": "test backstory",
        "tools": [{"name": "DoesNotExist"}],
    }
    mock_get_agent.return_value = mock_get_response
    with pytest.raises(
        AgentRepositoryError,
        match="Tool DoesNotExist could not be loaded: module 'crewai_tools' has no attribute 'DoesNotExist'",
    ):
        Agent(from_repository="test_agent")


@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_internal_error(mock_get_agent, mock_get_auth_token):
    mock_get_response = MagicMock()
    mock_get_response.status_code = 500
    mock_get_response.text = "Internal server error"
    mock_get_agent.return_value = mock_get_response
    with pytest.raises(
        AgentRepositoryError,
        match="Agent test_agent could not be loaded: Internal server error",
    ):
        Agent(from_repository="test_agent")


@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_token):
    mock_get_response = MagicMock()
    mock_get_response.status_code = 404
    mock_get_response.text = "Agent not found"
    mock_get_agent.return_value = mock_get_response
    with pytest.raises(
        AgentRepositoryError,
        match="Agent test_agent does not exist, make sure the name is correct or the agent is available on your organization",
    ):
        Agent(from_repository="test_agent")

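The following is a minimal usage sketch, not part of the change set, showing how the repository-loading path exercised by the tests above might be driven in isolation. It assumes only the names that appear in this diff (the `crewai.cli.plus_api.PlusAPI.get_agent` and `crewai.cli.authentication.token.get_auth_token` patch targets, and the `Agent(from_repository=...)` constructor argument); the role, goal, and backstory values here are illustrative placeholders.

    # Sketch mirroring the tests above; assumes the crewai package layout shown in this diff.
    from unittest.mock import MagicMock, patch

    from crewai import Agent

    with patch("crewai.cli.plus_api.PlusAPI.get_agent") as mock_get_agent, patch(
        "crewai.cli.authentication.token.get_auth_token", return_value="test_token"
    ):
        # Fake a successful repository response with an agent definition.
        response = MagicMock()
        response.status_code = 200
        response.json.return_value = {
            "role": "Researcher",
            "goal": "Answer questions",
            "backstory": "Loaded from the agent repository",
            "tools": [],
        }
        mock_get_agent.return_value = response

        # Attributes passed to the constructor override the repository values.
        agent = Agent(from_repository="test_agent", role="Custom Role")
        assert agent.role == "Custom Role"

As the tests show, non-2xx responses from the repository raise AgentRepositoryError rather than returning a partially configured agent.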
@@ -0,0 +1,660 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"input": ["Brandon''s favorite color is red and he likes Mexican food."],
|
||||
"model": "text-embedding-3-small", "encoding_format": "base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '137'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1SaWw+6SrPm799PsbJumTcCIt297hDkjDQnESeTCSAqB0VODfTO/u4T/O/smbkx
|
||||
EYimm+qq5/dU/ce//vrr7zarinz8+5+//m7KYfz7f2zX7umY/v3PX//zX3/99ddf//H7/P+eLN5Z
|
||||
cb+Xn+fv8d/N8nMvlr//+Yv97yv/96F//vpbqXVC1KY6Ai5tbEYUvL2Fz3d3UKbls2ooTr4sDq2d
|
||||
H3I7zRfROAtPcpupB9i58SM02CpPTBYpFV92+RsAu3VxJt5ItRyJHKMCiSkJHii1FzN45mier6KL
|
||||
Ip7L5nWpWVTu8pScO3MAizDvZ1Br3wLbot335HPPZbheQ8HlvOuQzV81D0SsxfIkjqoL9pgYb1Q+
|
||||
zXbinHJXEY9NT+ix5hM+rQMCq7GWOepKI8RXuDL0vftIJ5iHrwKfMaParIgPOQwa5kEkgbyrZRSy
|
||||
Dl6qJCJ3xdLCqeGqGYbZeYdlLiz71WQDBkXLyGPNY7yQ92PjDYVFvOOz4rfZHLYNj8YH5+AsdFSF
|
||||
fF8vF7Xw8iU4PqFsoZFdI98/mwRfEMjoTvVb1JiOQwqjTKqJQ7WHOI5KRG0Ene4vpiGJX7vN8Xkl
|
||||
n2r5vuUCBR9vj43K+tLZPXcWQJp6Jtfiydm0/ewk2JVWSHz18sqmmGQz9HHeupQ5JP3qJVwCucbm
|
||||
iGw9cMhP6ZogL1sb4iS3WqFpw7qQaxpE5KvaKcNuKixoJM6daLe7aXMPwYzgmaQKCcX7XNHYeUkI
|
||||
3JNpip3ZydagABq8jpk18R2gYNB6+IRDyi4kX4/Hig7rUUSxdohIKLtKSCMb8pBrPgjrRbAos0nL
|
||||
CaR9qBOJHTjAy7H4hqOzRi4PdbPijmzII/C8f4hxEqRsXzq6AR7YWXG0OqeeawJpBgXKA+y7Ec0G
|
||||
yecsxOQWcMNwfmSL03AMOF0ilahvBSjr4wZEqLCTiPX38gXLuQ069BIFndg8zik7v4QIqDBQiMIN
|
||||
ZTWnp8uKVFNMiDXMLV1jtgnQddesOL60MFu68VSgM2Ft7KlVXu1fx8GCa3xrybkBLaDcNXmCZiCO
|
||||
OwuvsmeVWx/BipUnbHweGJCBigm8na4m0YW7QOc3QKUo+H6ATQ4nlApvY4BqGj8JJk1CuYtleJCB
|
||||
s+YKZyYKOc1yNdifqDLdWrkEs07uJQTkExPJnyV7r59DGRZItojy5EKFJlfBQaadJ9hMEbEpdzvK
|
||||
SCmfAYn3rVfR7FNPgDGZkERrcQDklt1c+DK/Cw6ekq1weTMbSOdHmaQVNAE3io4FsehbOCbegY4S
|
||||
JR3QQ1fGdx71/bK66wlcM7fEsqwn/eTHUg2Jfg6xw9d2tb/4bgTjA9GJbmpZv9zn0ICi/lRwiGXZ
|
||||
5sGFuqg7qhJ2BWlP50fKWgjsYp8c+92x2gP+lSCr5B/u82GLyrzX0hm+zPOXON/sDRZfoDG6nahP
|
||||
7Gs7KmsrVyX8NPVEgowXlEG87mb4PkdXUhjmMeQv7W2GU21kJG0zmfIN18/geHNaLH3yCcxKFrJI
|
||||
oA5H/OzoKTT7DBM4KB3CEttQMJR1wKPxSe7YYaCc8aK7uOiaa+sE+fu3WlRt1KDb7CBxjVKoZsk7
|
||||
dTCF9pUY6X1UZjm6sfDUpoiYVWcDrn0YOTQn1cBe2noKPw/XFAmlXWLF2i3Z2ATSCq5TmGJ8O3TK
|
||||
nFthDm+nizm9u5GAxWkQhNE5bohmaM9wrYfFg8wOUyIfSNfPTEUt9CgJnRY347Kl4IYW+IF+cytO
|
||||
1Ko12R8ZxIyPkKjgHlXf4yyyUHC7F972l850vUmQJdIJxy/4ohQAbYCN6ToTD/pXOKqKo4H9aGrY
|
||||
Oi5l2LHU9OBvv9x3ZgH+g25QPCbr013aB7AXTX3OaOKpTbD08Kqp80gJOVlDWDITM+RZagZwMDWf
|
||||
HIdxZ9PpMJ/Qh88ZfHycJLr/PJzytz5s+PRoc7obetAvXy8ixaGl7Et1idEnemXEOnZStoJb5ICv
|
||||
c6qxP2Qm4J/pU0OGATrivBLeXlEmiBDYnUvkx/qmQ0NtCcq7oSDF/n7uB/dcWsjcGRnRy0ql2/4G
|
||||
KLy5KZGOetS3wjlfoVUmV3K356PNFzc8HELz/iAyF8pVd13mHI3OHGHND/mMPnfgjWZmzon/gTOg
|
||||
/TAniEFt/ItXezb46xNm7nVH9LPcZVOmlwOcqinE1wvR7dXRahcSo+qxqgth+Lrltxyk+qpjo5VT
|
||||
ZQ8OhQuDg3LFUkpWe76hbICwrM8ktM9cSHfG2qH2Ox1ccZgNMKPMaKHlX0/YpUe1mv1kiX7xj939
|
||||
/VyxBn8tYeKKDZafLlaWxVVFKJrqk1zh81vN4nW3AsZKMnxJkG5zyf0+Q9/Hpsva0dPeD1RMQWFf
|
||||
B+LCfAdIabwN9OEP2sRF7UtZnM87h0LZdMQRD0vfy9A9QaL3GZYqSaP88hE1kfYf7U88sR/x4KCX
|
||||
cbnhmyqAamFO3xSOj72Dj8H3ln0J48dI87DjHg6npqI73bJAaILPdOBiEXybJK6hUiUlya7t2eb3
|
||||
uiIjqdHuxMHDR1kqK6hhHlaFy1OnrVa1PM5IOt0Dl6HGp1oPZ2OCvSAxOBsLF6zcLlrRNWs74iS2
|
||||
ZK9z+awRx8XCFr8PZeaA8oSbnnKJ/wrDRegnD8wj2fTCaijzbioMML6akfjb+Zg7TSpQ3RoHXEA3
|
||||
A7yNDi1sRjvCtpa8s4kpVwGhz7slcnaXwJ6RDAt8wlTDJmfEgGryLEPrBTE2VK0Oeb9wBTDxiz2l
|
||||
zIXrF3GpGCSscYN//8+2SrUikgRvbI9DQOdmSZ/Q0ESV6HSw+lmOrRjy7ZtipVH2ShsUFg8eLyPG
|
||||
j/h0zzhNmT10ip0LVncfPSPv6PsEXBQ2031NXxUhygThbJ9uGPtkHy5OZRqI9hYz7YvnRaHnT5r+
|
||||
0Wd5/mntNRFLiLhafrrgSrBNdayUKEyfK5akg9cvhPEjdLrE6jQqSh1+F6sW/uRPk8MCnSykFvDA
|
||||
i8Ikcne1X6y7ncPds7Cw+z5rYI5qo4StGJ+2fCWH/On90eBdvOduLfJzNaGGTICTd2di57qmjOsF
|
||||
C3Bmogi7p4jtl/zzbKHH1AmJof6tRqn/RpB8LR6fI54LqXk5uvBqFjLGpS9R+nk4T/jwVGVi3u9C
|
||||
mfmdN8HO33ku/JYtWJKHFkBivHoSbnqkzYwSok+0q9y9AYyQ/dUP4JziCSnKKVzrOQ2gY0YXcjvU
|
||||
BZ0L72bA3VMIyFZPlZF/pDN0mIbDDm5Ue8bRYIBm8E940/M9T5dkhqfYvUyH4WrQ2j2XBhqM1iQP
|
||||
QnM6+jCP4bb+ieu0HqyPxE4hcOCbeCzd04XLlhlpLz13m12f9PT8lUV0EvbqtLv0b3u10dL+7mOn
|
||||
OZeAi+15Rf3hGpOTzH3CQe8EB92i04nku8utpy8NtX/0Xt48BGWkF1aDRP9m+KTosKcNTmQw38Oj
|
||||
W08eBxY8JTG6vYOBuKfz2q+PGxXgquoe0Yn5rPaA+UzwOu6DjX9egACGDPAuHiOMy8eJssdLbIHF
|
||||
Qyr25mIBmx57wpPAqcRTlSrjM72cwBolyh99Ph66F4O08zvDGlW1fv4wgQPK4qaTNDya9srLiIWH
|
||||
xkqmw7Hb91TkkAuRhu7Tejyw/cZHBcqr6UVcZ+izrX5O8PR5jiTe9PS7bAcNTpwASJYdXnRZ/UmE
|
||||
qv3gXHFgBtCCGOSAkysV//LLEp6sN6BomdzO+gbZ/PrgFhp6ExBv9kbavj56Czf9j1VQ8/0as6MH
|
||||
Nr2Iz07YZySyTjF8rNIbXxS9CRfDZz1oiJbkvmrrYi/D4cpAekuK7Tx/MvLSuA7C41PE+aZ3VlTt
|
||||
HKhwtkzcib6qpTicO/F8zp8kkjk95NXSnGGYnjribfVnYuoqhYyVZuQ+FSudplRMIAZ2OrHf0Qbv
|
||||
3pFdpHCmPAHZr8NZXzQJTopuEOtwGKvlmjgepGA2sd/vjv0yUNlA+t623Pc0S9UsOdWmRzsLY8qP
|
||||
1Rq9Xx6qL6WJpY9r0UGO5QiJRp9MjGG+wqki0wk0w+i4MzoN/bjxL/zwBYOlQf7S1QIRhM6QnbFu
|
||||
nlG/nN+dAbnIbyakj3M2RbWwAl68hhPIP4+MllbcQYutd+QK1wLQ17G20OGkqxOzg+9s6tydBKaA
|
||||
VdxdX8Rg73YggR0V6UQ3XprFGAcwc+J0klAw9xSiKkH3NmCx07xFe8js64o2fsPqrk+qVbnDGlJE
|
||||
pz/nmwcX4ICqDNyJczJUrVv8omya1Gku+EiZp9QyoP9KHOJVN0aZn8XFgqrVNC58Wauy2vJVglxj
|
||||
cuQX3+ve3LOwgYWOjSCF4aBWsgT6CxsTjXg3One6wECF9Vas+U6Utc8XY0G9PgXEn/cLndOdkMAU
|
||||
SLLLrYPWc96yRCgPd+8fH1RrNXJv9LXdiEhVALf6Fxvgfmgcl9lxdTjeh9VAgsdZOHsugTLXpJRA
|
||||
3VoH7BZKpLCdLkAYvNMP1vapUy23RCjgj88C5jRXy+KeRcjuxIiob39nf4l0FEXDeJRYMjumH6Iy
|
||||
kVD/9gqcfB4ErMu3e4P2GyzY0s5dNqi1WIIfn2kv+AIU710Bph2/c9eM8Eqrj6MkNj0Pib1Qli7S
|
||||
rLXgIl+PE++ra0jbwnChuasrnLqMEA43uQkgFklAsCA9+9lmFgHaQii7ezidq3nMtAJ+bSciGF60
|
||||
aonPXw/8ePynlyZwyx3Yfr2FOJ6BQ37bP/jT6/r5Omft56hrsNwVqSseDud+MaMEwnE9cthUEzmk
|
||||
XZdNMDs7HdGFoKnmaHB5yN7h4s6D/AX7C77w8NWNJ6zublVG+aqWwOngn0mkuEVFH3vzBOPY0bEj
|
||||
rrginrBPYPKabvh8z2eFPsXyBJeXYZB7VEsKpwhfCT7f1uLujLkJF90SPbj5T64f7O9h5yiKA//s
|
||||
78b/+8V4RtC+YoK3fE2XGg8ayIqXTI6sWFajuPQM7AXgEjs5cdk8Ph8MfKzymxzVyyucBcP2oGoW
|
||||
HpYzEtu//At++coO06/dpscWgrRL9vj2WUYwXfnLyhXmion7jgw68YyXo90ztwje6h+9j2UCnzV/
|
||||
c5d+96rmQzAF0AGDhM2Y2MqP58DmJ2DfPfvZaqxdDnNJmMi1nA7KpJbmCgv7Mkzr1RkUyl6eEnx4
|
||||
u3Dal9x/8Q/c/C5sHrtrxelOuf7hA1P1n9kKi3GCadJe3Z1PrtmMo9pAoS7IRB5YE6zYk0WEk1NJ
|
||||
/DY52N2l9VcgLMKdaJfjS6HD7KdwhxOeHDd+5ZB8GuAnQhU+4axUFsBKHbrmpeiiiQpgunRWAC9c
|
||||
DYm5HCO6Hv3v6ceX5DwNPP2j/356wsxOvrJ22jxAdN2P02IKfTYLJpjB8rIMrH53JlhC5iChrT7h
|
||||
k/ypwuWW+wWMD6OOpTPrZ5y3l3m45XtiBs5LebvOV4QqMkIcG2c5ZO/ObKA2SV+b3ooqXm2vNeS/
|
||||
D2USiulF52c1dVDww8A9iWTqF0YWRKiE5weRysOJUoJ8Db3XiE7M7WbYvJz0GtwtUYqTjb/Hfr0z
|
||||
P97A8jWd7T/rPV4Fj5wm7wKo1H1PUKq7CzHO+zUcL5YUQHQ1nljttJ7O1uGZon0+MdP7YigK5fJS
|
||||
QhsfEXm5huC7b88BnHf3L9Hh7VMtC+YTSAy1wmbF7ezFygIG5U3IEDVqX/b8FvISPrzKJuebnWUT
|
||||
LMYBPjwUYifnHJtIg1/D/BjzExivA31u8Sx+ai/GXhx2NmmCgwwqKeo3fXalXKl4b1TuhJKoIA76
|
||||
Wx1xNfz5r9bjU4Hxk/crjGbjRh4G8822emUAqYEjMX88xcVtAFLDxUS5Uk8Z3EDPYXX8KsTO3VaZ
|
||||
bZCIf/wGTEKlWh4iY0DjCxEpJnqsuFsyFz99g2/cPQeUK3csjHVaEyNQ44x7opQHWz7E+Kz22bc6
|
||||
0zesr+rbfW56bHR6nMOOjYF7mMUAtEEexOgQspIL5UrOxn4UWiSmNwEf2+SmbPqLB/L5I2CjZzkw
|
||||
DBfIwI+8Apcxrk1GY399QulyLbG2MnU1ie7BhfaHFbGCs1s4mkFbgO284eIUsdUW7xH45uVA1M1/
|
||||
Xq/8fQbauodukxBFWa8HZv35f+6idnq4wENqgOf+bk5I6XbV5v92sP5435/fbv/hva1eu0ip7v2f
|
||||
+GbdL7PpMTtcvetJhJorVS7Lp20186h1RW3lIHYYTgo5TX2ukBpPF99ccw6/12Bd0fZ98yMayglN
|
||||
UsA3vkL3l2+IGSYTLKfNR+/BHvzqN6LwEROJgE81bHoSKsFyJdbP/9p4C6boQbCpLoeK6qYo/+Fv
|
||||
W8731Xd7v+DsRrMr9OBK6fctDrBAkoXd27OzVzn03/DVeXBaH0i0l0mcYySUnw6rCa8qG6/F0HMO
|
||||
Ila+j52yJNkYwNlqfXI9LVzVYR+kf/jJvd1aez44Sf5bDz7xrdPT9jFPEF258cf3IS8Y9PnLp0Q1
|
||||
tGs/t0fqgahwOKy9/Q+YOEBXyDDXCp82/2GuSSf98atd6AKw6v2zgG3XmPhMjYISorwZQAy9mvpt
|
||||
vfT7+joQOMwbH9tHpkwJ5xfwp0d+ft0Mb4kh/n5PPh6ifm1PuST+/F6FG+SKN8gxR+PZzSfOZkuF
|
||||
+ntwAhwrzhufSP1Saoc3HIzOJPbX6+hbcKIS7dZu/cPzU3C33pAc1qPLQMuquJ1hJTCE0pkch0gH
|
||||
K128GdZJxrh15HjZ5D/2KQg7RnRFxqX920sbAT5eVkzO3ouEgy11OdzqlVtt55lMsBaR9qAKdsR+
|
||||
7B8PYXlD4VVcsBEfG+V9D87Dn3oqFb1EefSSU5jt6guWj9MroxfTkNEasTb2Y9G1xydKWbhIJw5f
|
||||
9GoMp/HqJ5AmMMbFln/XsrFP8Mfv51uAsiZ7swXc/A13af29stVr56eP8HE4N4CeqyUFbJEciC5V
|
||||
fkUVxNTQjXDu8lt94K7JKYDylGrToTVrSmhoR/DZeK8pQVJfzTeJnWHO3Y/kGHwP4Sycoxm2Wnvc
|
||||
3iew6ZFzNHi14mCCTVTTdetvQTFhM4Jj+xG+7yHjAM2VK/fwEQhdRe81occq8vjXX6Dh7uuB9/Md
|
||||
uqJVztlqQ9MQsRha5GzPR2UeC1JA1B4kcnbUTzgn/G0Amz9F7KV9/eFBmDxZ0+W2/txe2RENkG/N
|
||||
EFm7wH6+yY0HeZVRsPOdB7A41dGCWz/mj5+4565eiZJX4E+7/sqCbz8KHXy8soz8/FOuOJxb2DD3
|
||||
AXv+vrTpS8lbsPl7RNUFmq1iJE/QMePL5udcbfqOUxlozxS7C1qDat3rioRqlbpYL3MF0F//4sdf
|
||||
kR1J9h4dTzlc6CpM4tOalGGvBTNaL7un+2GNlZLj0p/Eza+aIJBoP5472wA6x65Ewd97yIXSq/75
|
||||
ycTb9MBry29IapjRre3oqczxHMmoo3I18ZeXGP78fuAM+wvZ+AysfuRp6HPZU+J8ZwewWz4Twdx2
|
||||
5HGpa2V+lV0Mpkbup91Z7cPp5+/ibyLih8/fMmrFhwkeosEjOpzGfvj1UxrmWE/Lxpe8l6AEzsya
|
||||
E5MzeDDNn7MgMqiLp2TwGJsCuHdAfYG++/O/53VSeCiFs4oTsxw2ntMDSHuDwa4xw359vcoTMgwl
|
||||
ndYDmnpaykkEt/7rxFP13X8Lrm5h/IEfbB7sT7VawtNA1Dhp2DW+Yr+SF5vAzb8mliqcMv5UMzVc
|
||||
SjnBqr3jsrn8DjEcX5+R2PUogc9LPRuwVi8a1s+yFU7raNdwPHc1sTmk99yw/xRw84snTrl8w7Uz
|
||||
Xi4gh/mIzfil2/vpyOQwZ40vzvLcUXiTTSHMQz3++e+ApXP7hMZ1veCzvVfsrf/AgG19LsXmyZ61
|
||||
dyWhsSjZaeWPO7DCnHPQ2+3IdKguKli2fjTc9Oq0p43Rc4dJZuBUWxl2vqVBOV+sWMQdW45oG9+z
|
||||
boALKLykM85zONnfrR8CdUXa4bhUFEAjI2NhflwZojnfLltRNovItGoDu2Cl1Yxx0P76M79+Srgg
|
||||
pmpRsJ92+LxzpYzOp9CAlpSzW/+rt5cY32aw9Yuwdqt7e3g1DwOKt6Da/KMadBaIGCAphojTMGTC
|
||||
xRcrHm36kNypwdCxSfWTyIwgI9KnmsHIqM3w6zfhokxSZZG+M4+6Kmyxw8efihxGeYJYHANshcev
|
||||
vb4eyxtZkgCx2Zo1WNdRecNxVbiffwXWmt3XcJIdYetHN/YQvCoWloQ8poO6u1e0LSQX/v2bCvjP
|
||||
f/311//6TRi823vRbIMBY7GM//7vUYF/7/89vNOm+TOGMA3ps/j7n/+aQPj727fv7/i/x7YuPsPf
|
||||
//zF8X9mDf4e2zFt/t/r/9r+6j//9X8AAAD//wMAEEMP2eAgAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93bd468618792506-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 02:26:58 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=b8RyPEId4yq9HJyuFnK7KNXV1hEa38vaf3KsPaYMi6U-1746584818-1.0.1.1-D2L05owANBA1NNJNxdD5avYizVIMB0Q9M_6PgN4YJzuXkQLOyORtRMDfNCF4SCptihGS_hISsNIh4LqfOcp9pQDRlLaFsYpAvHOaWt6teXk;
|
||||
path=/; expires=Wed, 07-May-25 02:56:58 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=xH94XekAl_WXtZ8yJYk4wagWOpjufglIcgBHuIK4j5s-1746584818263-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '271'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-6fcbcbb5fd-rlx2b
|
||||
x-envoy-upstream-service-time:
|
||||
- '276'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999986'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_dfb1b7e20cfae7dd4c21a591f5989210
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "Your goal is to rewrite the
|
||||
user query so that it is optimized for retrieval from a vector database. Consider
|
||||
how the query will be used to find relevant documents, and aim to make it more
|
||||
specific and context-aware. \n\n Do not include any other text than the rewritten
|
||||
query, especially any preamble or postamble and only add expected output format
|
||||
if its relevant to the rewritten query. \n\n Focus on the key words of the intended
|
||||
task and to retrieve the most relevant information. \n\n There will be some
|
||||
extra context provided that might need to be removed such as expected_output
|
||||
formats structured_outputs and other instructions."}, {"role": "user", "content":
|
||||
"The original query is: What is Brandon''s favorite color?\n\nThis is the expected
|
||||
criteria for your final answer: The answer to the question, in a format like
|
||||
this: `{{name: str, favorite_color: str}}`\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.."}], "model": "gpt-4o-mini", "stop":
|
||||
["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1054'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xSsW7bMBTc9RXEW7pYhaw6leylQJClU4AGyRIEAkM+yUwoPoJ8MloE/veAkmMp
|
||||
aQp04cB7d7w7vpdMCDAadgLUXrLqvc0vb697eroKNzdey/hLF9XzdXm4uLqTZfUTVolBj0+o+I31
|
||||
VVHvLbIhN8EqoGRMqutq8/2i3tTregR60mgTrfOcbyjvjTN5WZSbvKjydX1i78kojLAT95kQQryM
|
||||
Z/LpNP6GnShWbzc9xig7hN15SAgIZNMNyBhNZOkYVjOoyDG60fplkE6T+xJFKw8UDKNQZCn8WM4H
|
||||
bIcok2c3WLsApHPEMmUenT6ckOPZm6XOB3qMH6jQGmfivgkoI7nkIzJ5GNFjJsTD2MHwLhb4QL3n
|
||||
hukZx+fW23LSg7n6Ga1OGBNLuyRtV5/INRpZGhsXJYKSao96ps6Ny0EbWgDZIvTfZj7TnoIb1/2P
|
||||
/AwohZ5RNz6gNup94HksYFrMf42dSx4NQ8RwMAobNhjSR2hs5WCndYH4JzL2TWtch8EHM+1M65vi
|
||||
27asy7LYFpAds1cAAAD//wMA3xmId0EDAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93bd468ac97dcedd-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 02:26:58 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=RAnX9bxMu6FRFRvWLdkruoVeTpKeJSsewnbE5u1SKNc-1746584818-1.0.1.1-08O3HvJLNgXLW2GhIFer0bWIw7kc_bnco7201aq5kLNaI2.5R_LzcmmIHlEQmos6TsjWG..AYDzzeYQBts4AfDWCT__jWc1iMNREXvz_Bk4;
|
||||
path=/; expires=Wed, 07-May-25 02:56:58 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=hVuA8E89306pCEvNIEtxK0bavBXUyyJLC45CNZ0NFcY-1746584818774-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '267'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '300'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999769'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_9be67025184f64bbc77df86b89c5f894
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"input": ["Brandon''s favorite color?"], "model": "text-embedding-3-small",
|
||||
"encoding_format": "base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '104'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=b8RyPEId4yq9HJyuFnK7KNXV1hEa38vaf3KsPaYMi6U-1746584818-1.0.1.1-D2L05owANBA1NNJNxdD5avYizVIMB0Q9M_6PgN4YJzuXkQLOyORtRMDfNCF4SCptihGS_hISsNIh4LqfOcp9pQDRlLaFsYpAvHOaWt6teXk;
|
||||
_cfuvid=xH94XekAl_WXtZ8yJYk4wagWOpjufglIcgBHuIK4j5s-1746584818263-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1R6SROyTLPl/vsVT7xb+oZMUsW3YxIRkEJBxI6ODlBkEpGhqqBu3P/eoU9HDxsX
|
||||
QIAkmSfPOZn/+a8/f/7p86a4z//8+88/r3qa//lv32OPbM7++fef//6vP3/+/PnP3+//d2XR5cXj
|
||||
Ub/L3+W/k/X7USz//PsP/3+O/N+L/v3nHy10PsRPcxuIYz6sIG03O6RXYmZOpt0pqpupCN1ta2+K
|
||||
2T5U1Gg2d8TiXAzYNTdb1V+PVxL4y4exz8UvwQOkRxSSJ4tWZ2tkKpuTicTBlRsXtnHarSCuWzyr
|
||||
8Xukk9e3ahtwIUF5NIA1fk8ZqD/yndw55DQYvzVRNRuDD/Jwh/O1q+ZYwffBJUfYrWDZhO2qCmzb
|
||||
4QTlN7B8vB1VcQJa5DzSyqOKN1nwJKknZL74FSx8MrfwwkdHdAzUEqzqZ+tDDT0zooO9ZK6Orqwg
|
||||
AmVAvE8yRvh1ppPagBvB8527NWyD7ETNW8fGwD6xCKcF5OCsG/sAOI+7t67vA/4br9M48GCun6da
|
||||
fSsdR4xB2edrsb9RVQVaRyxn64IFtEuq0mU6kui6e0VLGm0n2LVGhA6R30UCfT4MYLEpRIaVUrYc
|
||||
l2cB6+QCiN9yXrPcxmKAxZO+SCjT2Vv0MS1hpRUFySFXeVQ7nGVYB+cuKB/RPl/X4pRu7waTMTjg
|
||||
1VwV99jD6+2iByUHhYYdz2kPa+0+EVQYMViot6/Vw7Zw8bIUCND94d7DPHy/CNo9p4adDpWv6p/u
|
||||
HbB933hT7Wxt9TpoCJ0rvsqXOWl4NUFbg6T9E+a09NEAw3QwyYH7kIbGfG+oFPY8CmUjjeh4ag3o
|
||||
hZNFTjuReYJX1zx4QfpGns3zDTUymMFFjUOyy9wLmDAeJpgnsYcuo1mNrJw6ET5AdkSaC9/N/Ny8
|
||||
FFh212dwPTveNx6ZDCctGYnzOB4bCZ6UBPI62QWcHzgNW5owUU1NrJCOhiOTqtVp4ZGaJbHmuzi+
|
||||
04vjqq/D1ibIPlYR3atSpjYFZ6Br/Nzn5HVOVjV0XQUd7feNSSF4WoBu6j3eEv+YSwsyeCilmwzL
|
||||
NyPzhOwVBZBkRYSlQmzyCb5UGUqhYpPd8X0yv/Evf89HXnIS2Rw7eQ+54dIRywzNSIKnNYbf+KNg
|
||||
N6WM3yw5B5zS8P/+XywQRYbPOyjxBgNtFDPABvh2spEYM28x5lVWqr6yNkbXi+GMVFhuCaSJfSX7
|
||||
s+OZS9UfeHg5rw65Fk7QYAhOoqr49Qtdju/FZJ+LVapbRxVJoah3sKgbM4DZpSfol89DZA0h5Er7
|
||||
iE58OkfUJ34N59sux8vWNjxh/xRsGMVhg5KLemjoR3NLyEwFEuNYJIBPyN6BJpsM9FSAZdKzJHbw
|
||||
E2MPoYIeGsE+aYW6d7kr8atqYoT6F6iiTXAmaCx0wISsl+HeWc9ffHEBlbTAAEZOFWJvDb3heSle
|
||||
VfvoBOQUG5to0Z4ght7n3gWC1Nk5aWRLho8T9yC7d44i8XC3Wugn7yrgztniMa24KzDE1ZlcrTjM
|
||||
cZt/NGXoUIHs3ftp4t08JX/z86hx51HaZE2tajtfJ9dCbKJfvNXv/YgpbA5MDMGiwfi1iOSuWBeP
|
||||
YfvWw2x73f6+hyfJykb7/T/0zJ1rJPDJq1Nr7xYhJxGXCBstcOHUEhMFxPEZ/w64EH7xErnKxhp/
|
||||
9Q648/lMdro15ROQihKS8tqQo4iShu7VTQbrVrCIb7HFW5v50CuczBNitNrgkXP6EOFhe3fxBsJb
|
||||
RLR1SIFd1W9k3ayK4Zjpraqn4Ib2KJibicXnXv3ia8AP9c1cEHYUqHBYxtDqH80SnkwLLpFcooP9
|
||||
0UfpW1/wXDSYOA9Q52xfJL0an62JPIcvvjWiWCid29Yo2ZtOI9WHVweHRYjJbqqEkfFmV0M7pVYg
|
||||
KaIb8VTzz5BGyMJL5+ueiI63CXzxnOziyTCla252Kt/tHmgHJ3+kurQGanHwHeJowyuX3uRRwM1o
|
||||
GsSysJTTxLxjmHpNTHQeBM38q4+jNpXk3JGIsbHDBrRswuFyaDW2ZvtUUeNBCIgWH/VcFEBoQb2t
|
||||
ECl6aOW8cMI9tPEYBQCfVDZdYiGGEGgpcu/CJl/1QexUd3tPyQ21sbdYV1NWK+1ekKc3XjzJsEyo
|
||||
lpI2B3RoNSBs7gcOVJvlgfL7bgHk9OEK2HJcgQ79o2lGon7OimKzzbcfnpn4CF0bLsfYQDegvMal
|
||||
3WkdFPyPgrdC8xnJ5sQC9W4sMnL85yWnwnKKVXsTV8Ra3q5Jedze1UKuHGJvgsFk2GxXOLbmGx36
|
||||
c2hSKuUywDcqkdPqtw1VroczcMfhQdwdLke+/gQxuIz1QOwmaDwpvxwMsL3eN6TIDDOfe3iy1E8Z
|
||||
60RTdNMUkQ5LOCVHhbgnbHkC3LNW/eIdbrK6NakzJSt875qUGFfhDMRL1UN1e6+vyFW3hrd21SuB
|
||||
xzf0AmixxSSrWrvqvg5M5JXhLhK440YBuK4y4g78sWHsE/fwx29uFYgAf8g4F+rHqCHWM3Sj5UGX
|
||||
Mzic/SfJb8mbzTFvW7AXGp64G0Xypq2zCQBnD3dk8dV7pMpz2/3wCcsvuuZTorSxOixSTJwnvEYS
|
||||
K6cYyrdBxJBDfcNSbjfBMb0txOvuR8b83OFAROSUhJ/IBbz2KDM1CdsTikvM5WtIc1GJEUqRkxVm
|
||||
s6Kt5UAhjSiyyu0CyBXhDJqCvP++7ytipk1FNRILEWlH8WVK+WMbw6tBWxL3Vestet37cLOtPbS/
|
||||
mo/mh9dqe8UMrxpYvSWjcgqtoP+QxJlCb5X0ewG+/REdnsTz+lAZOnALMgFpppSMIu/LFDrD6YXX
|
||||
O9iac3avZbjKJxsZhY88Atd9q2KPg2i/0DaafOUzwVTrZqzozWwOizwkcNdMBQkFkYxLUs936Nht
|
||||
jpwPeAE6PvRAJZw4Eo0e+3ztZreEW0ZNlNunKF9wWKbQn+J9cDOJZeINUWvloeANCZZYGNngxz5c
|
||||
RpUPBNddc9zNbg1fC68hC+QWE2tuOINtgO/IaAQxWty+C2Eevl7EnfFofutfg6vneIHs6XouRCeP
|
||||
A6/y2ZKDTm85e5hlCnmZo+iLb9FyEVcI90l7JLYmad58Hu8yvIGKIsPwHZMeMtGBab1fgx9+0Zz/
|
||||
UNiLR4Pku4Zrpv7hxODHt8zJQ6YgzSUPb8gyg0tYBQ29+UcFHvTqRFIo2EzafwAPOnyaSGQQhS3c
|
||||
cSMDg2kdik930mDh7bZgErITCr71x1tC4AO0ZClxz6ddxHb6FcIMIp/YUTdH81wfDOgM0QsL9LAH
|
||||
4tBGtvp8UkauerYyTM4sVkXU5HhrDQmgLD4P6o/f/fCe3zWvFl619olSpo+A//EDpaEnct29umiF
|
||||
hO9VoxiOxJht6jFVbSlsm/iDLlb1jfdqBrCJpTPabThpXGitywrmDgi/tugImGGZnCqFsk2MvYDy
|
||||
5X1cViXmww3J9aRkQhzJIuTmYItF+NRydtaorFrb9zPgL+/GFGh9kJVXmqFAufpXwMdqmijJuktI
|
||||
npwSIOI2hyDbXrbIvHtjQ0/lbYB9bBDkjrWRT+Yhp2DsZIU8jpMLpNc5obBa+wV58k4GS9FrHBT4
|
||||
lidewYNxiW7wDAnHjwRVw9tjgp1n21er7Uhehq98KfHxDGXPm0ngB864Zlt7hbdXX5Hd5516bC7r
|
||||
DnZbUCOX9v3Irg9aqJJpewTxug1EfR7vMNyf3iRNImouKCh8GM9ThdeN8MjJgHRL/fIPdNzLBKyJ
|
||||
Lp6VrbMRkQc53RT2T9WCZ/O9x9tvPq+Jzp0Be9YLcgO+a5ju8yGshNJCD1ZOIymnjodhlXUEzU/J
|
||||
XBJsDEBlO0CCV7yaU2X7PiyeuyvJrMfQLHFERTWtdyu5e3Pw49cpTN98Sg5BxHmTkfGZim+rRAL5
|
||||
1jWUjuIEv/gdwJKlJhnMRoaiEIboqqu8t2puFav34jZjNVkpY8X5osHayyMsa88T6G+BQiFejAYF
|
||||
0bj3/urDbz4FlMRPRp/jmihavguRMfMtGILN/hvfbY3ubT7kL2WAPNyNbEX6V+/hWf4UIOHhB7lS
|
||||
vQOL9rDvQCyMlljvPh/xLd2WQDc1l+QFn4+L+ZIG9boBBab02EfsroHyL58/AP2R85Ey3uG4Ge+Y
|
||||
aXjH2E5/cnAjDjI6ikgc13TIHHh7DRXa11HtkbZrz+pciDq6bDipmQYpc2C8ja8ohEAbx+/3hXz2
|
||||
eAdLS68Nc3zOgJzJDnjrnz8mcdLcB9ne64KtUWCTNbKvKLMZ+MiXOz2nnaYP0FFFmzieXuWMv0ID
|
||||
DI8CBPAp76J1IasGw4x30F5xhBFf6aOD6T2okcl2fkTts89BHPDo10/ZKs2BDGXODNAxQL4nlWYV
|
||||
qnPgMmQN9dac194O1MSgCTm665z3lZqL4NJ/rmR/ysR8Bu02hWnyhEiv9eO44tC+w6cejZh9+bRQ
|
||||
4mMIX53vojztI3Oh9UGBFPNbdDFZ4rHc5jjIl0FAPFJHYG1S3oWBnzQBj10vYqp2mtRXmiJUfPn+
|
||||
mnNlDUue71Fs1EGDNbdKgH48Nd/+swPzoyY+9NIlJZpMZ3Ot328F4mu3Ek+0zEZwPNIBUQsG4juc
|
||||
Nf7w8McvUbB36og2IwjgpXltvv2g9OiY3WP4qtyVfOuRkVHtMwiTQcMrvtreeqWPFn7u7EkQvfHj
|
||||
rLr9/ZefxMl1i0m/fPniAXH15mjS0t8PMMvwHgv3avVw+B5r0LVahAzVjppBF/MV5pZKvveLG1rZ
|
||||
fgBj8WEhm5iRJ3RG3MHJqGkAo0s5rqJrUdgc0J1YidtGsxVfNSgYaYU0NthsbS08gIPenLCQyW4j
|
||||
gauwQnWqHiRpFivi1fgSwmktOWIb8sVkNtmEAGBiB6FlDDl9ZCEPo7c6o+Ag+ZEoW3oMY/FpoeAV
|
||||
n80v/8igAwY92IwRzrFyGURQiFeT+PVqAOGgBAosF/VC3MwW8l7Grxg+7PiNUC5OHjW6WweV7afE
|
||||
KzvLY1/zTakmk2+jXOt9b8E7z4cI+RY5Z3o+kvowt1AY+RNyK6+LcETvGIRn8CDGp3gAelhoDd54
|
||||
4+CJz48Md9WcQMm0PHRw7TJfLkYawPCoHUl4E51xfDTq8KsXTENjZsumVUJ4FKhNnvNT8uiLnzKl
|
||||
3Moaun/10t/3cfvFRdZ8T8a14KpMFXeh8fMHwCLfNV/VNVciewn6jG4dKVA5YXMnpnASzOnLXxUh
|
||||
DO4BhceR0clhGM7kIwciC7pmDQolAd2ucoN5K/WMBjHrIFdax4CFnuMxon5C+Mq6mPzw5qdPFeua
|
||||
+8ES3K6MteTeQUEnEbK+/RPDtaqhI4D8y//EkYGrQGEGjz666GvlLQFvxSpI3Q+WLkbfsOG6zZTt
|
||||
66xgSZY2I50Neob9wdr/xfvv+wVqXoUMXdRMj8SvH/jLZ+Qcj4PJnz46hEnYnb76rjaHiSEMwirt
|
||||
8DY9T+MCbrEPJyE9kSh6MG8KynBV7cgsvvxWHZfzGCtwkV4D8tx7xyjGNVbdfbhDN0BWb0mCJ4T8
|
||||
6AhY/NbD4vLmXX02Dw6rc2yZxJuqTj0RH6Nbut+a9Oz7LqTbpUIet0lMQT5yDpTeRYPpU3qM7Omf
|
||||
OrhgYU+O1AzMJb5XjvrYjbcAPrbZSNORdmp43j4IGujRW5NUHqAwiifiUw552NEVCiHJaoIK+hlX
|
||||
zx0hPNIJkwNRDHMdCA3lc22F6Kye3h7zKj+DrwOwv/6mFy2fgRYqKIGPtx+tBYv52vSKvgsQMjnl
|
||||
nq/mXk3gvvbNQIXdyr71k0Bx5J9E/+q51XMbDrrociM/fCf5Rdf+6ttD/ekbxi7IBSeXT0j6SHVT
|
||||
ZAIXAwf0OkkMWTAXttE66A+Oga7ffMLuXrIgbZQzOUZylPOXWEigtObe14+bGWvM2oHTh+bkHtzl
|
||||
aN4/BUt17C4PODlgJuGbWwAbsmyJ230ykzm+qKnK0vroDtnGXDcgw6DxLYLulCMmbq2uh+9pL+DY
|
||||
SQ6juH2bdxioQo98bC3jHJ5MWzVewYiM6qR5K+HmFdj7vP/y897sm8aQwY//m9GDmdQgugxfloSR
|
||||
K/KLSd3Dowblsrng5C7tzeXnZx5Zp+MTRz7R6sxz8IsvKr78Y6WX3odffR5sHcHNhWiHKfzqE+Sm
|
||||
uzKaVdZpCr3kB2R3ow6o+9ECWEn4gPbFiJi0XIG79VKWIstJDg3lFX1QfUO+In+Ueybwt3Px80sC
|
||||
enMYWL79FTjPWCX3XfVi86fWHLVL/JQ83uXHXI3bBSo/f8wvmWziX79JumcZvKJQjn7nwVcfBuC2
|
||||
PZh8rTkUBm/uRnan1WpEWZ4sSBpfJI7XGs1S0dWFP7/7UA4qYMGhT/7i+R6LNsPPIVeU6oa0QG7a
|
||||
T7OYfVvDYm5LcqzisplyNbMh3u4lFNjZPNJt/5HBT997t2QP6PbhYyguMwnWr34Sja3Wg1+9G6eP
|
||||
G61VXrp/+aHBe5W5+lZZq2jjn1F04EdAxf5jgJ8etV7WK18Eusm2pLMOyBss7dcvWvidL2Dl6kts
|
||||
5dKwhcoms4NqHHg24dea/vX3dSI4Ed0sOYTn4dAGbbzF+eJCsEKgBgghC47RmoW6r6KSuFi5KlPD
|
||||
vvobTIX7wuI6vnIGmoyDwYvG5Iqd1RyfGp+pNA3vJBtrI2KSf52UzvhIAfvmH8ty6w4F6b7Hy6vw
|
||||
o/HupxP44gVeluieT1//AYjCOURaEojRfMNmCS6bI0/8XXLOmSL1dyC97w0yNsIjouTUKjC6XyRk
|
||||
BL7XjPXFpdB2uUOw7R/myN7mqEEnThN0dNdjxIeLEoIvf/j5n953njCot8P1gQzl+mleX7yHXSUG
|
||||
6MgHVUOJA1twuD/lQFzUcJzSkbZq7k4X4kikbGi4rCHMlvML2Xuzb5Yz1qkqBcYaiO10NdmqqKHy
|
||||
9d/xOtuhyQcxaKEZV2PA2gay7/kz+PrLxDtsX82y3a+T6pi6ggL5ZjfrqQA+OAtLhoyrsIIpfXQ1
|
||||
GFv9jX75OS2KtkLj1gmYa5xPRKIGGFAPxR0ySX9sVqG9TMpPDxyATxl9v+igroV0+enniLd8p4ej
|
||||
UdfE//KlhTtKMrxKvEvyg0cZuU2zDcEx0DF3iEeTlObnDIRt7yMU3+ZxahpDgew+FgHzdjqQuos3
|
||||
wFXIGQlwJnv05u8U0AYwRPHPX4Vho8B3xd+xuFvXhp2EfIB0/w7IcX/B4+IEcfHjr+TXn7/40MPH
|
||||
HYlIjwQ9X79+CDhYaUscctuOS+rpmWL2to90Pj3ma+qSAkrFpAWc+9JN9p0/qD9+f/j2V5Z11QAZ
|
||||
QxDZm2XM1932UIKPfhjIUZ/eEeWuhqNa5ytP9s6zYXOTVAG8hK1JDOV6GJfyKGtQrUOLxGgXRaIZ
|
||||
ZSXUNw+R6JWomOTLLwDYRQk5oDY2KfWqQqXnSsQdWfZMCDafGih++SLhj2/8/OoPu6rE8np+XGeB
|
||||
7/7iz1OhtMGkCGu1vVOP7MRLGC0342ZDbtR4lE0PwVx+84X2POa4t8UZUEsbC3hDtvmdL1SMSP5z
|
||||
gsknZiRqmztbbtisFVUWK3QIouLrh8V3db7tc/ylGc0iH0UXuoY/I4eea3Ntkn0Ibkt4JfoSwZwW
|
||||
p7JUlwrnxAtDnf2d511vVx3tjbFn82lTp5D4Q0qOcLEjfvLKVt341TZQ50M48j6xapgomo6877yU
|
||||
uo/GgE+L91GcVtooTZddBre9eyS7Kzebf/X1N/7IvGOfsRBcLQjCIxeI1Xobhbx3bKjvfER2UZjm
|
||||
tXgWbHh82XIgBJcd4GtNW0ElTQf0zCOXSfY6OTBlRYBM80FG2uDSUtu9+UK691RyvKSbAZ6Su4+8
|
||||
O+uaQSCrrGp8o+OXqhrjb97306OBRGMF4CEVFODubI3YL/Xo8bJi1PAFd1+LKzQjPlTqVu3criaH
|
||||
5LMzv/rxrOYL1NHztd+Bv3yzw9GEfv7PGrgTD1fP9TD39fuWi+fUwLymXMCxSIvEdf/U4JcPIRS8
|
||||
1maRw0ZTv/WJfn71Ejw5HjavskTpV+9JR8601Wve2Mjn8yOg7akO1e/9kasJaU4HKXPl6L2ZkQ72
|
||||
V5OKD1NU8bVdySnaT834Ohcr3PQeIYZ3xt5X7/pQ64GHAkPFjLlvJMIp50rkHF5XtoSvtIVK6ubI
|
||||
knYqoCigrvrXP5OINi6Yqj3cR1hB+5sKPJY0TIbXIr2hvGeDtwhh7YCv3sbiWi5s2fZ7CP/5bQX8
|
||||
17/+/Pkfvw2Drn8Ur+9iwFws83/8n1WB/5D+Y+qy1+vvGgKesrL459//ewPhn8/Yd5/5f859W7yn
|
||||
f/79Z/t31eCfuZ+z1/9z+F/fB/3Xv/4XAAAA//8DAHXQUXneIAAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93bd468e08302506-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 02:26:59 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '140'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-678b766599-k7s96
|
||||
x-envoy-upstream-service-time:
|
||||
- '61'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999994'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_22e020337220a8384462c62d1e51bcc6
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent
|
||||
with extensive role description that is longer than 80 characters. You have
|
||||
access to specific knowledge sources.\nYour personal goal is: Provide information
|
||||
based on knowledge sources\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: What is Brandon''s
|
||||
favorite color?\n\nThis is the expected criteria for your final answer: The
|
||||
answer to the question, in a format like this: `{{name: str, favorite_color:
|
||||
str}}`\nyou MUST return the actual complete content as the final answer, not
|
||||
a summary.Additional Information: Brandon''s favorite color is red and he likes
|
||||
Mexican food.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
|
||||
"gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1136'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=RAnX9bxMu6FRFRvWLdkruoVeTpKeJSsewnbE5u1SKNc-1746584818-1.0.1.1-08O3HvJLNgXLW2GhIFer0bWIw7kc_bnco7201aq5kLNaI2.5R_LzcmmIHlEQmos6TsjWG..AYDzzeYQBts4AfDWCT__jWc1iMNREXvz_Bk4;
|
||||
_cfuvid=hVuA8E89306pCEvNIEtxK0bavBXUyyJLC45CNZ0NFcY-1746584818774-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFNNb+IwEL3nV4x8JqsQYAu50UOl7mG/JE5LFU3tSXBxPJZt6K4Q/33l
|
||||
QCHtdqVeInnevOf3ZpxDBiC0EhUIucEoO2fy29W3zq2+L2dmpb4sFj+2X5dm80Td9qe8j2KUGPz4
|
||||
RDK+sD5J7pyhqNmeYOkJIyXV8c3082w+nY8XPdCxIpNorYv5lPNOW52XRTnNi5t8PD+zN6wlBVHB
|
||||
rwwA4NB/k0+r6LeooBi9VDoKAVsS1aUJQHg2qSIwBB0i2pPnMyjZRrK99Xuw/AwSLbR6T4DQJtuA
|
||||
NjyTB1jbO23RwLI/V3A4WOyogrW49WgV27UYQYN79jpSLdmwT6AntRbH4/BOT80uYMptd8YMALSW
|
||||
I6a59Wkfzsjxks9w6zw/hjdU0Wirw6b2hIFtyhIiO9GjxwzgoZ/j7tVohPPcuVhH3lJ/XTmenPTE
|
||||
dX0DdHYGI0c0g/pkPnpHr1YUUZsw2ISQKDekrtTr2nCnNA+AbJD6XzfvaZ+Sa9t+RP4KSEkukqqd
|
||||
J6Xl68TXNk/pdf+v7TLl3rAI5PdaUh01+bQJRQ3uzPk/CX9CpK5utG3JO69PD69xdTFZlPOyLBaF
|
||||
yI7ZXwAAAP//AwCISUFdhgMAAA==
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 93bd46929f55cedd-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 02:27:00 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '394'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '399'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999749'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_08f3bc0843f6a5d9afa8380d28251c47
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -6,7 +6,7 @@ interactions:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
@@ -151,7 +151,7 @@ interactions:
|
||||
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 931fcf607c16eb34-SJC
|
||||
- 93bd535cca31f973-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
@@ -159,14 +159,14 @@ interactions:
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:53 GMT
|
||||
- Wed, 07 May 2025 02:35:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=CncSMPCav.9EJL3emmM0sTqugx5GN6_Oy8JPFBssVho-1744933673-1.0.1.1-Q1XMvHbQdrfEWkkCYeeNHwFdZ1NpjAGJ_0jOUYIk_APelFe7nCanjW_xlOj12b3JQql9.iWQDiHvCeYJDTWkdxnNiMQOEiFMYHX5YZXUuJs;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:53 GMT; domain=.api.openai.com; HttpOnly;
|
||||
- __cf_bm=FaqN2sfsTata5eZF3jpzsswr9Ry6.aLOWPP..HstyKk-1746585343-1.0.1.1-9IGOA.WxYd0mtZoXXs5PV_DSi6IzwCB.H8l4mQxLdl3V1cQ9rGr5FSQPLoDVJA5uPwxduxFEbLVxJobTW2J_P0iBVcEQSvxcMnsJ8Jtnsxk;
|
||||
path=/; expires=Wed, 07-May-25 03:05:43 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=unfPTYCpF5COtm5PuZDuaJqlhefP0iibfjsXHc9lKq0-1744933673515-0.0.1.1-604800000;
|
||||
- _cfuvid=SlYSO8wQlhrJsTTYoTXd7IBl_D9ZddMlIzW1PTFiZIE-1746585343627-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
@@ -185,15 +185,15 @@ interactions:
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '75'
|
||||
- '38'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-8687b6cbdb-4qpmr
|
||||
- envoy-router-6fcbcbb5fd-pxw6t
|
||||
x-envoy-upstream-service-time:
|
||||
- '46'
|
||||
- '41'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
@@ -207,32 +207,33 @@ interactions:
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_b8c884a7fe2bd4732903ecbdc632576d
|
||||
- req_39d01dc72178a8952d00ba36c7512521
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
You have access to specific knowledge sources.\nYour personal goal is: Provide
|
||||
information based on knowledge sources\nTo give my best complete final answer
|
||||
to the task respond using the exact following format:\n\nThought: I now can
|
||||
give a great answer\nFinal Answer: Your final answer must be the great and the
|
||||
most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
|
||||
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
body: '{"messages": [{"role": "system", "content": "Your goal is to rewrite the
|
||||
user query so that it is optimized for retrieval from a vector database. Consider
|
||||
how the query will be used to find relevant documents, and aim to make it more
|
||||
specific and context-aware. \n\n Do not include any other text than the rewritten
|
||||
query, especially any preamble or postamble and only add expected output format
|
||||
if its relevant to the rewritten query. \n\n Focus on the key words of the intended
|
||||
task and to retrieve the most relevant information. \n\n There will be some
|
||||
extra context provided that might need to be removed such as expected_output
|
||||
formats structured_outputs and other instructions."}, {"role": "user", "content":
|
||||
"The original query is: What is Brandon''s favorite color?\n\nThis is the expected
|
||||
criteria for your final answer: Brandon''s favorite color.\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.."}], "model":
|
||||
"gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '926'
|
||||
- '992'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
@@ -264,18 +265,19 @@ interactions:
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jJNNi9swEIbv+RWDLr0ki/NBvm5NYUsplFK29NAuZiKNnWlkjVaSk02X
|
||||
/PdiJxtn2y30YrCeecfvvCM/9QAUG7UEpTeYdOXtYPXp7vbLw7evm4ePmObvt/jrsVgVn9/5dbhj
|
||||
1W8Usv5JOj2rbrRU3lJicSesA2GiputwNpksxuPpbNyCSgzZRlb6NJjIoGLHg1E2mgyy2WA4P6s3
|
||||
wpqiWsL3HgDAU/tsfDpDj2oJWf/5pKIYsSS1vBQBqCC2OVEYI8eELql+B7W4RK61/gGc7EGjg5J3
|
||||
BAhlYxvQxT0FgB/ulh1aeNu+L2EV0BlxbyIUuJPAiUCLlQAcwUkCX68ta3sAI7quyCUywA72bMge
|
||||
AHfIFteWYOtkb8mUBFHqoCneXPsLVNQRm4xcbe0VQOckYZNxm8z9mRwvWVgpfZB1/EOqCnYcN3kg
|
||||
jOKauWMSr1p67AHct5nXL2JUPkjlU55kS+3nhtP5qZ/qVt3R0fQMkyS0V6rFpP9Kv9xQQrbxamtK
|
||||
o96Q6aTdirE2LFegdzX1325e632anF35P+07oDX5RCb3gQzrlxN3ZYGaP+FfZZeUW8MqUtixpjwx
|
||||
hWYThgqs7el+qniIiaq8YFdS8IFPl7TweTZejOajUbbIVO/Y+w0AAP//AwA4a1/QsgMAAA==
|
||||
H4sIAAAAAAAAAwAAAP//jFJNa9wwFLz7V4h36WVdvF5nv46BQEsPpYWeSjCK9GwrlfVU6XlpCfvf
|
||||
i+zN2klT6EUHzZvRzOg9ZUKA0XAUoDrJqvc2v/32+fTh68e7bel/UtXFR6/vKv+FP6191cMqMejh
|
||||
ERU/s94r6r1FNuQmWAWUjEl1vau2N/ubTbUZgZ402kRrPecV5b1xJi+LssqLXb7eX9gdGYURjuJ7
|
||||
JoQQT+OZfDqNv+AoitXzTY8xyhbheB0SAgLZdAMyRhNZOobVDCpyjG60fhuk0+TeRdHIEwXDKBRZ
|
||||
CsvxgM0QZbLsBmsXgHSOWKbIo9H7C3K+WrPU+kAP8RUVGuNM7OqAMpJLNiKThxE9Z0LcjxUML1KB
|
||||
D9R7rpl+4PjcereZ9GBufka3F4yJpV2SDqs35GqNLI2Niw5BSdWhnqlz4XLQhhZAtgj9t5m3tKfg
|
||||
xrX/Iz8DSqFn1LUPqI16GXgeC5j28l9j15JHwxAxnIzCmg2G9BEaGznYaVsg/o6Mfd0Y12LwwUwr
|
||||
0/i62BzKfVkWhwKyc/YHAAD//wMAwl9O/EADAAA=
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 931fcf649bdbed40-SJC
|
||||
- 93bd535e5f0b3ad4-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
@@ -283,14 +285,14 @@ interactions:
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:54 GMT
|
||||
- Wed, 07 May 2025 02:35:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=8vySwO0xqpm0u93_1_rXQwTeEIWa2ei3_CD5sAdoo3o-1744933674-1.0.1.1-iqZDpH5poOUp4Rcnhfrb0N2Z0c2662RBiPEcx_gefNW.m3tBA3qyFa8tmFv7PitH8u9vyYK7jxUwy4lPiSi830QWNbTMgCMTbrJ7iaUV7hY;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:54 GMT; domain=.api.openai.com; HttpOnly;
|
||||
- __cf_bm=4ExRXOhgXGvPCnJZJFlvggG1kkRKGLpJmVtf53soQhg-1746585343-1.0.1.1-X3_EsGB.4aHojKVKihPI6WFlCtq43Qvk.iFgVlsU18nGDyeau8Mi0Y.LCQ8J8.g512gWoCQCEakoWWjNpR4G.sMDqDrKit3KUFaL71iPZXo;
|
||||
path=/; expires=Wed, 07-May-25 03:05:43 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=IXAyT8eWpFERM53ngcYNmaqhocfGbOHWSoe7SFNdoGI-1744933674288-0.0.1.1-604800000;
|
||||
- _cfuvid=vNgB2gnZiY_kSsrGNv.zug22PCkhqeyHmMQUQ5_FfM8-1746585343998-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
@@ -300,16 +302,133 @@ interactions:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '167'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '174'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999783'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_efb615e12a042605322c615ab896925c
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
You have access to specific knowledge sources.\nYour personal goal is: Provide
|
||||
information based on knowledge sources\nTo give my best complete final answer
|
||||
to the task respond using the exact following format:\n\nThought: I now can
|
||||
give a great answer\nFinal Answer: Your final answer must be the great and the
|
||||
most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
|
||||
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '926'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=4ExRXOhgXGvPCnJZJFlvggG1kkRKGLpJmVtf53soQhg-1746585343-1.0.1.1-X3_EsGB.4aHojKVKihPI6WFlCtq43Qvk.iFgVlsU18nGDyeau8Mi0Y.LCQ8J8.g512gWoCQCEakoWWjNpR4G.sMDqDrKit3KUFaL71iPZXo;
|
||||
_cfuvid=vNgB2gnZiY_kSsrGNv.zug22PCkhqeyHmMQUQ5_FfM8-1746585343998-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xTTU/bQBC951eM9tJLghITIORWVKFSDq0qoR5aZE12x/aW9Yy7O06IEP+9shPi
|
||||
0FKpF0ueN+/tm6+nEYDxzizB2ArV1k2YXN19Xt/cVtnjh+2XbLH99W399a759HFzy8Xs0Yw7hqx+
|
||||
ktUX1omVugmkXngH20io1KnOLubnZ4uz0/m8B2pxFDpa2ehkLpPas59k02w+mV5MZos9uxJvKZkl
|
||||
fB8BADz1384nO3o0S5iOXyI1pYQlmeUhCcBECV3EYEo+KbKa8QBaYSXurd8AywYsMpR+TYBQdrYB
|
||||
OW0oAvzga88Y4H3/v4SriOyE3yUocC3RK4GVIBF8AhaFpl0Fb8MWnNi2JlZy4Bms1LVw2AKu0Qdc
|
||||
BYIHlk0gVxIkaaOldALXEgGtbSMqgedCYo1dP8fgFTbSBgcrghUlBRXA9PBiB5yPZDVsQSJY4dQG
|
||||
hYZiks77Xh82FUUCrXw6Focat51sqjCSOzluU6SiTdiNitsQjgBkFu3Z/YDu98jzYSRByibKKv1B
|
||||
NYVnn6o8Eibhrv1JpTE9+jwCuO9H376apmmi1I3mKg/UPzc7X+z0zLBxAzq/3IMqimGIZ7OL8Rt6
|
||||
uSNFH9LR8hiLtiI3UIdNw9Z5OQJGR1X/7eYt7V3lnsv/kR8Aa6lRcnkTyXn7uuIhLVJ3kP9KO3S5
|
||||
N2wSxbW3lKun2E3CUYFt2J2JSdukVOeF55JiE/3uVoomn55eZossm15Ozeh59BsAAP//AwAaTaZd
|
||||
OQQAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93bd53604e3f3ad4-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 02:35:45 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '489'
|
||||
- '933'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '936'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
@@ -317,13 +436,13 @@ interactions:
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999801'
|
||||
- '149999802'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_151f9d0b786f2022f249ee20ea108b43
|
||||
- req_0001c38df543cc383617c370087f0ee3
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
|
||||
@@ -6,7 +6,7 @@ interactions:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
@@ -150,8 +150,10 @@ interactions:
|
||||
Mj9mH/7zuxXwX//68+d//G4YtN0jf30vBkz5Mv3H/7kq8B/if4xt+nr9vYZAxrTI//n3/76B8M9n
|
||||
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 931fceef786ded38-SJC
|
||||
- 93bd57189acf15be-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
@@ -159,14 +161,14 @@ interactions:
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:35 GMT
|
||||
- Wed, 07 May 2025 02:38:16 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=fj4RMXSXRDQjE2CFC6CGC3dVcJ8cl2Cbu8alijwMHA8-1744933655-1.0.1.1-M3c3AI4XQa.0GJoanNACuOm2aEL4xjqHR1grxIP3olFvq3e0eFHwQTvCF20YwR_OLiMJUH87eNUwgziawMccsxjR9OVZyDr5._5Wts6CrqA;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:35 GMT; domain=.api.openai.com; HttpOnly;
|
||||
- __cf_bm=VGdrMAj2834vuX5RC6lPbHVNwWHXnBmqLb0kAhiGO4g-1746585496-1.0.1.1-kvgkEGO9fI9sasCfJjizGBG4k82_KhCRbH8CEyFrjJatzMoxhM0Z3suJO_hFFH13Wyi2wThiM9QSPvH3dddjfC7hC_tscxijZwiGqtCVnnE;
|
||||
path=/; expires=Wed, 07-May-25 03:08:16 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=MSkpJsQZtdyIGvrl2mIwy0a_We8H6CIrS7etFgRBl2Y-1744933655703-0.0.1.1-604800000;
|
||||
- _cfuvid=sAoMYVxAaEFBkQttcKO7GlBZ5NlUNUIaJomZ05pGlCs-1746585496569-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
@@ -178,22 +180,20 @@ interactions:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '140'
|
||||
- '69'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-84959bbcd5-rzqvq
|
||||
- envoy-router-7d545f8f56-jx5wk
|
||||
x-envoy-upstream-service-time:
|
||||
- '110'
|
||||
- '52'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
@@ -207,32 +207,33 @@ interactions:
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_dd3ef61c4765b46ed7db80ddfe261f41
|
||||
- req_73f3f0d371e3c19b16c7a6d7cc45d3ee
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
You have access to specific knowledge sources.\nYour personal goal is: Provide
|
||||
information based on knowledge sources\nTo give my best complete final answer
|
||||
to the task respond using the exact following format:\n\nThought: I now can
|
||||
give a great answer\nFinal Answer: Your final answer must be the great and the
|
||||
most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
|
||||
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
body: '{"messages": [{"role": "system", "content": "Your goal is to rewrite the
|
||||
user query so that it is optimized for retrieval from a vector database. Consider
|
||||
how the query will be used to find relevant documents, and aim to make it more
|
||||
specific and context-aware. \n\n Do not include any other text than the rewritten
|
||||
query, especially any preamble or postamble and only add expected output format
|
||||
if its relevant to the rewritten query. \n\n Focus on the key words of the intended
|
||||
task and to retrieve the most relevant information. \n\n There will be some
|
||||
extra context provided that might need to be removed such as expected_output
|
||||
formats structured_outputs and other instructions."}, {"role": "user", "content":
|
||||
"The original query is: What is Brandon''s favorite color?\n\nThis is the expected
|
||||
criteria for your final answer: Brandon''s favorite color.\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.."}], "model":
|
||||
"gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '926'
|
||||
- '992'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
@@ -264,20 +265,19 @@ interactions:
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLBbtswDL37KwhdeokLx0mbOLd2WIAC63YZdthWGIpMO9xkUZDkpEWR
|
||||
fx/kpLG7dUAvBszHR733yOcEQFAlViDUVgbVWp3efv66LvKPRd7Q/f57vtaf7m6ePjze774sv23E
|
||||
JDJ48wtVeGFdKm6txkBsjrByKAPGqdPFfF7MZtdXVz3QcoU60hob0jmnLRlK8yyfp9kinS5P7C2T
|
||||
Qi9W8CMBAHjuv1GnqfBRrCCbvFRa9F42KFbnJgDhWMeKkN6TD9IEMRlAxSag6aXfgeE9KGmgoR2C
|
||||
hCbKBmn8Hh3AT7MmIzXc9P8ruHXSVGwuPNRyx44CgmLNDsjDRnd4OX7GYd15Ga2aTusRII3hIGNU
|
||||
vcGHE3I4W9LcWMcb/xdV1GTIb0uH0rOJ8n1gK3r0kAA89NF1r9IQ1nFrQxn4N/bPTa+Xx3li2NgI
|
||||
LU5g4CD1qL5cTN6YV1YYJGk/Cl8oqbZYDdRhU7KriEdAMnL9r5q3Zh+dk2neM34AlEIbsCqtw4rU
|
||||
a8dDm8N40P9rO6fcCxYe3Y4UloHQxU1UWMtOH89M+CcfsC1rMg066+h4a7Uts1mRL/M8KzKRHJI/
|
||||
AAAA//8DALRhJdF5AwAA
|
||||
H4sIAAAAAAAAA4xSy27bMBC86yuIvfRiFbKs+HVLUKBFL0YPRosWgcCQK5kNxSXItZEi8L8XlBxL
|
||||
aVOgFx44O8OZ4T5nQoDRsBWgDpJV521+t989PX78tF/fnr5X+w/fdgv6WnxuWruzX25hlhj08BMV
|
||||
v7DeK+q8RTbkBlgFlIxJdb6qljfrm2qz7IGONNpEaz3nFeWdcSYvi7LKi1U+X1/YBzIKI2zFj0wI
|
||||
IZ77M/l0Gp9gK4rZy02HMcoWYXsdEgIC2XQDMkYTWTqG2Qgqcoyut34XpNPk3kXRyBMFwygUWQrT
|
||||
8YDNMcpk2R2tnQDSOWKZIvdG7y/I+WrNUusDPcQ/qNAYZ+KhDigjuWQjMnno0XMmxH1fwfFVKvCB
|
||||
Os810yP2z81Xi0EPxuZHdHnBmFjaKWkze0Ou1sjS2DjpEJRUB9QjdSxcHrWhCZBNQv9t5i3tIbhx
|
||||
7f/Ij4BS6Bl17QNqo14HHscCpr3819i15N4wRAwno7BmgyF9hMZGHu2wLRB/RcauboxrMfhghpVp
|
||||
fF0sNuW6LItNAdk5+w0AAP//AwDAmd1xQAMAAA==
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 931fcef51a67f947-SJC
|
||||
- 93bd571a5a7267e2-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
@@ -285,14 +285,14 @@ interactions:
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:36 GMT
|
||||
- Wed, 07 May 2025 02:38:17 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=7agwu5JV1OJvEFNhvfvqdgWf.HoMyIni9D85soRl3WE-1744933656-1.0.1.1-dKUwAZnjjuuiswFKWGsxpwHNBJUpjhYlZvfZpyNQIejxEJrXMCppgPvtQ9wa4SKezLmKqftvn_H.bAx_AEFJD2EWm5V6R_uK8.odneErR6A;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:36 GMT; domain=.api.openai.com; HttpOnly;
|
||||
- __cf_bm=62_LRbzx15KBnTorpnulb_ZMoUJCYXHWEnTXVApNOr4-1746585497-1.0.1.1-KqnrR_1Udr1SzCiZW4umsNj1gQgcKOjAPf24HsqotTebuxO48nvo8g_X5O7Mng9tGurC0otvvkjYjsSWuRaddXculJnfdeGq5W3hJhxI21k;
|
||||
path=/; expires=Wed, 07-May-25 03:08:17 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=LdTrzwZYrB6ZyQLY7NdaaHVpDVFvIjYm3arSpNy87wU-1744933656504-0.0.1.1-604800000;
|
||||
- _cfuvid=LPWfk79PGAoGrMHseblqRazN9H8qdBY0BP50Y1Bp5wI-1746585497006-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
@@ -305,11 +305,130 @@ interactions:
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '540'
|
||||
- '183'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '187'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999783'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_50fa35cb9ba592c55aacf7ddded877ac
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
You have access to specific knowledge sources.\nYour personal goal is: Provide
|
||||
information based on knowledge sources\nTo give my best complete final answer
|
||||
to the task respond using the exact following format:\n\nThought: I now can
|
||||
give a great answer\nFinal Answer: Your final answer must be the great and the
|
||||
most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
|
||||
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '926'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=62_LRbzx15KBnTorpnulb_ZMoUJCYXHWEnTXVApNOr4-1746585497-1.0.1.1-KqnrR_1Udr1SzCiZW4umsNj1gQgcKOjAPf24HsqotTebuxO48nvo8g_X5O7Mng9tGurC0otvvkjYjsSWuRaddXculJnfdeGq5W3hJhxI21k;
|
||||
_cfuvid=LPWfk79PGAoGrMHseblqRazN9H8qdBY0BP50Y1Bp5wI-1746585497006-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFNNb9swDL3nVxC67JIMSZo0aW4ttmI77bIO3UdhMBLtcJVJQZKTBkX/
|
||||
+2CnrdOuA3YxYD4+8lGPvB8AGHZmBcZuMNs6+NHF1Zc7Xycnp/Rhdv3x2/fL82r+9Tr/uDrxn8yw
|
||||
Zej6N9n8xHpvtQ6eMqscYBsJM7VVJ4vZ6Xw5n50tOqBWR76lVSGPZjqqWXg0HU9no/FiNFk+sjfK
|
||||
lpJZwc8BAMB99211iqM7s4Lx8ClSU0pYkVk9JwGYqL6NGEyJU0bJZtiDViWTdNI/g+gOLApUvCVA
|
||||
qFrZgJJ2FAF+ySULejjv/ldwEVGcyrsEJW41ciaw6jUCJxDNEJq1Z+v3cCu6E9AIuEX2uPYELGC1
|
||||
rlU60JOrCJI20VIaAiYIFJO2zUKkkiKJpQSeb+lVrwQYCfI+sEXv9xAibzEToLhukC3GPezYkd8D
|
||||
1ioVsDjesmvQJ9hx3mhzpDRtMJIDllJjja1/74/fKlLZJGz9ksb7IwBFNHf5nUs3j8jDsy9eqxB1
|
||||
nV5RTcnCaVNEwqTSepCyBtOhDwOAm87/5oWlJkStQy6y3lLXbnK6PNQz/dr16GzxCGbN6Pv4dDIf
|
||||
vlGvcJSRfTraIGPRbsj11H7dsHGsR8DgaOq/1bxV+zA5S/U/5XvAWgqZXBEiObYvJ+7TIrVX+a+0
|
||||
51fuBJtEccuWiswUWyccldj4w62YtE+Z6qJkqSiGyIeDKUMxPjmbLqfT8dnYDB4GfwAAAP//AwA/
|
||||
0jeHPgQAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93bd571c9cf367e2-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 02:38:18 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '785'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '931'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
@@ -323,7 +442,7 @@ interactions:
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_8837be6510731522fd5ac4b75c11d486
|
||||
- req_9bf7c8e011b2b1a8e8546b68c82384a7
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
|
||||
1899
tests/cassettes/test_docling_source.yaml
Normal file
333
tests/cassettes/test_get_knowledge_search_query.yaml
Normal file
@@ -0,0 +1,333 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"input": ["Capital of France"], "model": "text-embedding-3-small", "encoding_format":
|
||||
"base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '96'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1R6Ww+yyrbl+/4VK+uV3pGLUFXrjbsISKEgYqfTAUEEBeRWQJ2c/97Br/t094uJ
|
||||
WIZUzVljjjHm/I9//fXX321a5Y/x73/++vtTDuPf/217liVj8vc/f/33f/31119//cfv8/9bmddp
|
||||
nmVlU/yW/34smyxf/v7nL/a/nvzfRf/89XfIFDaOu8OtX0ttiOESiQE+pXaRrnxot/BYFAy5kEUP
|
||||
ONOCJaymU0mOVijT9bPeH/BZXg4kYptvv4S7/QQztojx/ay+Ul5kFR8F+9XEUeizziKFvQ+uIr4T
|
||||
E107h77ZSw1r9zPiUCqSYL4SxUbv5usRy5ZLusgkecBnVXUYOzGbrs25voiDG1c4vEhCTxHz8GCW
|
||||
BAdvL9+sngv6iwv3uxDj7HWgPUkXUIBvFy3YnkpJo7AQIvipwoIEgINatQMnCZbG54EP4NtQGgm1
|
||||
j84HJBDbhxeNI+7DBsboBuS5G27VeLUbGV4TmSEyXDVNYO2qg963GbCJrrazpK3iIk9XzQn1/aGa
|
||||
HvHpDatPvZDj9a5UvL67PpApmxGO6maite0dfTjujIfHLN4F0Dun8IiTkpSc0vyuLRl9hRAz+on4
|
||||
10ruubG6mchYruLEKW2rfYOdyUKFSzKs4XeRsjXNYsmeAMVeXOk9n7hjDu9xnGNTZ57p3DVSDWr5
|
||||
fMXpOdNo4zDGG1m7/QtHFVHoPOe4gw9gDliBlhTQ8z7hRY3aLjneZ4Uu8u3TAfe1ODjU9lrFPfww
|
||||
hvez5uNTtZcBx1Zhglg5CrF231NAhv5ewND/BCQ5FX2/CJdjAW8sKYkyly7lHFe2AdR9HZ9Jc6EC
|
||||
8O8qelzOmcdoegrac2xJ0KLvI8be8UXp28I+fJ/uiCQH4ews2KhKpFTMwxPPo5S+M+Oxh4esr7Hb
|
||||
TDigi6VIaPf8aMS6GVol/PItMgbfA8z3HghaTguYSWJFrmr1AovFCxN6lWOCH8Zy0Cj3tG24rysZ
|
||||
n9r61c8ncz9A32GbqY46xeETxDNIpUswCWHyqRbOfMVIaG0bG4uy9Et2iGSYhTeJqEEyARpwbgEt
|
||||
fzpMxzJXU9bImBBMrPAicq0fwCyP9wiGaWsRR481jeNRCZHD8pG348i7IurdyaE9iRTHha1X0yeE
|
||||
NbicCPCK19j063LnYvSxK5+oxyWshHE3y2h/CD/kfrjXdEaVFIL3y9fwbQWHgApiPMGnM2U4gotf
|
||||
rZ/1nKNkskqcWfRTrXUytNBlfAWr5VD1rak4ETSWm4iVsLxt+8tVwDlaOfWqg7XlfHY6cDjfPGIy
|
||||
4EFJveg+1JsHxmnzzgDl35IHDwafeKabnYLRmXoXamJ1m+ZoDTQara0OOTVZsHr21YoviMHCV9w2
|
||||
Xhyo0Fma0VZh/dXfON9HRTV/7vMAFpFeJxbnZ20Rs28L+x0YiFcKNzDj21P95SexzvRYcVejYNHL
|
||||
3p9w6l15ZzaGYQZWc7gQo/rMKQEr1P+s12KUOD+8gU/9ir193EcVh3mkwnF3eGDjcbS1LV9l8aiE
|
||||
lbfU4hqMtzd6S8X3POFnBStnNPtbjcDbqLGm0n06n77BCkrCD948wBDMDjY8+L7vWRztmSPoLobs
|
||||
/fCAaLN67eeMnxKYHFWJHOpFTvlQEfcwPpWLR7+T6nBPHQ9S9MI6ceprH0yrM/OI8IDBTs6jdNu/
|
||||
DN9WTr3yzmgaX5ATD3fr0/R4tjlWbCmuMlpPOSTqnak0sgPGHo6vm7bhs619j3KhIsKLzDT/8EeV
|
||||
1hpe4nkhF+HtaLPNtTnkXo2LtUA2KUfDD4SscAUY21nSc339iVEYXiSiAuAEfKu9BnSa9hl+HG4i
|
||||
WMTs1aGT3p/w7z7wL7dc0RLgAeMrbKtB08gM7kdXw9nexVrfiZGOPs9uT3R6PmksFWUT1dXxhuWD
|
||||
++l7duVnMGVqiPVdGjpcC+YHhF+uw9bK7bWFrcIYct54xGcd2g6b9t7lT71zGdpRKl2frhTXkeJ9
|
||||
TEnVKDafDxh0yteD3rOp1rgp3uiZ2Tnxjq2vsVdrlKRLvC5Y9vRLT+tFv6DrpZ+JWoHIWVRpfcMt
|
||||
HsSNn2oq3OsgRiqlAbaSl+1wVXSckWv6N5I43Q3wrfdkwHzTHyRWXyxYSb/av/wg3m6t6cJPsJTU
|
||||
lDFxoE2rs6T3ywxD+rxO+8ukpUIowTesh9kh0fhM6dqy1IXtM9OxfBtdwKUjTuA8ePrEcrmiCTwT
|
||||
STCTvSvOZOPjbPjeIvdFHeKJitivfqDP6G7CjBzN/KSNuyV4w21/WAuA3As69GPIvT4uPsx2oq1R
|
||||
zQ+wegGbGLWJtCH83k0QNKHhzVnUpHzifh4ofz1PHmdKpdOuB30AeUY/3jc+Tz2NKrH9xYPkr7Oh
|
||||
Cca7f0PXOb2wdXocA5Zw7AOBV6wSP9bHtPM5wkrIHQdispqv0fNBidD3XJUT3zHYoW7nQyh809Hb
|
||||
H42vI2QobaEa+0+c4XB11putTSA5yhLxrDFKZ41QHd4WDXit8S2rqTo+HvA1zphoPP4GRHncV4jl
|
||||
YzR9g2oE81l+mGLTgpToVcqAWc7LPUpQhDzY2FfA8rkO4Wjw2OOy+FEtmjBc4GMML0S1StXhbOH1
|
||||
RkrTavhxApPGqcevDleddYhy+egBOUmCj2q3GT3JSz/BELGi+wd/5LfHBvRY7WTYIQETXDSPlN3w
|
||||
AgyJhXDEAEgH7RTLcOc5KjlcTueeFtkl+lMPsEISh33czh26z7gnp8K9g29/kic0WAMkh8WgQBh0
|
||||
5gKKy74ix0vLassrjy5o6d4SkfkP0eawuEyQu5V3YufKBcyXbIboy6bvaTl1di+8IzRImSWY09kl
|
||||
PhAI/A6QdbuBKLcG9QMPvrzQTt8nztr6VbHzwJhSFKcslmP9lAp5UXQIjFPiOUTrwWqpX+tP/mtc
|
||||
iPvJCZZQcly+J9rXtirhtexZIJhVTzb8C9aIGy1YfS8x8T7tJ2gMl1/h/az4ntBbRb9AtS7gEoEA
|
||||
O80bgd/9hnyvCuQQJp+eWnp5kXhcatOeQwZdNHy2YWsZPLbXz6tarvfrAz1hKGKv65j+veEV5BhT
|
||||
nahzKpxZqqMVupNRYVvMvJ7/Sv0FbnhHrLpfHHrX5hB+PE8gWos9sAiP3Sy1RnzBxzQK+3l9v2oQ
|
||||
0uyKQ3RuANH8kw6/kxzjbOOXQmSGK/RVTCdwMqtq3Yn6/MtPbL1fOhBOr0uNBll38F2LngHHFZ4O
|
||||
F/4wEcv4qpUQa14Ob2eh8Fgdvx1ivKs3xPWww7kIg2BelEcLLr4tT8vjyqRrVS/RH34h2+2gLexq
|
||||
s7DjvHxipUJK6cangcZxM06uwSWgOHcvcMoDwXvkDy7o+ciZ4XqpP8RRHezwQ/JiID/bxSS6vaJ9
|
||||
X/arQyKBPTGGbglG4SGscPceBhzQQ9ovtzf3hvU1QRPParO27KWwg/zJ2OHj1B6D9ZuZMVQiV8VB
|
||||
9DgFg8a8CpAnB8YTZlvSfvkNtXlf4xv4HgArpW0BNzwm6l37BlRT8g4wnxudqslQg8mLAhOZgoG8
|
||||
9cdHEsRDIKwumnapmPbzQJMJHoKaxcrGX3/4iMDpBYnmfc1KcLsYwvq4PxLtNA2UCF3ng+h10qdd
|
||||
nRFap50aojJPeqx+TnPaSb26R8wcf7G/C0j1zbuOgXs3IB6bjExA90dBB32sF95Mj1YvbN/Rxk9I
|
||||
Yow64HOQFKChzJngJv72q6r2rOSwbIQV5VgFg4dWH5k3scTu+q0Ambs3A1fR8vBt/I7BPBwOCdyJ
|
||||
fEiMV6pro+d9JVCY94FYXzcDvaWXPkpFGeN4qY2eqyJlRnBfH7yPfmTpwOc6I92mMMJX5MQVbYyO
|
||||
lcbbxd/yray2/V/g5/o2cZ7qi7a691eC1FoVf3yhmp+q+YbvCjUkQ/lXowa9qHA9PSBR7WFK5/KR
|
||||
WX/qXYfYCfQ5m13gGLy/2FuPQz/o+rlD6mwM5NZbcs9zXlLD4/1IyHH+wIrGZ1aCA7l88AlWJFjk
|
||||
OjQRMydfD5qDDFojYyL44+PO+lyDRa1OM7zlzh5rytugQh2QGEz6ycAGcvbV+sZ6iXKDuU/rwW/o
|
||||
d+MbwJyHFWdfjnVmfgAJ+O4PDnHzB5cSK/AHwJeJOQmrPFakbNccJRwi2OqRo/G96Tykl1buvPHw
|
||||
clK2ZVAJxITTSJ4chWAKqBPDTb8RV3zNdEnvyQq9em9h88GE/WrRtQU/fXXAg06Xj36SITeKb2Ie
|
||||
P0Ww6GaygmOUO8RwzmUwd/xLRdM9GsnJrJd+0x8h2PALO1nTASp/Yhvm/JvDxySVwAgnkUcbvyWG
|
||||
SC4O6198HlbOtcAGhiCYHkzBI2ZiGqw97RP9jNXThGlkn7APhKPDO8ESAeMqGMTK0Fwt+nOXgy6S
|
||||
NSID4aiRXVyyUGVYi4RZ7lA+oE4CZRrJRNvPvEM13zChJr5u3szEVzoPV8SDa6Iy+CSpaiXgQZog
|
||||
aOgJK2fz7FBWGX3QS7DE2vHm9iy+3VT49mJ54saPowmPq1xA4SSJZLu/gGbJ/EblmwrePn1UYCK7
|
||||
eoYnK2KJ9uNb2KgKVLZZThyLVMEsj+fwd37TaxrELV8T9Yd3WJ51tV+b8+RDJN/DTZ/LgQAPlxYq
|
||||
OU+J+Rk4rVX2hxruibNMHGNeAqGLqIys0rLIvTqCYH6xCgPZ83rFch+9wNAyXAmsTH16/J75gtWF
|
||||
7gPmXmOTw+GLUrryJxPeB93A12BXgpmmvQtxpcX48OHNfnnqeIK/88eTH/XLcjR4KJbu5efngHm3
|
||||
93kwOe+AeJirwLpOwR5BE7XE84vM4YmX+fAwV19vJ5lvSpMskyHoZIpl6B37ae/DCGqWBIjiiZlG
|
||||
iywJYQpG6rH53ajmvBRXsHfPBGNmycDS5EHNFfYhmCStmOliGZMLW2Xq8UkJb9Wq3ezo5/8Qyw7T
|
||||
fmzBPof5x2L/3C9CSz9H3rMQt/yWA77eKwWsDl+RHGuurKZz0/vQct45OZiaXa0JtSZo4zEmj/Rt
|
||||
pbN/vOSIUZ8L0eXqQckS7UNkZfKThPdD3xP541uIpHebeIOwVpueV9FTv2FsVB8/ZfscenCPUYST
|
||||
p2AGwtwNDEj1ycWHwOpTOuj8Bb7qu4LtyXsFlG2tEpYscyVHxaXOUvrnAaaXR01wRnC62rtklTa+
|
||||
hR1hGdI581Ifci2TTNLkKcH8uJ1bBKvHk+SPqA8oLeMc6L6h4OTerRUlju8iIn0+5KkQyVm3+waR
|
||||
LVv48PMn9kfNhsGS7DF2A+/P+6Dn6fuJ9xqxmupvasKyKsNpOZtnjQaXVw43vj7B6kaccfN34GGI
|
||||
NXw5LmxFJLDm6E7BBePzxwros/Yn1LRiSix/jyk7MRIPzVGzscmVWspl+6aAnkk6rL9V2elEET+A
|
||||
wWkFUTAbAbI2UQdTLmwnfsOrmR9oAn5+1MZXNYrIZwbnqSqwBaq4GqD8SX7xxNiBI/3hF6yk4U4y
|
||||
G52BUAdNjN479eEtHybu1+SdXODiFxYxjF2bzk3UxaAy44hc47HqOU6Q9rC7hA/y03NLt+/sP3h0
|
||||
P9xNSoB/lmHER8MEnsvNmdeDPv38S2IEO5XysGFruOHHJLEO2087f+mQJl9sfCx0Lp2E9mVLDfz8
|
||||
H3z9+Y+92T5J0N1TjRejdwKb2e+wB0eZCoo+MPBz3j9JnD4qOvO2HyI218epZ773dGaZvSVKTvbE
|
||||
zmRdnMU2YxNsfi22FokJ6DU/1jDx4yOJDFF0Vv0hJbBdbwdyXIoo5cluWiHiISLKs/s4czHd9/B0
|
||||
OjPeromPlaDloICptzJTtfEvfqsvUHbCK85pvNPWTI8hXKAMiBJBvRLq3TpA53jUsXWm34rW5ZWF
|
||||
9sFsiCwUFqDrTsvBhic/Pzcgh1PRIl5/etNSlhdnLcx1QMHBabB5CWQgiPpeguvudfeks9CDGV01
|
||||
CRJhrxDsxGGwgK8yo9N95TwmONvpli8xOuFBxUotrikVTkYsbviDbZsw2urm5A0vjnojxqbvOYja
|
||||
C9TOk7Pxt5VOh+fDhrITXYlyMPlgIl52gT7lLwTv/XMw9zl0oV0HKsafMtJ6rRH3YNPHHkpz0SEO
|
||||
c6rBFl8PHm4inU971UM/v3njc/1YKr0PkuQBMN78ofVjehbIh0KdOss79/RCwgeaUWdgIypwRa/D
|
||||
Xv35H8T/HCetu+elBze/yqNB+OxX7aaG6B4OAlZrq3MWOC0s2vQrPi4Fn071/ljCULAhdkKxC4Tj
|
||||
3Vp//yd4ifV0OUlaKL5cZ/akMI77OWsECAfxGk/DdQrB/LAqV9rwHpt6V1fLK899wIZPcfps/IGA
|
||||
t9FC/eN+SSSKVT/sv6MJvOvjgI/bfRluJcx/+OsJwuVQkW1/8OcnRZ/hqpEhGAawOMmTyGe3SJfY
|
||||
st9wjdGM5UEBThMOng02PMHpeIrTdZ1SCT7vuYCtpf5Um1/jwWU3lfiw3yf9eH5KJuSr73d68c8d
|
||||
HZ5CfIGJ2EdeluC6H+7+bpB0r9thNb9qwdLg0Yab3+S9SLMC2j3F5Fff8LFdOUCYizTDyzv4bHpP
|
||||
6ac6eXfwgFh92vCkWhRNVCWPNyti2/rH+cMHNj1JsNoFlHuN7gB/eu6w6c25a9Y3PPUePzEgrau5
|
||||
H8sWfs+vkiiS0zgr2U3zD3+J+wVawLMiq4p5qitb/jt0nC+6h+BYnT1287/G5XhioWDjCza+Xgfo
|
||||
O+ImOEGXkOweukDoxFwH0XNRsU+Pbb/A8wMC9TsU5K64VBuDR79CY/QCchIPLaBRtbTIL+Mz9oJz
|
||||
l87v0OrgtH9Zf/g0a+CDBJ4NPhNN2ec9laPXgPKhVLE5JdeKX0x5RUNfM8S5srrDea1Xw+FCCXFD
|
||||
CrRe5NUYNpa/89Ys+VYziBMXuJ5oEWVcdsEqioccNP1RmVB9P1V00BkfIvKZsDKadUA2fQ/XhcrE
|
||||
PJNDwNvzOwTIJQO2ETvR9YVYH8nXu0WwA0+UZRIA4bQcpenr9i9tuJvHHAYnk8XWk+oOnYzWRvVR
|
||||
OmI1HL7OWl4SHsTCqnprBXhtBfPK/PETVLK8gxXoTQt/9ce1rUOwvtxuhTrfdjgUOSXd+jUR4n1o
|
||||
kNvVYPqhOQAIG9ioRFWYkfbV1JVQNdl8i9/ozBoBJpS6XvLKPnlr9BBHe9hO/dMT/UNTzbS7t+Cn
|
||||
f8yBVakwn+UL0voZYVUyBNpMT52Hm77CrtAjbfzohiotXS0RXSqkYNz8Ydg4TE0co7lWa/SwHrAX
|
||||
fEDOF8WmA2aphGqp/GAVnYyK/eknTG4zdrf+Bx837RvWt64izidZKpKhtIOWjn1vTU9cvw4OKP74
|
||||
G27y9SqujqvyT3ycnM+C+aenLkXDTEAzW0B//mckqQGxxPuQbniXw7NPD9P+ZlT93BwoA5Rl4rFN
|
||||
Mgp+fgtqC2AS75kuwXjlpQhKB7qfaIwkrb37wgCUnKWbuXTUtn5OCX563rt7WbVemb6Al0cee4Ji
|
||||
zXRuojL+g/+O63zBuvVv4MjnPTGKwk6XJCgH4BhZNoHhwlS02KEYZucu8sQDklN+5y8t2urptLjE
|
||||
p+MLExaQUztjL3T1YN09CxMpjOYRZ/emv35E9/N7sBeNM/0mQTdB8WR+seXga7pgoy/h7/w3Pkzn
|
||||
PNQiKO/PNn7gRAl+/BSuzXTC0fgEdG57aw+as4mx7J/NXiidaw09XTbxTx99Gc/rwOPb9eTg3phf
|
||||
P6aFy90WPenD3qtl67cARTR1bJH7uZ/rZ7GiD7u+iMWhD5imW9BB/l7zxFJq4AxN1CVQi/rrNOtp
|
||||
nC7P1/AGSuFzePOHqzk9f99w81vxIXqfKf1IdvvTo+QEKxyspfSB0OpmER8SsDrLvm5l2Do0m3a+
|
||||
eXXW2pxldLw7hDjbeQ9bPwKW7TPHbrC8Kirydgw7WUqJcUw9IJCR+n/8Gquw39Xy45PdyaNbf0yv
|
||||
eNeWINSDucUGIUK6/PrV/bd4TWCrH6yUFgXqtM8O6+cdAtMtsHW4a5OAWN6t0pYqPUAoNUzk9bJS
|
||||
/vRVDiRDD6dff41w4/wGh3Z4ELvwTMpLvSr9/GmibPySpgstkYZcxoNKYlLaRUCFW78HywNk6fLh
|
||||
8lL6Puob1hP1E7TNATDQGTST2JE3ONRIPy6MmF3mwQ0vh7h9F1B+zBGJk5et8Y3bmpDgffTzryjX
|
||||
vp0WTiz3msRVJ2CuVWcPHGXhyeH2dCrOlDwVdIXJEwdNu75Tzxcf7YziTDJ2bqu1Fx4Q/v2bCvjP
|
||||
f/311//4TRjUbZZ/tsGAMV/Gf//XqMC/hX8PdfL5/BlDmIakyP/+539PIPz97dv6O/7PsX3nzfD3
|
||||
P38Jf0YN/h7bMfn8P4//tb3oP//1vwAAAP//AwDPjjDU3iAAAA==
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 93c2407849cb943e-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 16:56:39 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=iRcThWdZ.NO0HPhZg.pUV1AiG0u.0Dkd58N9HGucKdQ-1746636999-1.0.1.1-Cswtia9bUNC0npExHV2GcZLT2MVo6tEQbFU_dsKpjNN5R3s37B6JGWTE1IIZV9V0UGLhiy04og474anpJW4c6yLw0.9q5F4MPcxtAOjwBvo;
|
||||
path=/; expires=Wed, 07-May-25 17:26:39 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=rvDDZbBWaissP0luvtyuyyAWcPx3AiaoZS9LkAuK4sM-1746636999152-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '116'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-6b78fbf94c-z6prb
|
||||
x-envoy-upstream-service-time:
|
||||
- '123'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999996'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_3f67e7a1b90d845c25e9cef31147aba0
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
I have access to knowledge sources\nYour personal goal is: Provide information
|
||||
based on knowledge sources\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: What is the capital
|
||||
of France?\n\nThis is the expected criteria for your final answer: The capital
|
||||
of France is Paris.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
|
||||
"model": "gpt-4", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '911'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xSwW7bMAy9+ysIXXpJgmbN0iW3DEPQAtswDBt2WAuDlWhbjSxpEt00KPLvg+Uk
|
||||
drYeetGBj3x6fHwvGYDQSixByApZ1t6MP/6UX75vV82n7Yf17uvKX4dVuXn/Sxc3n2+uxKidcA+P
|
||||
JPk4NZGu9oZYO9vBMhAytazT69l8fjVfLBYJqJ0i046Vnsez8eV8eiCUldOSoljC7wwA4CW9rTar
|
||||
6Fks4XJ0rNQUI5YklqcmABGcaSsCY9SR0bIY9aB0lskmubcg0VrH4IN70ooA7Q4cVxRA28KFGtst
|
||||
ACNwRcAYNyANYTA7iIxMXZ2ePUkmBYW2aABt3FIAtAqUo2gvGAL9aXQgQKV0y4hmyD+BW4iVa4w6
|
||||
6ehoUfKR7cCgJnf2zq7TP6uELOFHRSDRa0YDroB1QCsJdIRvGHScDFcPVDQRW8ttY8wASC4kMcn0
|
||||
+wOyP9lsXOmDe4j/jIpCWx2rPBBGZ1tLIzsvErrPAO7TOZuzCwkfXO05Z7eh9N10vuj4RJ+cHp1N
|
||||
DyA7RtPX300PITjnyxUxahMHgRASZUWqH+3Tg43SbgBkg63/V/Mad7e5tuVb6HtASvJMKveBlJbn
|
||||
G/dtgR5Tsl5vO7mcBItI4UlLyllTaC+hqMDGdNEXcReZ6rzQtqTgg075by+Z7bO/AAAA//8DAI9H
|
||||
rN32AwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93c2407d5cfaface-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 07 May 2025 16:56:41 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=g.ZMxB8fB7ZSkwD6w5ws93pGGw6nEi3uFVh.JDp2OOU-1746637001-1.0.1.1-59mPPW0bDWyD6ngFx6m9LdHurrdN9Kaem.eFcKAwWp_H_4kabp2CzCRiEaW2QhRYYPWE6fZPgqWU8amQtZqpRZHtEjTyoL8t6UtyzyTCoAQ;
|
||||
path=/; expires=Wed, 07-May-25 17:26:41 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=WOtfHIloFTPkupN1gC2z.3cExzObgfz.p4fXYpCK0aI-1746637001631-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '2084'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '2086'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '1000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '999805'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 11ms
|
||||
x-request-id:
|
||||
- req_1ff0f0c079f8e7f5feb17fe762b5e40a
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -16,7 +16,7 @@ interactions:
|
||||
answer MUST contain all the information requested in the following format: {\n \"summary\":
|
||||
str,\n \"confidence\": int\n}\n\nIMPORTANT: Ensure the final output does not
|
||||
include any code block markers like ```json or ```python."}, {"role": "user",
|
||||
"content": "What is the population of Tokyo? Return your strucutred output in
|
||||
"content": "What is the population of Tokyo? Return your structured output in
|
||||
JSON format with the following fields: summary, confidence"}], "model": "gpt-4o-mini",
|
||||
"stop": []}'
|
||||
headers:
|
||||
|
||||
3321
tests/cassettes/test_multiple_docling_sources.yaml
Normal file
121
tests/cassettes/test_task_interpolation_with_hyphens.yaml
Normal file
@@ -0,0 +1,121 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
|
||||
an expert researcher, specialized in technology, software engineering, AI and
|
||||
startups. You work as a freelancer and is now working on doing research and
|
||||
analysis for a new customer.\nYour personal goal is: be an assistant that responds
|
||||
with say hello world\nTo give my best complete final answer to the task respond
|
||||
using the exact following format:\n\nThought: I now can give a great answer\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described.\n\nI MUST use these formats, my job depends on
|
||||
it!"}, {"role": "user", "content": "\nCurrent Task: be an assistant that responds
|
||||
with say hello world\n\nThis is the expected criteria for your final answer:
|
||||
The response should be addressing: say hello world\nyou MUST return the actual
|
||||
complete content as the final answer, not a summary.\n\nBegin! This is VERY
|
||||
important to you, use the tools available and give your best Final Answer, your
|
||||
job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1108'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xSTW/UMBC951cMPicoScMu3RuIooUDcOOrVeS1J4mp4zG2sy2q9r9XTrqbtBSJ
|
||||
iyX7zXt+b2buEgCmJNsAEx0Porc6e/vt4nf3xVxweVZ+3v/Q17fF9+pjs92+O//0iqWRQbtfKMKR
|
||||
9VJQbzUGRWaChUMeMKoW62pdropVfjYCPUnUkdbakFWU9cqorMzLKsvXWfH6gd2REujZBn4mAAB3
|
||||
4xl9Gom3bAN5enzp0XveItucigCYIx1fGPde+cBNYOkMCjIBzWj9Axi6AcENtGqPwKGNtoEbf4MO
|
||||
4NK8V4ZreDPeN7BFrSmFr+S0fLGUdNgMnsdYZtB6AXBjKPDYljHM1QNyONnX1FpHO/+EyhpllO9q
|
||||
h9yTiVZ9IMtG9JAAXI1tGh4lZ9ZRb0Md6BrH78p8NemxeTozWhzBQIHrBass02f0aomBK+0XjWaC
|
||||
iw7lTJ2nwgepaAEki9R/u3lOe0quTPs/8jMgBNqAsrYOpRKPE89lDuPy/qvs1OXRMPPo9kpgHRS6
|
||||
OAmJDR/0tFLM//EB+7pRpkVnnZr2qrH1utjl5bo6bzhLDsk9AAAA//8DAAxaM/dlAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 93fdd19cdbfb6428-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 14 May 2025 22:26:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=eCtOgOCsKt_ybdNPdtFAocCmuQbNltR52chaHVe7Y_Q-1747261603-1.0.1.1-827eoA7wHS5SOkTsTqoMq6OSioi0VznQBVjvmabNSVX1bf5PpWZvblw58iggZ_wyKDB0EuVoeLKFspgBJa0kuQYR17hu43Y2C14sgdvOXIE;
|
||||
path=/; expires=Wed, 14-May-25 22:56:43 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=QUa5MnypdaVxO826bwdQaN4G6CBEV8HYVV.7OLF.qvQ-1747261603742-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '307'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-envoy-upstream-service-time:
|
||||
- '309'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999757'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_61d9066e0258b7095517f9f9c01d38e9
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
221
tests/cassettes/test_telemetry_fails_due_connect_timeout.yaml
Normal file
@@ -18,6 +18,7 @@ from crewai.cli.cli import (
|
||||
train,
|
||||
version,
|
||||
)
|
||||
from crewai.crew import Crew
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
@@ -55,81 +56,143 @@ def test_train_invalid_string_iterations(train_crew, runner):
|
||||
)
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_all_memories(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
@pytest.fixture
|
||||
def mock_crew():
|
||||
_mock = mock.Mock(spec=Crew, name="test_crew")
|
||||
_mock.name = "test_crew"
|
||||
return _mock
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_get_crews(mock_crew):
|
||||
with mock.patch(
|
||||
"crewai.cli.reset_memories_command.get_crews", return_value=[mock_crew]
|
||||
) as mock_get_crew:
|
||||
yield mock_get_crew
|
||||
|
||||
|
||||
def test_reset_all_memories(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["-a"])
|
||||
|
||||
mock_crew.reset_memories.assert_called_once_with(command_type="all")
|
||||
assert result.output == "All memories have been reset.\n"
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="all")
|
||||
assert (
|
||||
f"[Crew ({crew.name})] Reset memories command has been completed."
|
||||
in result.output
|
||||
)
|
||||
call_count += 1
|
||||
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_short_term_memories(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
def test_reset_short_term_memories(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["-s"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="short")
|
||||
assert (
|
||||
f"[Crew ({crew.name})] Short term memory has been reset." in result.output
|
||||
)
|
||||
call_count += 1
|
||||
|
||||
mock_crew.reset_memories.assert_called_once_with(command_type="short")
|
||||
assert result.output == "Short term memory has been reset.\n"
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_entity_memories(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
def test_reset_entity_memories(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["-e"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="entity")
|
||||
assert f"[Crew ({crew.name})] Entity memory has been reset." in result.output
|
||||
call_count += 1
|
||||
|
||||
mock_crew.reset_memories.assert_called_once_with(command_type="entity")
|
||||
assert result.output == "Entity memory has been reset.\n"
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_long_term_memories(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
def test_reset_long_term_memories(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["-l"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="long")
|
||||
assert f"[Crew ({crew.name})] Long term memory has been reset." in result.output
|
||||
call_count += 1
|
||||
|
||||
mock_crew.reset_memories.assert_called_once_with(command_type="long")
|
||||
assert result.output == "Long term memory has been reset.\n"
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_kickoff_outputs(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
def test_reset_kickoff_outputs(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["-k"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="kickoff_outputs")
|
||||
assert (
|
||||
f"[Crew ({crew.name})] Latest Kickoff outputs stored has been reset."
|
||||
in result.output
|
||||
)
|
||||
call_count += 1
|
||||
|
||||
mock_crew.reset_memories.assert_called_once_with(command_type="kickoff_outputs")
|
||||
assert result.output == "Latest Kickoff outputs stored has been reset.\n"
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_multiple_memory_flags(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
def test_reset_multiple_memory_flags(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["-s", "-l"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_has_calls(
|
||||
[mock.call(command_type="long"), mock.call(command_type="short")]
|
||||
)
|
||||
assert (
|
||||
f"[Crew ({crew.name})] Long term memory has been reset.\n"
|
||||
f"[Crew ({crew.name})] Short term memory has been reset.\n" in result.output
|
||||
)
|
||||
call_count += 1
|
||||
|
||||
# Check that reset_memories was called twice with the correct arguments
|
||||
assert mock_crew.reset_memories.call_count == 2
|
||||
mock_crew.reset_memories.assert_has_calls(
|
||||
[mock.call(command_type="long"), mock.call(command_type="short")]
|
||||
)
|
||||
assert (
|
||||
result.output
|
||||
== "Long term memory has been reset.\nShort term memory has been reset.\n"
|
||||
)
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
@mock.patch("crewai.cli.reset_memories_command.get_crew")
|
||||
def test_reset_knowledge(mock_get_crew, runner):
|
||||
mock_crew = mock.Mock()
|
||||
mock_get_crew.return_value = mock_crew
|
||||
def test_reset_knowledge(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["--knowledge"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="knowledge")
|
||||
assert f"[Crew ({crew.name})] Knowledge has been reset." in result.output
|
||||
call_count += 1
|
||||
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
def test_reset_agent_knowledge(mock_get_crews, runner):
|
||||
result = runner.invoke(reset_memories, ["--agent-knowledge"])
|
||||
call_count = 0
|
||||
for crew in mock_get_crews.return_value:
|
||||
crew.reset_memories.assert_called_once_with(command_type="agent_knowledge")
|
||||
assert f"[Crew ({crew.name})] Agents knowledge has been reset." in result.output
|
||||
call_count += 1
|
||||
|
||||
assert call_count == 1, "reset_memories should have been called once"
|
||||
|
||||
|
||||
def test_reset_memory_from_many_crews(mock_get_crews, runner):
|
||||
crews = []
|
||||
for crew_id in ["id-1234", "id-5678"]:
|
||||
mock_crew = mock.Mock(spec=Crew)
|
||||
mock_crew.name = None
|
||||
mock_crew.id = crew_id
|
||||
crews.append(mock_crew)
|
||||
|
||||
mock_get_crews.return_value = crews
|
||||
|
||||
# Run the command
|
||||
result = runner.invoke(reset_memories, ["--knowledge"])
|
||||
|
||||
mock_crew.reset_memories.assert_called_once_with(command_type="knowledge")
|
||||
assert result.output == "Knowledge has been reset.\n"
|
||||
call_count = 0
|
||||
for crew in crews:
|
||||
call_count += 1
|
||||
crew.reset_memories.assert_called_once_with(command_type="knowledge")
|
||||
assert f"[Crew ({crew.id})] Knowledge has been reset." in result.output
|
||||
|
||||
assert call_count == 2, "reset_memories should have been called twice"
|
||||
|
||||
|
||||
def test_reset_no_memory_flags(runner):
|
||||
|
||||
@@ -3,12 +3,13 @@ import tempfile
|
||||
import unittest
|
||||
import unittest.mock
|
||||
from contextlib import contextmanager
|
||||
from io import StringIO
|
||||
from unittest import mock
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
from pytest import raises
|
||||
|
||||
from crewai.cli.authentication.utils import TokenManager
|
||||
from crewai.cli.tools.main import ToolCommand
|
||||
|
||||
|
||||
@@ -23,17 +24,20 @@ def in_temp_dir():
|
||||
os.chdir(original_dir)
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.subprocess.run")
|
||||
def test_create_success(mock_subprocess):
|
||||
with in_temp_dir():
|
||||
tool_command = ToolCommand()
|
||||
@pytest.fixture
|
||||
def tool_command():
|
||||
TokenManager().save_tokens("test-token", 36000)
|
||||
tool_command = ToolCommand()
|
||||
with patch.object(tool_command, "login"):
|
||||
yield tool_command
|
||||
|
||||
with (
|
||||
patch.object(tool_command, "login") as mock_login,
|
||||
patch("sys.stdout", new=StringIO()) as fake_out,
|
||||
):
|
||||
tool_command.create("test-tool")
|
||||
output = fake_out.getvalue()
|
||||
|
||||
@patch("crewai.cli.tools.main.subprocess.run")
|
||||
def test_create_success(mock_subprocess, capsys, tool_command):
|
||||
with in_temp_dir():
|
||||
tool_command.create("test-tool")
|
||||
output = capsys.readouterr().out
|
||||
assert "Creating custom tool test_tool..." in output
|
||||
|
||||
assert os.path.isdir("test_tool")
|
||||
assert os.path.isfile(os.path.join("test_tool", "README.md"))
|
||||
@@ -47,15 +51,12 @@ def test_create_success(mock_subprocess):
|
||||
content = f.read()
|
||||
assert "class TestTool" in content
|
||||
|
||||
mock_login.assert_called_once()
|
||||
mock_subprocess.assert_called_once_with(["git", "init"], check=True)
|
||||
|
||||
assert "Creating custom tool test_tool..." in output
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.subprocess.run")
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
|
||||
def test_install_success(mock_get, mock_subprocess_run):
|
||||
def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
|
||||
mock_get_response = MagicMock()
|
||||
mock_get_response.status_code = 200
|
||||
mock_get_response.json.return_value = {
|
||||
@@ -65,11 +66,9 @@ def test_install_success(mock_get, mock_subprocess_run):
|
||||
mock_get.return_value = mock_get_response
|
||||
mock_subprocess_run.return_value = MagicMock(stderr=None)
|
||||
|
||||
tool_command = ToolCommand()
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
tool_command.install("sample-tool")
|
||||
output = fake_out.getvalue()
|
||||
tool_command.install("sample-tool")
|
||||
output = capsys.readouterr().out
|
||||
assert "Successfully installed sample-tool" in output
|
||||
|
||||
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
|
||||
mock_subprocess_run.assert_any_call(
|
||||
@@ -86,54 +85,42 @@ def test_install_success(mock_get, mock_subprocess_run):
|
||||
env=unittest.mock.ANY,
|
||||
)
|
||||
|
||||
assert "Successfully installed sample-tool" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
|
||||
def test_install_tool_not_found(mock_get):
|
||||
def test_install_tool_not_found(mock_get, capsys, tool_command):
|
||||
mock_get_response = MagicMock()
|
||||
mock_get_response.status_code = 404
|
||||
mock_get.return_value = mock_get_response
|
||||
|
||||
tool_command = ToolCommand()
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
try:
|
||||
tool_command.install("non-existent-tool")
|
||||
except SystemExit:
|
||||
pass
|
||||
output = fake_out.getvalue()
|
||||
with raises(SystemExit):
|
||||
tool_command.install("non-existent-tool")
|
||||
output = capsys.readouterr().out
|
||||
assert "No tool found with this name" in output
|
||||
|
||||
mock_get.assert_called_once_with("non-existent-tool")
|
||||
assert "No tool found with this name" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
|
||||
def test_install_api_error(mock_get):
|
||||
def test_install_api_error(mock_get, capsys, tool_command):
|
||||
mock_get_response = MagicMock()
|
||||
mock_get_response.status_code = 500
|
||||
mock_get.return_value = mock_get_response
|
||||
|
||||
tool_command = ToolCommand()
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
try:
|
||||
tool_command.install("error-tool")
|
||||
except SystemExit:
|
||||
pass
|
||||
output = fake_out.getvalue()
|
||||
with raises(SystemExit):
|
||||
tool_command.install("error-tool")
|
||||
output = capsys.readouterr().out
|
||||
assert "Failed to get tool details" in output
|
||||
|
||||
mock_get.assert_called_once_with("error-tool")
|
||||
assert "Failed to get tool details" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
|
||||
def test_publish_when_not_in_sync(mock_is_synced):
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out, raises(SystemExit):
|
||||
tool_command = ToolCommand()
|
||||
def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
|
||||
with raises(SystemExit):
|
||||
tool_command.publish(is_public=True)
|
||||
|
||||
assert "Local changes need to be resolved before publishing" in fake_out.getvalue()
|
||||
output = capsys.readouterr().out
|
||||
assert "Local changes need to be resolved before publishing" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
|
||||
@@ -157,13 +144,13 @@ def test_publish_when_not_in_sync_and_force(
|
||||
mock_get_project_description,
|
||||
mock_get_project_version,
|
||||
mock_get_project_name,
|
||||
tool_command,
|
||||
):
|
||||
mock_publish_response = MagicMock()
|
||||
mock_publish_response.status_code = 200
|
||||
mock_publish_response.json.return_value = {"handle": "sample-tool"}
|
||||
mock_publish.return_value = mock_publish_response
|
||||
|
||||
tool_command = ToolCommand()
|
||||
tool_command.publish(is_public=True, force=True)
|
||||
|
||||
mock_get_project_name.assert_called_with(require=True)
|
||||
@@ -205,13 +192,13 @@ def test_publish_success(
|
||||
mock_get_project_description,
|
||||
mock_get_project_version,
|
||||
mock_get_project_name,
|
||||
tool_command,
|
||||
):
|
||||
mock_publish_response = MagicMock()
|
||||
mock_publish_response.status_code = 200
|
||||
mock_publish_response.json.return_value = {"handle": "sample-tool"}
|
||||
mock_publish.return_value = mock_publish_response
|
||||
|
||||
tool_command = ToolCommand()
|
||||
tool_command.publish(is_public=True)
|
||||
|
||||
mock_get_project_name.assert_called_with(require=True)
|
||||
@@ -251,25 +238,22 @@ def test_publish_failure(
|
||||
mock_get_project_description,
|
||||
mock_get_project_version,
|
||||
mock_get_project_name,
|
||||
capsys,
|
||||
tool_command,
|
||||
):
|
||||
mock_publish_response = MagicMock()
|
||||
mock_publish_response.status_code = 422
|
||||
mock_publish_response.json.return_value = {"name": ["is already taken"]}
|
||||
mock_publish.return_value = mock_publish_response
|
||||
|
||||
tool_command = ToolCommand()
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
try:
|
||||
tool_command.publish(is_public=True)
|
||||
except SystemExit:
|
||||
pass
|
||||
output = fake_out.getvalue()
|
||||
|
||||
mock_publish.assert_called_once()
|
||||
with raises(SystemExit):
|
||||
tool_command.publish(is_public=True)
|
||||
output = capsys.readouterr().out
|
||||
assert "Failed to complete operation" in output
|
||||
assert "Name is already taken" in output
|
||||
|
||||
mock_publish.assert_called_once()
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
|
||||
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
|
||||
@@ -290,6 +274,8 @@ def test_publish_api_error(
|
||||
mock_get_project_description,
|
||||
mock_get_project_version,
|
||||
mock_get_project_name,
|
||||
capsys,
|
||||
tool_command,
|
||||
):
|
||||
mock_response = MagicMock()
|
||||
mock_response.status_code = 500
|
||||
@@ -297,14 +283,9 @@ def test_publish_api_error(
|
||||
mock_response.ok = False
|
||||
mock_publish.return_value = mock_response
|
||||
|
||||
tool_command = ToolCommand()
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
try:
|
||||
tool_command.publish(is_public=True)
|
||||
except SystemExit:
|
||||
pass
|
||||
output = fake_out.getvalue()
|
||||
with raises(SystemExit):
|
||||
tool_command.publish(is_public=True)
|
||||
output = capsys.readouterr().out
|
||||
assert "Request to Enterprise API failed" in output
|
||||
|
||||
mock_publish.assert_called_once()
|
||||
assert "Request to Enterprise API failed" in output
|
||||
|
||||
@@ -2,21 +2,19 @@
|
||||
|
||||
import hashlib
|
||||
import json
|
||||
import os
|
||||
import tempfile
|
||||
from concurrent.futures import Future
|
||||
from unittest import mock
|
||||
from unittest.mock import MagicMock, patch
|
||||
from unittest.mock import ANY, MagicMock, patch
|
||||
|
||||
import pydantic_core
|
||||
import pytest
|
||||
|
||||
from crewai.agent import Agent
|
||||
from crewai.agents import CacheHandler
|
||||
from crewai.agents.cache import CacheHandler
|
||||
from crewai.agents.crew_agent_executor import CrewAgentExecutor
|
||||
from crewai.crew import Crew
|
||||
from crewai.crews.crew_output import CrewOutput
|
||||
from crewai.flow import Flow, start
|
||||
from crewai.knowledge.knowledge import Knowledge
|
||||
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
|
||||
from crewai.llm import LLM
|
||||
from crewai.memory.contextual.contextual_memory import ContextualMemory
|
||||
@@ -42,29 +40,38 @@ from crewai.utilities.events.event_listener import EventListener
|
||||
from crewai.utilities.rpm_controller import RPMController
|
||||
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
|
||||
|
||||
ceo = Agent(
|
||||
role="CEO",
|
||||
goal="Make sure the writers in your company produce amazing content.",
|
||||
backstory="You're an long time CEO of a content creation agency with a Senior Writer on the team. You're now working on a new project and want to make sure the content produced is amazing.",
|
||||
allow_delegation=True,
|
||||
)
|
||||
|
||||
researcher = Agent(
|
||||
role="Researcher",
|
||||
goal="Make the best research and analysis on content about AI and AI agents",
|
||||
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
writer = Agent(
|
||||
role="Senior Writer",
|
||||
goal="Write the best content about AI and AI agents.",
|
||||
backstory="You're a senior writer, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on writing content for a new customer.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
@pytest.fixture
|
||||
def ceo():
|
||||
return Agent(
|
||||
role="CEO",
|
||||
goal="Make sure the writers in your company produce amazing content.",
|
||||
backstory="You're an long time CEO of a content creation agency with a Senior Writer on the team. You're now working on a new project and want to make sure the content produced is amazing.",
|
||||
allow_delegation=True,
|
||||
)
|
||||
|
||||
|
||||
def test_crew_with_only_conditional_tasks_raises_error():
|
||||
@pytest.fixture
|
||||
def researcher():
|
||||
return Agent(
|
||||
role="Researcher",
|
||||
goal="Make the best research and analysis on content about AI and AI agents",
|
||||
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def writer():
|
||||
return Agent(
|
||||
role="Senior Writer",
|
||||
goal="Write the best content about AI and AI agents.",
|
||||
backstory="You're a senior writer, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on writing content for a new customer.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
|
||||
def test_crew_with_only_conditional_tasks_raises_error(researcher):
|
||||
"""Test that creating a crew with only conditional tasks raises an error."""
|
||||
|
||||
def condition_func(task_output: TaskOutput) -> bool:
|
||||
@@ -146,7 +153,9 @@ def test_crew_config_conditional_requirement():
|
||||
]
|
||||
|
||||
|
||||
def test_async_task_cannot_include_sequential_async_tasks_in_context():
|
||||
def test_async_task_cannot_include_sequential_async_tasks_in_context(
|
||||
researcher, writer
|
||||
):
|
||||
task1 = Task(
|
||||
description="Task 1",
|
||||
async_execution=True,
|
||||
@@ -194,7 +203,7 @@ def test_async_task_cannot_include_sequential_async_tasks_in_context():
|
||||
pytest.fail("Unexpected ValidationError raised")
|
||||
|
||||
|
||||
def test_context_no_future_tasks():
|
||||
def test_context_no_future_tasks(researcher, writer):
|
||||
task2 = Task(
|
||||
description="Task 2",
|
||||
expected_output="output",
|
||||
@@ -258,7 +267,7 @@ def test_crew_config_with_wrong_keys():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_creation():
|
||||
def test_crew_creation(researcher, writer):
|
||||
tasks = [
|
||||
Task(
|
||||
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
|
||||
@@ -290,7 +299,7 @@ def test_crew_creation():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_sync_task_execution():
|
||||
def test_sync_task_execution(researcher, writer):
|
||||
from unittest.mock import patch
|
||||
|
||||
tasks = [
|
||||
@@ -331,7 +340,7 @@ def test_sync_task_execution():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_process():
|
||||
def test_hierarchical_process(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -352,7 +361,7 @@ def test_hierarchical_process():
|
||||
)
|
||||
|
||||
|
||||
def test_manager_llm_requirement_for_hierarchical_process():
|
||||
def test_manager_llm_requirement_for_hierarchical_process(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -367,7 +376,7 @@ def test_manager_llm_requirement_for_hierarchical_process():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_manager_agent_delegating_to_assigned_task_agent():
|
||||
def test_manager_agent_delegating_to_assigned_task_agent(researcher, writer):
|
||||
"""
|
||||
Test that the manager agent delegates to the assigned task agent.
|
||||
"""
|
||||
@@ -419,7 +428,7 @@ def test_manager_agent_delegating_to_assigned_task_agent():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_manager_agent_delegating_to_all_agents():
|
||||
def test_manager_agent_delegating_to_all_agents(researcher, writer):
|
||||
"""
|
||||
Test that the manager agent delegates to all agents when none are specified.
|
||||
"""
|
||||
@@ -529,7 +538,7 @@ def test_manager_agent_delegates_with_varied_role_cases():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_delegating_agents():
|
||||
def test_crew_with_delegating_agents(ceo, writer):
|
||||
tasks = [
|
||||
Task(
|
||||
description="Produce and amazing 1 paragraph draft of an article about AI Agents.",
|
||||
@@ -553,7 +562,7 @@ def test_crew_with_delegating_agents():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_delegating_agents_should_not_override_task_tools():
|
||||
def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer):
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
@@ -615,7 +624,7 @@ def test_crew_with_delegating_agents_should_not_override_task_tools():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_delegating_agents_should_not_override_agent_tools():
|
||||
def test_crew_with_delegating_agents_should_not_override_agent_tools(ceo, writer):
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
@@ -679,7 +688,7 @@ def test_crew_with_delegating_agents_should_not_override_agent_tools():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_tools_override_agent_tools():
|
||||
def test_task_tools_override_agent_tools(researcher):
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
@@ -734,7 +743,7 @@ def test_task_tools_override_agent_tools():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_tools_override_agent_tools_with_allow_delegation():
|
||||
def test_task_tools_override_agent_tools_with_allow_delegation(researcher, writer):
|
||||
"""
|
||||
Test that task tools override agent tools while preserving delegation tools when allow_delegation=True
|
||||
"""
|
||||
@@ -817,7 +826,7 @@ def test_task_tools_override_agent_tools_with_allow_delegation():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_verbose_output(capsys):
|
||||
def test_crew_verbose_output(researcher, writer, capsys):
|
||||
tasks = [
|
||||
Task(
|
||||
description="Research AI advancements.",
|
||||
@@ -877,7 +886,7 @@ def test_crew_verbose_output(capsys):
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_cache_hitting_between_agents():
|
||||
def test_cache_hitting_between_agents(researcher, writer, ceo):
|
||||
from unittest.mock import call, patch
|
||||
|
||||
from crewai.tools import tool
|
||||
@@ -1050,7 +1059,7 @@ def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_sequential_async_task_execution_completion():
|
||||
def test_sequential_async_task_execution_completion(researcher, writer):
|
||||
list_ideas = Task(
|
||||
description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
|
||||
expected_output="Bullet point list of 5 important events.",
|
||||
@@ -1204,7 +1213,7 @@ async def test_crew_async_kickoff():
|
||||
|
||||
@pytest.mark.asyncio
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
async def test_async_task_execution_call_count():
|
||||
async def test_async_task_execution_call_count(researcher, writer):
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
list_ideas = Task(
|
||||
@@ -1707,7 +1716,7 @@ def test_agents_do_not_get_delegation_tools_with_there_is_only_one_agent():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_sequential_crew_creation_tasks_without_agents():
|
||||
def test_sequential_crew_creation_tasks_without_agents(researcher):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -1757,7 +1766,7 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_crew_creation_tasks_with_agents():
|
||||
def test_hierarchical_crew_creation_tasks_with_agents(researcher, writer):
|
||||
"""
|
||||
Agents are not required for tasks in a hierarchical process but sometimes they are still added
|
||||
This test makes sure that the manager still delegates the task to the agent even if the agent is passed in the task
|
||||
@@ -1810,7 +1819,7 @@ def test_hierarchical_crew_creation_tasks_with_agents():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_crew_creation_tasks_with_async_execution():
|
||||
def test_hierarchical_crew_creation_tasks_with_async_execution(researcher, writer, ceo):
|
||||
"""
|
||||
Tests that async tasks in hierarchical crews are handled correctly with proper delegation tools
|
||||
"""
|
||||
@@ -1867,7 +1876,7 @@ def test_hierarchical_crew_creation_tasks_with_async_execution():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_crew_creation_tasks_with_sync_last():
|
||||
def test_hierarchical_crew_creation_tasks_with_sync_last(researcher, writer, ceo):
|
||||
"""
|
||||
Agents are not required for tasks in a hierarchical process but sometimes they are still added
|
||||
This test makes sure that the manager still delegates the task to the agent even if the agent is passed in the task
|
||||
@@ -2153,7 +2162,6 @@ def test_tools_with_custom_caching():
|
||||
with patch.object(
|
||||
CacheHandler, "add", wraps=crew._cache_handler.add
|
||||
) as add_to_cache:
|
||||
|
||||
result = crew.kickoff()
|
||||
|
||||
# Check that add_to_cache was called exactly twice
|
||||
@@ -2170,7 +2178,7 @@ def test_tools_with_custom_caching():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_conditional_task_uses_last_output():
|
||||
def test_conditional_task_uses_last_output(researcher, writer):
|
||||
"""Test that conditional tasks use the last task output for condition evaluation."""
|
||||
task1 = Task(
|
||||
description="First task",
|
||||
@@ -2244,7 +2252,7 @@ def test_conditional_task_uses_last_output():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_conditional_tasks_result_collection():
|
||||
def test_conditional_tasks_result_collection(researcher, writer):
|
||||
"""Test that task outputs are properly collected based on execution status."""
|
||||
task1 = Task(
|
||||
description="Normal task that always executes",
|
||||
@@ -2325,7 +2333,7 @@ def test_conditional_tasks_result_collection():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_multiple_conditional_tasks():
|
||||
def test_multiple_conditional_tasks(researcher, writer):
|
||||
"""Test that having multiple conditional tasks in sequence works correctly."""
|
||||
task1 = Task(
|
||||
description="Initial research task",
|
||||
@@ -2560,7 +2568,7 @@ def test_disabled_memory_using_contextual_memory():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_log_file_output(tmp_path):
|
||||
def test_crew_log_file_output(tmp_path, researcher):
|
||||
test_file = tmp_path / "logs.txt"
|
||||
tasks = [
|
||||
Task(
|
||||
@@ -2658,7 +2666,7 @@ def test_crew_output_file_validation_failures():
|
||||
Crew(agents=[agent], tasks=[task]).kickoff()
|
||||
|
||||
|
||||
def test_manager_agent():
|
||||
def test_manager_agent(researcher, writer):
|
||||
from unittest.mock import patch
|
||||
|
||||
task = Task(
|
||||
@@ -2696,7 +2704,7 @@ def test_manager_agent():
|
||||
mock_execute_sync.assert_called()
|
||||
|
||||
|
||||
def test_manager_agent_in_agents_raises_exception():
|
||||
def test_manager_agent_in_agents_raises_exception(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -2718,7 +2726,7 @@ def test_manager_agent_in_agents_raises_exception():
|
||||
)
|
||||
|
||||
|
||||
def test_manager_agent_with_tools_raises_exception():
|
||||
def test_manager_agent_with_tools_raises_exception(researcher, writer):
|
||||
from crewai.tools import tool
|
||||
|
||||
@tool
|
||||
@@ -2755,7 +2763,7 @@ def test_manager_agent_with_tools_raises_exception():
|
||||
@patch("crewai.crew.TaskEvaluator")
|
||||
@patch("crewai.crew.Crew.copy")
|
||||
def test_crew_train_success(
|
||||
copy_mock, task_evaluator, crew_training_handler, kickoff_mock
|
||||
copy_mock, task_evaluator, crew_training_handler, kickoff_mock, researcher, writer
|
||||
):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
@@ -2831,7 +2839,7 @@ def test_crew_train_success(
|
||||
assert isinstance(received_events[1], CrewTrainCompletedEvent)
|
||||
|
||||
|
||||
def test_crew_train_error():
|
||||
def test_crew_train_error(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -2850,7 +2858,7 @@ def test_crew_train_error():
|
||||
)
|
||||
|
||||
|
||||
def test__setup_for_training():
|
||||
def test__setup_for_training(researcher, writer):
|
||||
researcher.allow_delegation = True
|
||||
writer.allow_delegation = True
|
||||
agents = [researcher, writer]
|
||||
@@ -2881,7 +2889,7 @@ def test__setup_for_training():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_replay_feature():
|
||||
def test_replay_feature(researcher, writer):
|
||||
list_ideas = Task(
|
||||
description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
|
||||
expected_output="Bullet point list of 5 important events. No additional commentary.",
|
||||
@@ -2918,7 +2926,7 @@ def test_replay_feature():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_replay_error():
|
||||
def test_crew_replay_error(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -3130,6 +3138,30 @@ def test_replay_with_context():
|
||||
assert crew.tasks[1].context[0].output.raw == "context raw output"
|
||||
|
||||
|
||||
def test_replay_with_context_set_to_nullable():
|
||||
agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
|
||||
task1 = Task(
|
||||
description="Context Task", expected_output="Say Task Output", agent=agent
|
||||
)
|
||||
task2 = Task(
|
||||
description="Test Task", expected_output="Say Hi", agent=agent, context=[]
|
||||
)
|
||||
task3 = Task(
|
||||
description="Test Task 3", expected_output="Say Hi", agent=agent, context=None
|
||||
)
|
||||
|
||||
crew = Crew(agents=[agent], tasks=[task1, task2, task3], process=Process.sequential)
|
||||
with patch("crewai.task.Task.execute_sync") as mock_execute_task:
|
||||
mock_execute_task.return_value = TaskOutput(
|
||||
description="Test Task Output",
|
||||
raw="test raw output",
|
||||
agent="test_agent",
|
||||
)
|
||||
crew.kickoff()
|
||||
|
||||
mock_execute_task.assert_called_with(agent=ANY, context="", tools=ANY)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_replay_with_invalid_task_id():
|
||||
agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
|
||||
@@ -3314,7 +3346,7 @@ def test_replay_setup_context():
|
||||
assert crew.tasks[1].prompt_context == "context raw output"
|
||||
|
||||
|
||||
def test_key():
|
||||
def test_key(researcher, writer):
|
||||
tasks = [
|
||||
Task(
|
||||
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
|
||||
@@ -3383,7 +3415,9 @@ def test_key_with_interpolated_inputs():
|
||||
assert crew.key == curr_key
|
||||
|
||||
|
||||
def test_conditional_task_requirement_breaks_when_singular_conditional_task():
|
||||
def test_conditional_task_requirement_breaks_when_singular_conditional_task(
|
||||
researcher, writer
|
||||
):
|
||||
def condition_fn(output) -> bool:
|
||||
return output.raw.startswith("Andrew Ng has!!")
|
||||
|
||||
@@ -3401,7 +3435,7 @@ def test_conditional_task_requirement_breaks_when_singular_conditional_task():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_conditional_task_last_task_when_conditional_is_true():
|
||||
def test_conditional_task_last_task_when_conditional_is_true(researcher, writer):
|
||||
def condition_fn(output) -> bool:
|
||||
return True
|
||||
|
||||
@@ -3428,7 +3462,7 @@ def test_conditional_task_last_task_when_conditional_is_true():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_conditional_task_last_task_when_conditional_is_false():
|
||||
def test_conditional_task_last_task_when_conditional_is_false(researcher, writer):
|
||||
def condition_fn(output) -> bool:
|
||||
return False
|
||||
|
||||
@@ -3452,7 +3486,7 @@ def test_conditional_task_last_task_when_conditional_is_false():
|
||||
assert result.raw == "Hi"
|
||||
|
||||
|
||||
def test_conditional_task_requirement_breaks_when_task_async():
|
||||
def test_conditional_task_requirement_breaks_when_task_async(researcher, writer):
|
||||
def my_condition(context):
|
||||
return context.get("some_value") > 10
|
||||
|
||||
@@ -3477,7 +3511,7 @@ def test_conditional_task_requirement_breaks_when_task_async():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_conditional_should_skip():
|
||||
def test_conditional_should_skip(researcher, writer):
|
||||
task1 = Task(description="Return hello", expected_output="say hi", agent=researcher)
|
||||
|
||||
condition_mock = MagicMock(return_value=False)
|
||||
@@ -3509,7 +3543,7 @@ def test_conditional_should_skip():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_conditional_should_execute():
|
||||
def test_conditional_should_execute(researcher, writer):
|
||||
task1 = Task(description="Return hello", expected_output="say hi", agent=researcher)
|
||||
|
||||
condition_mock = MagicMock(
|
||||
@@ -3542,7 +3576,7 @@ def test_conditional_should_execute():
|
||||
@mock.patch("crewai.crew.CrewEvaluator")
|
||||
@mock.patch("crewai.crew.Crew.copy")
|
||||
@mock.patch("crewai.crew.Crew.kickoff")
|
||||
def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
|
||||
def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator, researcher):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -3592,7 +3626,7 @@ def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_verbose_manager_agent():
|
||||
def test_hierarchical_verbose_manager_agent(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -3613,7 +3647,7 @@ def test_hierarchical_verbose_manager_agent():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_verbose_false_manager_agent():
|
||||
def test_hierarchical_verbose_false_manager_agent(researcher, writer):
|
||||
task = Task(
|
||||
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
|
||||
expected_output="5 bullet points with a paragraph for each idea.",
|
||||
@@ -4186,7 +4220,7 @@ def test_before_kickoff_without_inputs():
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_knowledge_sources_works_with_copy():
|
||||
def test_crew_with_knowledge_sources_works_with_copy(researcher, writer):
|
||||
content = "Brandon's favorite color is red and he likes Mexican food."
|
||||
string_source = StringKnowledgeSource(content=content)
|
||||
|
||||
@@ -4195,7 +4229,6 @@ def test_crew_with_knowledge_sources_works_with_copy():
|
||||
tasks=[Task(description="test", expected_output="test", agent=researcher)],
|
||||
knowledge_sources=[string_source],
|
||||
)
|
||||
|
||||
crew_copy = crew.copy()
|
||||
|
||||
assert crew_copy.knowledge_sources == crew.knowledge_sources
|
||||
@@ -4339,3 +4372,197 @@ def test_crew_copy_with_memory():
            raise e  # Re-raise other validation errors
    except Exception as e:
        pytest.fail(f"Copying crew raised an unexpected exception: {e}")


def test_sets_parent_flow_when_outside_flow(researcher, writer):
    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ],
    )
    assert crew.parent_flow is None


def test_sets_parent_flow_when_inside_flow(researcher, writer):
    class MyFlow(Flow):
        @start()
        def start(self):
            return Crew(
                agents=[researcher, writer],
                process=Process.sequential,
                tasks=[
                    Task(
                        description="Task 1", expected_output="output", agent=researcher
                    ),
                    Task(description="Task 2", expected_output="output", agent=writer),
                ],
            )

    flow = MyFlow()
    result = flow.kickoff()
    assert result.parent_flow is flow


def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):
    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ]
    )

    with pytest.raises(RuntimeError) as excinfo:
        crew.reset_memories(command_type='knowledge')

    # Optionally, you can also check the error message
    assert "Crew Knowledge and Agent Knowledge memory system is not initialized" in str(excinfo.value)  # Replace with the expected message


def test_reset_knowledge_with_only_crew_knowledge(researcher, writer):
    mock_ks = MagicMock(spec=Knowledge)

    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
            knowledge=mock_ks
        )

        crew.reset_memories(command_type='knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks])


def test_reset_knowledge_with_crew_and_agent_knowledge(researcher, writer):
    mock_ks_crew = MagicMock(spec=Knowledge)
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)

    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer

    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
            knowledge=mock_ks_crew
        )

        crew.reset_memories(command_type='knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_crew, mock_ks_research, mock_ks_writer])


def test_reset_knowledge_with_only_agent_knowledge(researcher, writer):
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)

    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer

    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
        )

        crew.reset_memories(command_type='knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research, mock_ks_writer])


def test_reset_agent_knowledge_with_no_agent_knowledge(researcher, writer):
    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ],
    )

    with pytest.raises(RuntimeError) as excinfo:
        crew.reset_memories(command_type='agent_knowledge')

    # Optionally, you can also check the error message
    assert "Agent Knowledge memory system is not initialized" in str(excinfo.value)  # Replace with the expected message


def test_reset_agent_knowledge_with_only_crew_knowledge(researcher, writer):
    mock_ks = MagicMock(spec=Knowledge)

    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[
            Task(description="Task 1", expected_output="output", agent=researcher),
            Task(description="Task 2", expected_output="output", agent=writer),
        ],
        knowledge=mock_ks
    )

    with pytest.raises(RuntimeError) as excinfo:
        crew.reset_memories(command_type='agent_knowledge')

    # Optionally, you can also check the error message
    assert "Agent Knowledge memory system is not initialized" in str(excinfo.value)  # Replace with the expected message


def test_reset_agent_knowledge_with_crew_and_agent_knowledge(researcher, writer):
    mock_ks_crew = MagicMock(spec=Knowledge)
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)

    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer

    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
            knowledge=mock_ks_crew
        )

        crew.reset_memories(command_type='agent_knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research, mock_ks_writer])


def test_reset_agent_knowledge_with_only_agent_knowledge(researcher, writer):
    mock_ks_research = MagicMock(spec=Knowledge)
    mock_ks_writer = MagicMock(spec=Knowledge)

    researcher.knowledge = mock_ks_research
    writer.knowledge = mock_ks_writer

    with patch.object(Crew, 'reset_knowledge') as mock_reset_agent_knowledge:
        crew = Crew(
            agents=[researcher, writer],
            process=Process.sequential,
            tasks=[
                Task(description="Task 1", expected_output="output", agent=researcher),
                Task(description="Task 2", expected_output="output", agent=writer),
            ],
        )

        crew.reset_memories(command_type='agent_knowledge')
        mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research, mock_ks_writer])

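The reset tests above exercise two memory-reset paths. As a brief usage sketch (based only on the tests above; `crew` is assumed to be a Crew whose crew-level and/or agent-level knowledge has been configured):

```python
# Clears crew-level knowledge plus any agent-level knowledge stores;
# raises RuntimeError if neither is initialized.
crew.reset_memories(command_type="knowledge")

# Clears only the agents' knowledge stores; raises RuntimeError if no
# agent has a knowledge store initialized.
crew.reset_memories(command_type="agent_knowledge")
```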
@@ -547,6 +547,7 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):
|
||||
mock_vector_db.query.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.vcr
|
||||
def test_docling_source(mock_vector_db):
|
||||
docling_source = CrewDoclingSource(
|
||||
file_paths=[
|
||||
@@ -567,6 +568,7 @@ def test_docling_source(mock_vector_db):
|
||||
mock_vector_db.query.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.vcr
|
||||
def test_multiple_docling_sources():
|
||||
urls: List[Union[Path, str]] = [
|
||||
"https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
|
||||
|
||||
@@ -837,9 +837,6 @@ def test_interpolate_inputs():
|
||||
|
||||
def test_interpolate_only():
|
||||
"""Test the interpolate_only method for various scenarios including JSON structure preservation."""
|
||||
task = Task(
|
||||
description="Unused in this test", expected_output="Unused in this test"
|
||||
)
|
||||
|
||||
# Test JSON structure preservation
|
||||
json_string = '{"info": "Look at {placeholder}", "nested": {"val": "{nestedVal}"}}'
|
||||
@@ -871,10 +868,6 @@ def test_interpolate_only():
|
||||
|
||||
def test_interpolate_only_with_dict_inside_expected_output():
|
||||
"""Test the interpolate_only method for various scenarios including JSON structure preservation."""
|
||||
task = Task(
|
||||
description="Unused in this test",
|
||||
expected_output="Unused in this test: {questions}",
|
||||
)
|
||||
|
||||
json_string = '{"questions": {"main_question": "What is the user\'s name?", "secondary_question": "What is the user\'s age?"}}'
|
||||
result = interpolate_only(
|
||||
@@ -1094,11 +1087,6 @@ def test_task_execution_times():
|
||||
|
||||
|
||||
def test_interpolate_with_list_of_strings():
|
||||
task = Task(
|
||||
description="Test list interpolation",
|
||||
expected_output="List: {items}",
|
||||
)
|
||||
|
||||
# Test simple list of strings
|
||||
input_str = "Available items: {items}"
|
||||
inputs = {"items": ["apple", "banana", "cherry"]}
|
||||
@@ -1112,11 +1100,6 @@ def test_interpolate_with_list_of_strings():
|
||||
|
||||
|
||||
def test_interpolate_with_list_of_dicts():
|
||||
task = Task(
|
||||
description="Test list of dicts interpolation",
|
||||
expected_output="People: {people}",
|
||||
)
|
||||
|
||||
input_data = {
|
||||
"people": [
|
||||
{"name": "Alice", "age": 30, "skills": ["Python", "AI"]},
|
||||
@@ -1137,11 +1120,6 @@ def test_interpolate_with_list_of_dicts():
|
||||
|
||||
|
||||
def test_interpolate_with_nested_structures():
|
||||
task = Task(
|
||||
description="Test nested structures",
|
||||
expected_output="Company: {company}",
|
||||
)
|
||||
|
||||
input_data = {
|
||||
"company": {
|
||||
"name": "TechCorp",
|
||||
@@ -1165,11 +1143,6 @@ def test_interpolate_with_nested_structures():
|
||||
|
||||
|
||||
def test_interpolate_with_special_characters():
|
||||
task = Task(
|
||||
description="Test special characters in dicts",
|
||||
expected_output="Data: {special_data}",
|
||||
)
|
||||
|
||||
input_data = {
|
||||
"special_data": {
|
||||
"quotes": """This has "double" and 'single' quotes""",
|
||||
@@ -1188,11 +1161,6 @@ def test_interpolate_with_special_characters():
|
||||
|
||||
|
||||
def test_interpolate_mixed_types():
|
||||
task = Task(
|
||||
description="Test mixed type interpolation",
|
||||
expected_output="Mixed: {data}",
|
||||
)
|
||||
|
||||
input_data = {
|
||||
"data": {
|
||||
"name": "Test Dataset",
|
||||
@@ -1214,11 +1182,6 @@ def test_interpolate_mixed_types():
|
||||
|
||||
|
||||
def test_interpolate_complex_combination():
|
||||
task = Task(
|
||||
description="Test complex combination",
|
||||
expected_output="Report: {report}",
|
||||
)
|
||||
|
||||
input_data = {
|
||||
"report": [
|
||||
{
|
||||
@@ -1243,11 +1206,6 @@ def test_interpolate_complex_combination():
|
||||
|
||||
|
||||
def test_interpolate_invalid_type_validation():
|
||||
task = Task(
|
||||
description="Test invalid type validation",
|
||||
expected_output="Should never reach here",
|
||||
)
|
||||
|
||||
# Test with invalid top-level type
|
||||
with pytest.raises(ValueError) as excinfo:
|
||||
interpolate_only("{data}", {"data": set()}) # type: ignore we are purposely testing this failure
|
||||
@@ -1268,11 +1226,6 @@ def test_interpolate_invalid_type_validation():
|
||||
|
||||
|
||||
def test_interpolate_custom_object_validation():
|
||||
task = Task(
|
||||
description="Test custom object rejection",
|
||||
expected_output="Should never reach here",
|
||||
)
|
||||
|
||||
class CustomObject:
|
||||
def __init__(self, value):
|
||||
self.value = value
|
||||
@@ -1304,11 +1257,6 @@ def test_interpolate_custom_object_validation():
|
||||
|
||||
|
||||
def test_interpolate_valid_complex_types():
|
||||
task = Task(
|
||||
description="Test valid complex types",
|
||||
expected_output="Validation should pass",
|
||||
)
|
||||
|
||||
# Valid complex structure
|
||||
valid_data = {
|
||||
"name": "Valid Dataset",
|
||||
@@ -1328,11 +1276,6 @@ def test_interpolate_valid_complex_types():
|
||||
|
||||
|
||||
def test_interpolate_edge_cases():
|
||||
task = Task(
|
||||
description="Test edge cases",
|
||||
expected_output="Edge case handling",
|
||||
)
|
||||
|
||||
# Test empty dict and list
|
||||
assert interpolate_only("{}", {"data": {}}) == "{}"
|
||||
assert interpolate_only("[]", {"data": []}) == "[]"
|
||||
@@ -1347,11 +1290,6 @@ def test_interpolate_edge_cases():
|
||||
|
||||
|
||||
def test_interpolate_valid_types():
|
||||
task = Task(
|
||||
description="Test valid types including null and boolean",
|
||||
expected_output="Should pass validation",
|
||||
)
|
||||
|
||||
# Test with boolean and null values (valid JSON types)
|
||||
valid_data = {
|
||||
"name": "Test",
|
||||
@@ -1373,11 +1311,11 @@ def test_interpolate_valid_types():
|
||||
|
||||
def test_task_with_no_max_execution_time():
|
||||
researcher = Agent(
|
||||
role="Researcher",
|
||||
goal="Make the best research and analysis on content about AI and AI agents",
|
||||
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
|
||||
allow_delegation=False,
|
||||
max_execution_time=None
|
||||
role="Researcher",
|
||||
goal="Make the best research and analysis on content about AI and AI agents",
|
||||
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
|
||||
allow_delegation=False,
|
||||
max_execution_time=None,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
@@ -1386,7 +1324,7 @@ def test_task_with_no_max_execution_time():
|
||||
agent=researcher,
|
||||
)
|
||||
|
||||
with patch.object(Agent, "_execute_without_timeout", return_value = "ok") as execute:
|
||||
with patch.object(Agent, "_execute_without_timeout", return_value="ok") as execute:
|
||||
result = task.execute_sync(agent=researcher)
|
||||
assert result.raw == "ok"
|
||||
execute.assert_called_once()
|
||||
@@ -1395,6 +1333,7 @@ def test_task_with_no_max_execution_time():
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_with_max_execution_time():
|
||||
from crewai.tools import tool
|
||||
|
||||
"""Test that execution raises TimeoutError when max_execution_time is exceeded."""
|
||||
|
||||
@tool("what amazing tool", result_as_answer=True)
|
||||
@@ -1412,7 +1351,7 @@ def test_task_with_max_execution_time():
|
||||
),
|
||||
allow_delegation=False,
|
||||
tools=[my_tool],
|
||||
max_execution_time=4
|
||||
max_execution_time=4,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
@@ -1428,6 +1367,7 @@ def test_task_with_max_execution_time():
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_with_max_execution_time_exceeded():
|
||||
from crewai.tools import tool
|
||||
|
||||
"""Test that execution raises TimeoutError when max_execution_time is exceeded."""
|
||||
|
||||
@tool("what amazing tool", result_as_answer=True)
|
||||
@@ -1445,7 +1385,7 @@ def test_task_with_max_execution_time_exceeded():
|
||||
),
|
||||
allow_delegation=False,
|
||||
tools=[my_tool],
|
||||
max_execution_time=1
|
||||
max_execution_time=1,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
@@ -1455,4 +1395,28 @@ def test_task_with_max_execution_time_exceeded():
|
||||
)
|
||||
|
||||
with pytest.raises(TimeoutError):
|
||||
task.execute_sync(agent=researcher)
|
||||
task.execute_sync(agent=researcher)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_interpolation_with_hyphens():
|
||||
agent = Agent(
|
||||
role="Researcher",
|
||||
goal="be an assistant that responds with {interpolation-with-hyphens}",
|
||||
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
task = Task(
|
||||
description="be an assistant that responds with {interpolation-with-hyphens}",
|
||||
expected_output="The response should be addressing: {interpolation-with-hyphens}",
|
||||
agent=agent,
|
||||
)
|
||||
crew = Crew(
|
||||
agents=[agent],
|
||||
tasks=[task],
|
||||
verbose=True,
|
||||
)
|
||||
result = crew.kickoff(inputs={"interpolation-with-hyphens": "say hello world"})
|
||||
assert "say hello world" in task.prompt()
|
||||
|
||||
assert result.raw == "Hello, World!"
|
||||
|
||||
69 tests/telemetry/test_telemetry.py Normal file
@@ -0,0 +1,69 @@
import os
from unittest.mock import patch

import pytest

from crewai import Agent, Crew, Task
from crewai.telemetry import Telemetry


@pytest.mark.parametrize(
    "env_var,value,expected_ready",
    [
        ("OTEL_SDK_DISABLED", "true", False),
        ("OTEL_SDK_DISABLED", "TRUE", False),
        ("CREWAI_DISABLE_TELEMETRY", "true", False),
        ("CREWAI_DISABLE_TELEMETRY", "TRUE", False),
        ("OTEL_SDK_DISABLED", "false", True),
        ("CREWAI_DISABLE_TELEMETRY", "false", True),
    ],
)
def test_telemetry_environment_variables(env_var, value, expected_ready):
    """Test telemetry state with different environment variable configurations."""
    with patch.dict(os.environ, {env_var: value}):
        with patch("crewai.telemetry.telemetry.TracerProvider"):
            telemetry = Telemetry()
            assert telemetry.ready is expected_ready


def test_telemetry_enabled_by_default():
    """Test that telemetry is enabled by default."""
    with patch.dict(os.environ, {}, clear=True):
        with patch("crewai.telemetry.telemetry.TracerProvider"):
            telemetry = Telemetry()
            assert telemetry.ready is True


from opentelemetry import trace


@patch("crewai.telemetry.telemetry.logger.error")
@patch(
    "opentelemetry.exporter.otlp.proto.http.trace_exporter.OTLPSpanExporter.export",
    side_effect=Exception("Test exception"),
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_telemetry_fails_due_connect_timeout(export_mock, logger_mock):
    error = Exception("Test exception")
    export_mock.side_effect = error

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("test-span"):
        agent = Agent(
            role="agent",
            llm="gpt-4o-mini",
            goal="Just say hi",
            backstory="You are a helpful assistant that just says hi",
        )
        task = Task(
            description="Just say hi",
            expected_output="hi",
            agent=agent,
        )
        crew = Crew(agents=[agent], tasks=[task], name="TestCrew")
        crew.kickoff()

    trace.get_tracer_provider().force_flush()

    export_mock.assert_called_once()
    logger_mock.assert_called_once_with(error)
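For reference, the behavior these tests pin down can be exercised directly; a minimal sketch (the environment-variable names come from the parametrized test above):

```python
import os

# Either variable disables CrewAI telemetry when set to a truthy string
# (case-insensitive), per the parametrized cases above.
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"   # CrewAI-specific switch
# os.environ["OTEL_SDK_DISABLED"] = "true"        # OpenTelemetry-wide switch

from crewai.telemetry import Telemetry

telemetry = Telemetry()
assert telemetry.ready is False
```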
@@ -1,13 +1,16 @@
|
||||
import asyncio
|
||||
from typing import cast
|
||||
from unittest.mock import Mock
|
||||
|
||||
import pytest
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai import LLM, Agent
|
||||
from crewai.flow import Flow, start
|
||||
from crewai.lite_agent import LiteAgent, LiteAgentOutput
|
||||
from crewai.tools import BaseTool
|
||||
from crewai.utilities.events import crewai_event_bus
|
||||
from crewai.utilities.events.agent_events import LiteAgentExecutionStartedEvent
|
||||
from crewai.utilities.events.tool_usage_events import ToolUsageStartedEvent
|
||||
|
||||
|
||||
@@ -255,3 +258,60 @@ async def test_lite_agent_returns_usage_metrics_async():
|
||||
assert "21 million" in result.raw or "37 million" in result.raw
|
||||
assert result.usage_metrics is not None
|
||||
assert result.usage_metrics["total_tokens"] > 0
|
||||
|
||||
|
||||
class TestFlow(Flow):
|
||||
"""A test flow that creates and runs an agent."""
|
||||
|
||||
def __init__(self, llm, tools):
|
||||
self.llm = llm
|
||||
self.tools = tools
|
||||
super().__init__()
|
||||
|
||||
@start()
|
||||
def start(self):
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test Goal",
|
||||
backstory="Test Backstory",
|
||||
llm=self.llm,
|
||||
tools=self.tools,
|
||||
)
|
||||
return agent.kickoff("Test query")
|
||||
|
||||
|
||||
def verify_agent_parent_flow(result, agent, flow):
|
||||
"""Verify that both the result and agent have the correct parent flow."""
|
||||
assert result.parent_flow is flow
|
||||
assert agent is not None
|
||||
assert agent.parent_flow is flow
|
||||
|
||||
|
||||
def test_sets_parent_flow_when_inside_flow():
|
||||
captured_agent = None
|
||||
|
||||
mock_llm = Mock(spec=LLM)
|
||||
mock_llm.call.return_value = "Test response"
|
||||
|
||||
class MyFlow(Flow):
|
||||
@start()
|
||||
def start(self):
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test Goal",
|
||||
backstory="Test Backstory",
|
||||
llm=mock_llm,
|
||||
tools=[WebSearchTool()],
|
||||
)
|
||||
return agent.kickoff("Test query")
|
||||
|
||||
flow = MyFlow()
|
||||
with crewai_event_bus.scoped_handlers():
|
||||
|
||||
@crewai_event_bus.on(LiteAgentExecutionStartedEvent)
|
||||
def capture_agent(source, event):
|
||||
nonlocal captured_agent
|
||||
captured_agent = source
|
||||
|
||||
result = flow.kickoff()
|
||||
assert captured_agent.parent_flow is flow
|
||||
|
||||
@@ -32,3 +32,16 @@ def test_wildcard_event_handler():
    crewai_event_bus.emit("source_object", event)

    mock_handler.assert_called_once_with("source_object", event)


def test_event_bus_error_handling(capfd):
    @crewai_event_bus.on(BaseEvent)
    def broken_handler(source, event):
        raise ValueError("Simulated handler failure")

    event = TestEvent(type="test_event")
    crewai_event_bus.emit("source_object", event)

    out, err = capfd.readouterr()
    assert "Simulated handler failure" in out
    assert "Handler 'broken_handler' failed" in out

@@ -357,14 +357,7 @@ def test_convert_with_instructions():
|
||||
assert output.age == 30
|
||||
|
||||
|
||||
# Skip tests that call external APIs when running in CI/CD
|
||||
skip_external_api = pytest.mark.skipif(
|
||||
os.getenv("CI") is not None, reason="Skipping tests that call external API in CI/CD"
|
||||
)
|
||||
|
||||
|
||||
@skip_external_api
|
||||
@pytest.mark.vcr(filter_headers=["authorization"], record_mode="once")
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_converter_with_llama3_2_model():
|
||||
llm = LLM(model="ollama/llama3.2:3b", base_url="http://localhost:11434")
|
||||
sample_text = "Name: Alice Llama, Age: 30"
|
||||
@@ -381,8 +374,7 @@ def test_converter_with_llama3_2_model():
|
||||
assert output.age == 30
|
||||
|
||||
|
||||
@skip_external_api
|
||||
@pytest.mark.vcr(filter_headers=["authorization"], record_mode="once")
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_converter_with_llama3_1_model():
|
||||
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
|
||||
sample_text = "Name: Alice Llama, Age: 30"
|
||||
@@ -399,13 +391,6 @@ def test_converter_with_llama3_1_model():
|
||||
assert output.age == 30
|
||||
|
||||
|
||||
# Skip tests that call external APIs when running in CI/CD
|
||||
skip_external_api = pytest.mark.skipif(
|
||||
os.getenv("CI") is not None, reason="Skipping tests that call external API in CI/CD"
|
||||
)
|
||||
|
||||
|
||||
@skip_external_api
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_converter_with_nested_model():
|
||||
llm = LLM(model="gpt-4o-mini")
|
||||
|
||||
46
uv.lock
generated
@@ -738,7 +738,7 @@ wheels = [
|
||||
|
||||
[[package]]
|
||||
name = "crewai"
|
||||
version = "0.118.0"
|
||||
version = "0.120.0"
|
||||
source = { editable = "." }
|
||||
dependencies = [
|
||||
{ name = "appdirs" },
|
||||
@@ -811,8 +811,10 @@ dev = [
|
||||
{ name = "pre-commit" },
|
||||
{ name = "pytest" },
|
||||
{ name = "pytest-asyncio" },
|
||||
{ name = "pytest-randomly" },
|
||||
{ name = "pytest-recording" },
|
||||
{ name = "pytest-subprocess" },
|
||||
{ name = "pytest-timeout" },
|
||||
{ name = "python-dotenv" },
|
||||
{ name = "ruff" },
|
||||
]
|
||||
@@ -826,14 +828,14 @@ requires-dist = [
|
||||
{ name = "blinker", specifier = ">=1.9.0" },
|
||||
{ name = "chromadb", specifier = ">=0.5.23" },
|
||||
{ name = "click", specifier = ">=8.1.7" },
|
||||
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.42.2" },
|
||||
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.45.0" },
|
||||
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
|
||||
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
|
||||
{ name = "instructor", specifier = ">=1.3.3" },
|
||||
{ name = "json-repair", specifier = ">=0.25.2" },
|
||||
{ name = "json5", specifier = ">=0.10.0" },
|
||||
{ name = "jsonref", specifier = ">=1.1.0" },
|
||||
{ name = "litellm", specifier = "==1.67.1" },
|
||||
{ name = "litellm", specifier = "==1.68.0" },
|
||||
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.94" },
|
||||
{ name = "openai", specifier = ">=1.13.3" },
|
||||
{ name = "openpyxl", specifier = ">=3.1.5" },
|
||||
@@ -867,15 +869,17 @@ dev = [
|
||||
{ name = "pre-commit", specifier = ">=3.6.0" },
|
||||
{ name = "pytest", specifier = ">=8.0.0" },
|
||||
{ name = "pytest-asyncio", specifier = ">=0.23.7" },
|
||||
{ name = "pytest-randomly", specifier = ">=3.16.0" },
|
||||
{ name = "pytest-recording", specifier = ">=0.13.2" },
|
||||
{ name = "pytest-subprocess", specifier = ">=1.5.2" },
|
||||
{ name = "pytest-timeout", specifier = ">=2.3.1" },
|
||||
{ name = "python-dotenv", specifier = ">=1.0.0" },
|
||||
{ name = "ruff", specifier = ">=0.8.2" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "crewai-tools"
|
||||
version = "0.42.2"
|
||||
version = "0.45.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "chromadb" },
|
||||
@@ -890,9 +894,9 @@ dependencies = [
|
||||
{ name = "pytube" },
|
||||
{ name = "requests" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/17/34/9e63e2db53d8f5c30353f271a3240687a48e55204bbd176a057c0b7658c8/crewai_tools-0.42.2.tar.gz", hash = "sha256:69365ffb168cccfea970e09b308905aa5007cfec60024d731ffac1362a0153c0", size = 754967 }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/e9/3a/7070dcacef56702c5d83ad1a87021b1666ff1850ff80b3aa7540892406e7/crewai_tools-0.45.0.tar.gz", hash = "sha256:1b2e4eff3f928ce5fac308d6e648719a0e4718a1228ae98980aa0d74fc16bfc7", size = 909723 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/4e/43/0f70b95350084e5cb1e1d74e9acb9e18a89ba675b1d579c787c2662baba7/crewai_tools-0.42.2-py3-none-any.whl", hash = "sha256:13727fb68f0efefd21edeb281be3d66ff2f5a3b5029d4e6adef388b11fd5846a", size = 583933 },
|
||||
{ url = "https://files.pythonhosted.org/packages/6e/72/db45626973027c992df75cbc7ef391f18393d631be3bceb6388c1b9f01e1/crewai_tools-0.45.0-py3-none-any.whl", hash = "sha256:9dd34e4792c075ee7a72134aedaab268e78d0e350114fd7fe2426e691c5f52a3", size = 602659 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -2383,7 +2387,7 @@ wheels = [
|
||||
|
||||
[[package]]
|
||||
name = "litellm"
|
||||
version = "1.67.1"
|
||||
version = "1.68.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "aiohttp" },
|
||||
@@ -2398,9 +2402,9 @@ dependencies = [
|
||||
{ name = "tiktoken" },
|
||||
{ name = "tokenizers" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/54/a4/bb3e9ae59e5a9857443448de7c04752630dc84cddcbd8cee037c0976f44f/litellm-1.67.1.tar.gz", hash = "sha256:78eab1bd3d759ec13aa4a05864356a4a4725634e78501db609d451bf72150ee7", size = 7242044 }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/ba/22/138545b646303ca3f4841b69613c697b9d696322a1386083bb70bcbba60b/litellm-1.68.0.tar.gz", hash = "sha256:9fb24643db84dfda339b64bafca505a2eef857477afbc6e98fb56512c24dbbfa", size = 7314051 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/88/86/c14d3c24ae13c08296d068e6f79fd4bd17a0a07bddbda94990b87c35d20e/litellm-1.67.1-py3-none-any.whl", hash = "sha256:8fff5b2a16b63bb594b94d6c071ad0f27d3d8cd4348bd5acea2fd40c8e0c11e8", size = 7607266 },
|
||||
{ url = "https://files.pythonhosted.org/packages/10/af/1e344bc8aee41445272e677d802b774b1f8b34bdc3bb5697ba30f0fb5d52/litellm-1.68.0-py3-none-any.whl", hash = "sha256:3bca38848b1a5236b11aa6b70afa4393b60880198c939e582273f51a542d4759", size = 7684460 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -4228,6 +4232,18 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/96/31/6607dab48616902f76885dfcf62c08d929796fc3b2d2318faf9fd54dbed9/pytest_asyncio-0.24.0-py3-none-any.whl", hash = "sha256:a811296ed596b69bf0b6f3dc40f83bcaf341b155a269052d82efa2b25ac7037b", size = 18024 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pytest-randomly"
|
||||
version = "3.16.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pytest" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/c0/68/d221ed7f4a2a49a664da721b8e87b52af6dd317af2a6cb51549cf17ac4b8/pytest_randomly-3.16.0.tar.gz", hash = "sha256:11bf4d23a26484de7860d82f726c0629837cf4064b79157bd18ec9d41d7feb26", size = 13367 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/22/70/b31577d7c46d8e2f9baccfed5067dd8475262a2331ffb0bfdf19361c9bde/pytest_randomly-3.16.0-py3-none-any.whl", hash = "sha256:8633d332635a1a0983d3bba19342196807f6afb17c3eef78e02c2f85dade45d6", size = 8396 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pytest-recording"
|
||||
version = "0.13.2"
|
||||
@@ -4254,6 +4270,18 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/10/77/a80e8f9126b95ffd5ad4d04bd14005c68dcbf0d88f53b2b14893f6cc7232/pytest_subprocess-1.5.2-py3-none-any.whl", hash = "sha256:23ac7732aa8bd45f1757265b1316eb72a7f55b41fb21e2ca22e149ba3629fa46", size = 20886 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pytest-timeout"
|
||||
version = "2.3.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pytest" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/93/0d/04719abc7a4bdb3a7a1f968f24b0f5253d698c9cc94975330e9d3145befb/pytest-timeout-2.3.1.tar.gz", hash = "sha256:12397729125c6ecbdaca01035b9e5239d4db97352320af155b3f5de1ba5165d9", size = 17697 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/03/27/14af9ef8321f5edc7527e47def2a21d8118c6f329a9342cc61387a0c0599/pytest_timeout-2.3.1-py3-none-any.whl", hash = "sha256:68188cb703edfc6a18fad98dc25a3c61e9f24d644b0b70f33af545219fc7813e", size = 14148 },
|
||||
]
|
||||
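pytest-timeout and pytest-randomly are added as dev dependencies above. As a usage note (standard plugin behavior, not specific to this PR), pytest-timeout lets individual tests cap their runtime:

```python
import time

import pytest


@pytest.mark.timeout(5)  # fail the test if it runs longer than 5 seconds
def test_finishes_quickly():
    time.sleep(0.1)
```

pytest-randomly shuffles test order on each run and reports the seed, which helps surface hidden ordering dependencies between tests such as the fixture-based refactor earlier in this diff.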
|
||||
[[package]]
|
||||
name = "python-bidi"
|
||||
version = "0.6.3"
|
||||
|
||||