Compare commits

...

22 Commits

Author SHA1 Message Date
Devin AI
0360988835 Skip test_gemma3 on Python 3.11 due to segmentation fault
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-29 21:35:49 +00:00
Devin AI
fa39ce9db2 Address PR review comments: improve validation, error handling, and add tests
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-29 21:30:04 +00:00
Devin AI
2f66aa0efc Fix issue #2724: Allow specifying trained data file for run command
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-29 21:20:58 +00:00
Greyson LaLonde
25c8155609 chore: add missing __init__.py files (#2719)
Add `__init__.py` files to 20 directories to conform with Python package standards. This ensures directories are properly recognized as packages, enabling cleaner imports.
2025-04-29 07:35:26 -07:00
Vini Brasil
55b07506c2 Remove logging setting from global context (#2720)
This commit fixes a bug where changing the logging level would be overridden
by `src/crewai/project/crew_base.py`. For example, the following snippet
on top of a crew or flow would not work:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
```

Crews and flows should be able to set their own log level without being
overridden by CrewAI library code.
2025-04-29 11:21:41 -03:00
Vidit Ostwal
59f34d900a Fixes missing prompt template or system template (#2408)
* Fix issue #2402: Handle missing templates gracefully

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting in test files

Co-Authored-By: Joe Moura <joao@crewai.com>

* Built on top of devin-ai integration

* Fixed test cases

* Fixed test cases

* fixed linting issue

* Added docs

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-28 14:04:32 -04:00
João Moura
4f6054d439 new version
2025-04-28 07:39:38 -07:00
Dev Khant
a86a1213c7 Fix Mem0 OSS (#2604)
* Fix Mem0 OSS

* add test

* fix lint and tests

* fix

* add tests

* drop test

* changed to class comparison

* fixed test cases

* Update src/crewai/memory/storage/mem0_storage.py

* Update src/crewai/memory/storage/mem0_storage.py

* fix

* fix lock file

---------

Co-authored-by: Vidit-Ostwal <viditostwal@gmail.com>
2025-04-28 10:37:31 -04:00
Lucas Gomide
566935fb94 upgrade liteLLM to latest version (#2684)
* build(litellm): upgrade LiteLLM to latest version

* fix: update filtered logs from LiteLLM

* Fix for a missing backtick

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-28 09:46:40 -04:00
Lucas Gomide
3a66746a99 build: upgrade crewai-tools (#2705)
* build: upgrade crewai-tools

* build: prepare new version
2025-04-28 06:38:56 -07:00
João Moura
337a6d5719 preparing new version
2025-04-27 23:56:22 -07:00
Tony Kipkemboi
51eb5e9998 docs: add CrewAI Enterprise docs (#2691)
* Add enterprise deployment documentation to CLI docs

* Update CrewAI Enterprise documentation with comprehensive guides for Traces, Tool Repository, Webhook Streaming, and FAQ structure

* Add Enterprise documentation images

* Update Enterprise introduction with visual CardGroups and Steps components
2025-04-25 13:59:44 -07:00
Lucas Gomide
b2969e9441 style: fix linter issue (#2686)
2025-04-25 09:34:00 -04:00
João Moura
5b9606e8b6 fix context window 2025-04-24 23:09:23 -07:00
Kunal Lunia
685d20f46c added gpt-4.1 models and gemini-2.0 and 2.5 pro models (#2609)
* added gpt-4.1 models and gemini 2.0 and 2.5 models

* added flash model

* Updated test function for all models

* Added Gemma3 test cases and passed all Google test cases

* added gemini 2.5 flash

* test: add missing cassettes

* test: ignore authorization key from gemini/gemma3 request

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-23 11:20:32 -07:00
Lucas Gomide
9ebf3aa043 docs(CodeInterpreterTool): update docs (#2675) 2025-04-23 10:27:25 -07:00
Tony Kipkemboi
2e4c97661a Add enterprise deployment documentation to CLI docs (#2670)
2025-04-22 13:27:58 -07:00
Tony Kipkemboi
16eb4df556 docs: update docs.json with contextual options, SEO, and 404 redirect (#2654)
* docs: 0.114.0 release notes, navigation restructure, new guides, deploy video, and cleanup

- Add v0.114.0 release notes with highlights image and doc links
- Restructure docs navigation (Strategy group, Releases tab, navbar links)
- Update quickstart with deployment video and clearer instructions
- Add/rename guides (Custom Manager Agent, Custom LLM)
- Remove legacy concept/tool docs
- Add new images and tool docs
- Minor formatting and content improvements throughout

* docs: update docs.json with contextual options, SEO indexing, and 404 redirect settings
2025-04-22 09:52:27 -07:00
Vini Brasil
3d9000495c Change CLI tool publish message (#2662) 2025-04-22 13:09:30 -03:00
Tony Kipkemboi
6d0039b117 docs: 0.114.0 release notes, navigation restructure, new guides, deploy video, and cleanup (#2653)
- Add v0.114.0 release notes with highlights image and doc links
- Restructure docs navigation (Strategy group, Releases tab, navbar links)
- Update quickstart with deployment video and clearer instructions
- Add/rename guides (Custom Manager Agent, Custom LLM)
- Remove legacy concept/tool docs
- Add new images and tool docs
- Minor formatting and content improvements throughout
2025-04-21 19:18:21 -04:00
Lorenze Jay
311a078ca6 Enhance knowledge management in CrewAI (#2637)
* Enhance knowledge management in CrewAI

- Added `KnowledgeConfig` class to configure knowledge retrieval parameters such as `limit` and `score_threshold`.
- Updated `Agent` and `Crew` classes to utilize the new knowledge configuration for querying knowledge sources.
- Enhanced documentation to clarify the addition of knowledge sources at both agent and crew levels.
- Introduced new tips in documentation to guide users on knowledge source management and configuration.

* Refactor knowledge configuration parameters in CrewAI

- Renamed `limit` to `results_limit` in `KnowledgeConfig`, `query_knowledge`, and `query` methods for consistency and clarity.
- Updated related documentation to reflect the new parameter name, ensuring users understand the configuration options for knowledge retrieval.

* Refactor agent tests to utilize mock knowledge storage

- Updated test cases in `agent_test.py` to use `KnowledgeStorage` for mocking knowledge sources, enhancing test reliability and clarity.
- Renamed `limit` to `results_limit` in `KnowledgeConfig` for consistency with recent changes.
- Ensured that knowledge queries are properly mocked to return expected results during tests.

* Add VCR support for agent tests with query limits and score thresholds

- Introduced `@pytest.mark.vcr` decorator in `agent_test.py` for tests involving knowledge sources, ensuring consistent recording of HTTP interactions.
- Added new YAML cassette files for `test_agent_with_knowledge_sources_with_query_limit_and_score_threshold` and `test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default`, capturing the expected API responses for these tests.
- Enhanced test reliability by utilizing VCR to manage external API calls during testing.

* Update documentation to format parameter names in code style

- Changed the formatting of `results_limit` and `score_threshold` in the documentation to use code style for better clarity and emphasis.
- Ensured consistency in documentation presentation to enhance user understanding of configuration options.

* Enhance KnowledgeConfig with field descriptions

- Updated `results_limit` and `score_threshold` in `KnowledgeConfig` to use Pydantic's `Field` for improved documentation and clarity.
- Added descriptions to both parameters to provide better context for their usage in knowledge retrieval configuration.

* docstrings added
2025-04-18 18:33:04 -07:00
Vidit Ostwal
371f19f3cd Support set max_execution_time to Agent (#2610)
* Fixed fake max_execution_time parameter
---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-04-17 16:03:00 -04:00
111 changed files with 7454 additions and 2493 deletions

View File

@@ -4,6 +4,36 @@ description: View the latest updates and changes to CrewAI
icon: timeline
---
<Update label="2025-04-07" description="v0.114.0">
## Release Highlights
<Frame>
<img src="/images/v01140.png" />
</Frame>
**New Features & Enhancements**
- Agents as an atomic unit (`Agent(...).kickoff()`).
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm).
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability).
- Enhanced YAML extraction.
- Multimodal agent validation.
- Added secure fingerprints for agents and crews.
**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility.
- Added wildcard support to `emit()`.
- Added support for additional router calls and context window adjustments.
- Fixed typing issues, validation, and import statements.
- Improved method performance.
- Enhanced agent task handling, event emissions, and memory management.
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs.
**Documentation & Guides**
- Improved documentation structure, theme, and organization.
- Added guides for Local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix.
- Updated tool configuration examples, prompts, and observability docs.
- Guide on using singular agents within Flows.
</Update>
<Update label="2025-03-17" description="v0.108.0">
**Features**
- Converted tabs to spaces in `crew.py` template

View File

@@ -255,7 +255,11 @@ custom_agent = Agent(
- `response_template`: Formats agent responses
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
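For illustration, a minimal sketch of an agent that defines all three templates might look like the following. The Llama-3-style markers and the `{{ .System }}`, `{{ .Prompt }}`, and `{{ .Response }}` placeholders are one possible format, shown here as an assumption; adapt the template strings to whatever chat format your model expects.
```python Code
from crewai import Agent

# Sketch only: both system_template and prompt_template are defined,
# response_template is optional but keeps output formatting consistent.
# Variables such as {role}, {goal}, and {backstory} may also be referenced
# inside the templates and are populated automatically during execution.
custom_agent = Agent(
    role="Research Analyst",
    goal="Provide up-to-date market analysis",
    backstory="An expert analyst with a keen eye for market trends.",
    system_template="<|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>",
    prompt_template="<|start_header_id|>user<|end_header_id|>{{ .Prompt }}<|eot_id|>",
    response_template="<|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>",
)
```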
## Agent Tools

View File

@@ -179,7 +179,78 @@ def crew(self) -> Crew:
```
</Note>
### 10. Deploy
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
```shell Terminal
crewai signup
```
If you already have an account, you can login with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once the deployment is created, push your crew or flow to CrewAI Enterprise.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Upon successful initiation, it will output a `Deployment created successfully!` message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI Enterprise platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### 11. API Keys
When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.

View File

@@ -42,6 +42,16 @@ CrewAI supports various types of knowledge sources out of the box:
| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
<Tip>
Unlike retrieval from a vector database using a tool, agents preloaded with knowledge will not need a retrieval persona or task.
Simply add the relevant knowledge sources your agent or crew needs to function.
Knowledge sources can be added at the agent or crew level.
Crew level knowledge sources will be used by **all agents** in the crew.
Agent level knowledge sources will be used by the **specific agent** that is preloaded with the knowledge.
</Tip>
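As a rough sketch of the two levels (the source content and names below are illustrative), a string-based source attached at the crew level is shared by every agent, while one attached to an agent stays with that agent:
```python Code
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Crew-level knowledge: available to all agents in the crew.
company_policy = StringKnowledgeSource(
    content="Refunds are processed within 5 business days."
)

# Agent-level knowledge: preloaded only into this agent.
product_specs = StringKnowledgeSource(
    content="The X100 model ships with a 2-year warranty."
)

support_agent = Agent(
    role="Support Representative",
    goal="Answer customer questions accurately",
    backstory="A patient support rep who knows the product line well.",
    knowledge_sources=[product_specs],
)

crew = Crew(
    agents=[support_agent],
    tasks=[
        Task(
            description="How long do refunds take?",
            expected_output="A short, accurate answer.",
            agent=support_agent,
        )
    ],
    knowledge_sources=[company_policy],
)
```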
## Quickstart Example
<Tip>
@@ -146,6 +156,26 @@ result = crew.kickoff(
)
```
## Knowledge Configuration
You can configure how knowledge is retrieved for the crew or agent.
```python Code
from crewai.knowledge.knowledge_config import KnowledgeConfig
knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)
agent = Agent(
    ...
    knowledge_config=knowledge_config
)
```
<Tip>
`results_limit`: the number of relevant documents to return. Default is 3.
`score_threshold`: the minimum score for a document to be considered relevant. Default is 0.35.
</Tip>
## More Examples
Here are examples of how to use different types of knowledge sources:

View File

@@ -1,71 +0,0 @@
---
title: Using LlamaIndex Tools
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
icon: toolbox
---
## Using LlamaIndex Tools
<Info>
CrewAI seamlessly integrates with LlamaIndex's comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more.
</Info>
Here are the available built-in tools offered by LlamaIndex.
```python Code
from crewai import Agent
from crewai_tools import LlamaIndexTool
# Example 1: Initialize from FunctionTool
from llama_index.core.tools import FunctionTool
your_python_function = lambda ...: ...
og_tool = FunctionTool.from_defaults(
    your_python_function,
    name="<name>",
    description='<description>'
)
tool = LlamaIndexTool.from_tool(og_tool)
# Example 2: Initialize from LlamaHub Tools
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]
# Example 3: Initialize Tool from a LlamaIndex Query Engine
query_engine = index.as_query_engine()
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Uber 2019 10K Query Tool",
    description="Use this tool to lookup the 2019 Uber 10K Annual Report"
)
# Create and assign the tools to an agent
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[tool, *tools, query_tool]
)
# rest of the code ...
```
## Steps to Get Started
To effectively use the LlamaIndexTool, follow these steps:
<Steps>
<Step title="Package Installation">
Make sure that `crewai[tools]` package is installed in your Python environment:
<CodeGroup>
```shell Terminal
pip install 'crewai[tools]'
```
</CodeGroup>
</Step>
<Step title="Install and Use LlamaIndex">
Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
</Step>
</Steps>

View File

@@ -8,25 +8,27 @@
"dark": "#C94C3C"
},
"favicon": "favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
},
"navigation": {
"tabs": [
{
"tab": "Get Started",
"tab": "Documentation",
"groups": [
{
"group": "Get Started",
"pages": [
"introduction",
"installation",
"quickstart",
"changelog"
"quickstart"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Concepts",
"group": "Strategy",
"pages": [
"guides/concepts/evaluating-use-cases"
]
@@ -79,41 +81,6 @@
"concepts/event-listener"
]
},
{
"group": "How to Guides",
"pages": [
"how-to/create-custom-tools",
"how-to/sequential-process",
"how-to/hierarchical-process",
"how-to/custom-manager-agent",
"how-to/llm-connections",
"how-to/customizing-agents",
"how-to/multimodal-agents",
"how-to/coding-agents",
"how-to/force-tool-output-as-result",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/conditional-tasks",
"how-to/langchain-tools",
"how-to/llamaindex-tools"
]
},
{
"group": "Agent Monitoring & Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
]
},
{
"group": "Tools",
"pages": [
@@ -141,6 +108,7 @@
"tools/hyperbrowserloadtool",
"tools/linkupsearchtool",
"tools/llamaindextool",
"tools/langchaintool",
"tools/serperdevtool",
"tools/s3readertool",
"tools/s3writertool",
@@ -170,6 +138,40 @@
"tools/youtubevideosearchtool"
]
},
{
"group": "Agent Monitoring & Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
]
},
{
"group": "Learn",
"pages": [
"how-to/conditional-tasks",
"how-to/coding-agents",
"how-to/create-custom-tools",
"how-to/custom-llm",
"how-to/custom-manager-agent",
"how-to/customizing-agents",
"how-to/force-tool-output-as-result",
"how-to/hierarchical-process",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/llm-connections",
"how-to/multimodal-agents",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/sequential-process"
]
},
{
"group": "Telemetry",
"pages": [
@@ -178,6 +180,42 @@
}
]
},
{
"tab": "Enterprise",
"groups": [
{
"group": "Getting Started",
"pages": [
"enterprise/introduction"
]
},
{
"group": "How-To Guides",
"pages": [
"enterprise/guides/build-crew",
"enterprise/guides/deploy-crew",
"enterprise/guides/kickoff-crew",
"enterprise/guides/update-crew",
"enterprise/guides/use-crew-api",
"enterprise/guides/enable-crew-studio"
]
},
{
"group": "Features",
"pages": [
"enterprise/features/tool-repository",
"enterprise/features/webhook-streaming",
"enterprise/features/traces"
]
},
{
"group": "Resources",
"pages": [
"enterprise/resources/frequently-asked-questions"
]
}
]
},
{
"tab": "Examples",
"groups": [
@@ -188,19 +226,35 @@
]
}
]
},
{
"tab": "Releases",
"groups": [
{
"group": "Releases",
"pages": [
"changelog"
]
}
]
}
],
"global": {
"anchors": [
{
"anchor": "Community",
"anchor": "Website",
"href": "https://crewai.com",
"icon": "globe"
},
{
"anchor": "Forum",
"href": "https://community.crewai.com",
"icon": "discourse"
},
{
"anchor": "Tutorials",
"href": "https://www.youtube.com/@crewAIInc",
"icon": "youtube"
"anchor": "Get Help",
"href": "mailto:support@crewai.com",
"icon": "headset"
}
]
}
@@ -214,6 +268,12 @@
"strict": false
},
"navbar": {
"links": [
{
"label": "Start Free Trial",
"href": "https://app.crewai.com"
}
],
"primary": {
"type": "github",
"href": "https://github.com/crewAIInc/crewAI"
@@ -223,7 +283,12 @@
"prompt": "Search CrewAI docs"
},
"seo": {
"indexing": "navigable"
"indexing": "all"
},
"errors": {
"404": {
"redirect": true
}
},
"footer": {
"socials": {

View File

@@ -0,0 +1,106 @@
---
title: Tool Repository
description: "Using the Tool Repository to manage your tools"
icon: "toolbox"
---
## Overview
The Tool Repository is a package manager for CrewAI tools. It allows users to publish, install, and manage tools that integrate with CrewAI crews and flows.
Tools can be:
- **Private**: accessible only within your organization (default)
- **Public**: accessible to all CrewAI users if published with the `--public` flag
The repository is not a version control system. Use Git to track code changes and enable collaboration.
## Prerequisites
Before using the Tool Repository, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI Enterprise organization
## Installing Tools
To install a tool:
```bash
crewai tool install <tool-name>
```
This installs the tool and adds it to `pyproject.toml`.
## Creating and Publishing Tools
To create a new tool project:
```bash
crewai tool create <tool-name>
```
This generates a scaffolded tool project locally.
After making changes, initialize a Git repository and commit the code:
```bash
git init
git add .
git commit -m "Initial version"
```
To publish the tool:
```bash
crewai tool publish
```
By default, tools are published as private. To make a tool public:
```bash
crewai tool publish --public
```
For more details on how to build tools, see [Creating your own tools](https://docs.crewai.com/concepts/tools#creating-your-own-tools).
## Updating Tools
To update a published tool:
1. Modify the tool locally
2. Update the version in `pyproject.toml` (e.g., from `0.1.0` to `0.1.1`)
3. Commit the changes and publish
```bash
git commit -m "Update version to 0.1.1"
crewai tool publish
```
## Deleting Tools
To delete a tool:
1. Go to [CrewAI Enterprise](https://app.crewai.com)
2. Navigate to **Tools**
3. Select the tool
4. Click **Delete**
<Warning>
Deletion is permanent. Deleted tools cannot be restored or re-installed.
</Warning>
## Security Checks
Every published version undergoes automated security checks and is only available to install after they pass.
You can check the security check status of a tool at:
`CrewAI Enterprise > Tools > Your Tool > Versions`
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>

View File

@@ -0,0 +1,146 @@
---
title: Traces
description: "Using Traces to monitor your Crews"
icon: "timeline"
---
## Overview
Traces provide comprehensive visibility into your crew executions, helping you monitor performance, debug issues, and optimize your AI agent workflows.
## What are Traces?
Traces in CrewAI Enterprise are detailed execution records that capture every aspect of your crew's operation, from initial inputs to final outputs. They record:
- Agent thoughts and reasoning
- Task execution details
- Tool usage and outputs
- Token consumption metrics
- Execution times
- Cost estimates
<Frame>
![Traces Overview](/images/enterprise/traces-overview.png)
</Frame>
## Accessing Traces
<Steps>
<Step title="Navigate to the Traces Tab">
Once in your CrewAI Enterprise dashboard, click on the **Traces** tab to view all execution records.
</Step>
<Step title="Select an Execution">
You'll see a list of all crew executions, sorted by date. Click on any execution to view its detailed trace.
</Step>
</Steps>
## Understanding the Trace Interface
The trace interface is divided into several sections, each providing different insights into your crew's execution:
### 1. Execution Summary
The top section displays high-level metrics about the execution:
- **Total Tokens**: Number of tokens consumed across all tasks
- **Prompt Tokens**: Tokens used in prompts to the LLM
- **Completion Tokens**: Tokens generated in LLM responses
- **Requests**: Number of API calls made
- **Execution Time**: Total duration of the crew run
- **Estimated Cost**: Approximate cost based on token usage
<Frame>
![Execution Summary](/images/enterprise/trace-summary.png)
</Frame>
### 2. Tasks & Agents
This section shows all tasks and agents that were part of the crew execution:
- Task name and agent assignment
- Agents and LLMs used for each task
- Status (completed/failed)
- Individual execution time of the task
<Frame>
![Task List](/images/enterprise/trace-tasks.png)
</Frame>
### 3. Final Output
Displays the final result produced by the crew after all tasks are completed.
<Frame>
![Final Output](/images/enterprise/final-output.png)
</Frame>
### 4. Execution Timeline
A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
<Frame>
![Execution Timeline](/images/enterprise/trace-timeline.png)
</Frame>
### 5. Detailed Task View
When you click on a specific task in the timeline or task list, you'll see:
<Frame>
![Detailed Task View](/images/enterprise/trace-detailed-task.png)
</Frame>
- **Task Key**: Unique identifier for the task
- **Task ID**: Technical identifier in the system
- **Status**: Current state (completed/running/failed)
- **Agent**: Which agent performed the task
- **LLM**: Language model used for this task
- **Start/End Time**: When the task began and completed
- **Execution Time**: Duration of this specific task
- **Task Description**: What the agent was instructed to do
- **Expected Output**: What output format was requested
- **Input**: Any input provided to this task from previous tasks
- **Output**: The actual result produced by the agent
## Using Traces for Debugging
Traces are invaluable for troubleshooting issues with your crews:
<Steps>
<Step title="Identify Failure Points">
When a crew execution doesn't produce the expected results, examine the trace to find where things went wrong. Look for:
- Failed tasks
- Unexpected agent decisions
- Tool usage errors
- Misinterpreted instructions
<Frame>
![Failure Points](/images/enterprise/failure.png)
</Frame>
</Step>
<Step title="Optimize Performance">
Use execution metrics to identify performance bottlenecks:
- Tasks that took longer than expected
- Excessive token usage
- Redundant tool operations
- Unnecessary API calls
</Step>
<Step title="Improve Cost Efficiency">
Analyze token usage and cost estimates to optimize your crew's efficiency:
- Consider using smaller models for simpler tasks
- Refine prompts to be more concise
- Cache frequently accessed information
- Structure tasks to minimize redundant operations
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with trace analysis or any other CrewAI Enterprise features.
</Card>

View File

@@ -0,0 +1,82 @@
---
title: Webhook Streaming
description: "Using Webhook Streaming to stream events to your webhook"
icon: "webhook"
---
## Overview
Enterprise Event Streaming lets you receive real-time webhook updates about your crews and flows deployed to
CrewAI Enterprise, such as model calls, tool usage, and flow steps.
## Usage
When using the Kickoff API, include a `webhooks` object in your request, for example:
```json
{
"inputs": {"foo": "bar"},
"webhooks": {
"events": ["crew_kickoff_started", "llm_call_started"],
"url": "https://your.endpoint/webhook",
"realtime": false,
"authentication": {
"strategy": "bearer",
"token": "my-secret-token"
}
}
}
```
If `realtime` is set to `true`, each event is delivered individually and immediately, at the cost of crew/flow performance.
## Webhook Format
Each webhook sends a list of events:
```json
{
"events": [
{
"id": "event-id",
"execution_id": "crew-run-id",
"timestamp": "2025-02-16T10:58:44.965Z",
"type": "llm_call_started",
"data": {
"model": "gpt-4",
"messages": [
{"role": "system", "content": "You are an assistant."},
{"role": "user", "content": "Summarize this article."}
]
}
}
]
}
```
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) on GitHub.
As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
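Since a single delivery can carry several events, a receiver that needs ordering can sort each batch by `timestamp` before processing it. A minimal sketch (the handler name and the processing step are placeholders):
```python
from datetime import datetime

def handle_webhook(payload: dict) -> None:
    """Process one webhook delivery, restoring event order by timestamp."""
    events = payload.get("events", [])
    # Parse the ISO-8601 timestamps so events sort chronologically.
    events.sort(key=lambda e: datetime.fromisoformat(e["timestamp"].replace("Z", "+00:00")))
    for event in events:
        print(event["type"], event["timestamp"])  # replace with real handling
```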
## Supported Events
CrewAI supports both system events and custom events in Enterprise Event Streaming. These events are sent to your configured webhook endpoint during crew and flow execution.
- `crew_kickoff_started`
- `crew_step_started`
- `crew_step_completed`
- `crew_execution_completed`
- `llm_call_started`
- `llm_call_completed`
- `tool_usage_started`
- `tool_usage_completed`
- `crew_test_failed`
- *...and others*
Event names match the internal event bus. See [GitHub source](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) for the full list.
You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with webhook integration or troubleshooting.
</Card>

View File

@@ -0,0 +1,43 @@
---
title: "Build Crew"
description: "A Crew is a group of agents that work together to complete a task."
icon: "people-arrows"
---
<Tip>
[CrewAI Enterprise](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments.
</Tip>
## Getting Started
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/d1Yp8eeknDk?si=tIxnTRI5UlyCp3z_"
title="Building Crews with CrewAI CLI"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
## Support and Resources
For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>

View File

@@ -0,0 +1,216 @@
---
title: "Deploy Crew"
description: "Deploy your local CrewAI project to the Enterprise platform"
icon: "cloud-arrow-up"
---
## Option 1: CLI Deployment
<Tip>
This video tutorial walks you through the process of deploying your locally developed CrewAI project to the CrewAI Enterprise platform,
transforming it into a production-ready API endpoint.
</Tip>
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="Deploying a Crew to CrewAI Enterprise"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
## Prerequisites
Before starting the deployment process, make sure you have:
- A CrewAI project built locally ([follow our quickstart guide](/quickstart) if you haven't created one yet)
- Your code pushed to a GitHub repository
- The latest version of the CrewAI CLI installed (`uv tool install crewai`)
<Note>
For a quick reference project, you can clone our example repository at [github.com/tonykipkemboi/crewai-latest-ai-development](https://github.com/tonykipkemboi/crewai-latest-ai-development).
</Note>
### Step 1: Authenticate with the Enterprise Platform
First, you need to authenticate your CLI with the CrewAI Enterprise platform:
```bash
# If you already have a CrewAI Enterprise account
crewai login
# If you're creating a new account
crewai signup
```
When you run either command, the CLI will:
1. Display a URL and a unique device code
2. Open your browser to the authentication page
3. Prompt you to confirm the device
4. Complete the authentication process
Upon successful authentication, you'll see a confirmation message in your terminal!
### Step 2: Create a Deployment
From your project directory, run:
```bash
crewai deploy create
```
This command will:
1. Detect your GitHub repository information
2. Identify environment variables in your local `.env` file
3. Securely transfer these variables to the Enterprise platform
4. Create a new deployment with a unique identifier
On successful creation, you'll see a message like:
```shell
Deployment created successfully!
Name: your_project_name
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
Current Status: Deploy Enqueued
```
### Step 3: Monitor Deployment Progress
Track the deployment status with:
```bash
crewai deploy status
```
For detailed logs of the build process:
```bash
crewai deploy logs
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
</Tip>
### Additional CLI Commands
The CrewAI CLI offers several commands to manage your deployments:
```bash
# List all your deployments
crewai deploy list
# Get the status of your deployment
crewai deploy status
# View the logs of your deployment
crewai deploy logs
# Push updates after code changes
crewai deploy push
# Remove a deployment
crewai deploy remove <deployment_id>
```
## Option 2: Deploy Directly via Web Interface
You can also deploy your crews directly through the CrewAI Enterprise web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
### Step 1: Pushing to GitHub
First, you need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/quickstart).
### Step 2: Connecting GitHub to CrewAI Enterprise
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
</Frame>
### Step 3: Select the Repository
After connecting your GitHub account, you'll be able to select which repository to deploy:
<Frame>
![Select Repository](/images/enterprise/select-repo.png)
</Frame>
### Step 4: Set Environment Variables
Before deploying, you'll need to set up your environment variables to connect to your LLM provider or other services:
1. You can add variables individually or in bulk
2. Enter your environment variables in `KEY=VALUE` format (one per line)
<Frame>
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
### Step 5: Deploy Your Crew
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)
</Frame>
Once deployment is complete, you'll see:
- Your crew's unique URL
- A Bearer token to protect your crew API
- A "Delete" button if you need to remove the deployment
### Interact with Your Deployed Crew
Once deployment is complete, you can access your crew through:
1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes:
- `/inputs`: Lists the required input parameters
- `/kickoff`: Initiates an execution with provided inputs
- `/status/{kickoff_id}`: Checks the execution status
2. **Web Interface**: Visit [app.crewai.com](https://app.crewai.com) to access:
- **Status tab**: View deployment information, API endpoint details, and authentication token
- **Run tab**: Visual representation of your crew's structure
- **Executions tab**: History of all executions
- **Metrics tab**: Performance analytics
- **Traces tab**: Detailed execution insights
### Trigger an Execution
From the Enterprise dashboard, you can:
1. Click on your crew's name to open its details
2. Select "Trigger Crew" from the management interface
3. Enter the required inputs in the modal that appears
4. Monitor progress as the execution moves through the pipeline
## Monitoring and Analytics
The Enterprise platform provides comprehensive observability features:
- **Execution Management**: Track active and completed runs
- **Traces**: Detailed breakdowns of each execution
- **Metrics**: Token usage, execution times, and costs
- **Timeline View**: Visual representation of task sequences
## Advanced Features
The Enterprise platform also offers:
- **Environment Variables Management**: Securely store and manage API keys
- **LLM Connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
</Card>

View File

@@ -0,0 +1,166 @@
---
title: "Enable Crew Studio"
description: "Enabling Crew Studio on CrewAI Enterprise"
icon: "comments"
---
<Tip>
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
</Tip>
## What is Crew Studio?
Crew Studio is an innovative way to create AI agent crews without writing code.
<Frame>
![Crew Studio Interface](/images/enterprise/crew-studio-interface.png)
</Frame>
With Crew Studio, you can:
- Chat with the Crew Assistant to describe your problem
- Automatically generate agents and tasks
- Select appropriate tools
- Configure necessary inputs
- Generate downloadable code for customization
- Deploy directly to the CrewAI Enterprise platform
## Configuration Steps
Before you can start using Crew Studio, you need to configure your LLM connections:
<Steps>
<Step title="Set Up LLM Connection">
Go to the **LLM Connections** tab in your CrewAI Enterprise dashboard and create a new LLM connection.
<Note>
Feel free to use any LLM provider you want that is supported by CrewAI.
</Note>
Configure your LLM connection:
- Enter a `Connection Name` (e.g., `OpenAI`)
- Select your model provider: `openai` or `azure`
- Select models you'd like to use in your Studio-generated Crews
- We recommend at least `gpt-4o`, `o1-mini`, and `gpt-4o-mini`
- Add your API key as an environment variable:
- For OpenAI: Add `OPENAI_API_KEY` with your API key
- For Azure OpenAI: Refer to [this article](https://blog.crewai.com/configuring-azure-openai-with-crewai-a-comprehensive-guide/) for configuration details
- Click `Add Connection` to save your configuration
<Frame>
![LLM Connection Configuration](/images/enterprise/llm-connection-config.png)
</Frame>
</Step>
<Step title="Verify Connection Added">
Once you complete the setup, you'll see your new connection added to the list of available connections.
<Frame>
![Connection Added](/images/enterprise/connection-added.png)
</Frame>
</Step>
<Step title="Configure LLM Defaults">
In the main menu, go to **Settings → Defaults** and configure the LLM Defaults settings:
- Select default models for agents and other components
- Set default configurations for Crew Studio
Click `Save Settings` to apply your changes.
<Frame>
![LLM Defaults Configuration](/images/enterprise/llm-defaults.png)
</Frame>
</Step>
</Steps>
## Using Crew Studio
Now that you've configured your LLM connection and default settings, you're ready to start using Crew Studio!
<Steps>
<Step title="Access Studio">
Navigate to the **Studio** section in your CrewAI Enterprise dashboard.
</Step>
<Step title="Start a Conversation">
Start a conversation with the Crew Assistant by describing the problem you want to solve:
```md
I need a crew that can research the latest AI developments and create a summary report.
```
The Crew Assistant will ask clarifying questions to better understand your requirements.
</Step>
<Step title="Review Generated Crew">
Review the generated crew configuration, including:
- Agents and their roles
- Tasks to be performed
- Required inputs
- Tools to be used
This is your opportunity to refine the configuration before proceeding.
</Step>
<Step title="Deploy or Download">
Once you're satisfied with the configuration, you can:
- Download the generated code for local customization
- Deploy the crew directly to the CrewAI Enterprise platform
- Modify the configuration and regenerate the crew
</Step>
<Step title="Test Your Crew">
After deployment, test your crew with sample inputs to ensure it performs as expected.
</Step>
</Steps>
<Tip>
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
</Tip>
## Example Workflow
Here's a typical workflow for creating a crew with Crew Studio:
<Steps>
<Step title="Describe Your Problem">
Start by describing your problem:
```md
I need a crew that can analyze financial news and provide investment recommendations
```
</Step>
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your requirements.
</Step>
<Step title="Review the Plan">
Review the generated crew plan, which might include:
- A Research Agent to gather financial news
- An Analysis Agent to interpret the data
- A Recommendations Agent to provide investment advice
</Step>
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
<Step title="Test and Refine">
Test your crew with sample inputs and refine as needed.
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Crew Studio or any other CrewAI Enterprise features.
</Card>

View File

@@ -0,0 +1,186 @@
---
title: "Kickoff Crew"
description: "Kickoff a Crew on CrewAI Enterprise"
icon: "flag-checkered"
---
# Kickoff a Crew on CrewAI Enterprise
Once you've deployed your crew to the CrewAI Enterprise platform, you can kickoff executions through the web interface or the API. This guide covers both approaches.
## Method 1: Using the Web Interface
### Step 1: Navigate to Your Deployed Crew
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the crew name from your projects list
3. You'll be taken to the crew's detail page
<Frame>
![Crew Dashboard](/images/enterprise/crew-dashboard.png)
</Frame>
### Step 2: Initiate Execution
From your crew's detail page, you have two options to kickoff an execution:
#### Option A: Quick Kickoff
1. Click the `Kickoff` link in the Test Endpoints section
2. Enter the required input parameters for your crew in the JSON editor
3. Click the `Send Request` button
<Frame>
![Kickoff Endpoint](/images/enterprise/kickoff-endpoint.png)
</Frame>
#### Option B: Using the Visual Interface
1. Click the `Run` tab in the crew detail page
2. Enter the required inputs in the form fields
3. Click the `Run Crew` button
<Frame>
![Run Crew](/images/enterprise/run-crew.png)
</Frame>
### Step 3: Monitor Execution Progress
After initiating the execution:
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
2. This ID is essential for tracking your execution
<Frame>
![Copy Task ID](/images/enterprise/copy-task-id.png)
</Frame>
### Step 4: Check Execution Status
To monitor the progress of your execution:
1. Click the "Status" endpoint in the Test Endpoints section
2. Paste the `kickoff_id` into the designated field
3. Click the "Get Status" button
<Frame>
![Get Status](/images/enterprise/get-status.png)
</Frame>
The status response will show:
- Current execution state (`running`, `completed`, etc.)
- Details about which tasks are in progress
- Any outputs produced so far
### Step 5: View Final Results
Once execution is complete:
1. The status will change to `completed`
2. You can view the full execution results and outputs
3. For a more detailed view, check the `Executions` tab in the crew detail page
## Method 2: Using the API
You can also kickoff crews programmatically using the CrewAI Enterprise REST API.
### Authentication
All API requests require a bearer token for authentication:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
Your bearer token is available on the Status tab of your crew's detail page.
### Checking Crew Health
Before executing operations, you can verify that your crew is running properly:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
A successful response will return a message indicating the crew is operational:
```
Healthy%
```
### Step 1: Retrieve Required Inputs
First, determine what inputs your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
The response will be a JSON object containing an array of required input parameters, for example:
```json
{"inputs":["topic","current_year"]}
```
This example shows that this particular crew requires two inputs: `topic` and `current_year`.
### Step 2: Kickoff Execution
Initiate execution by providing the required inputs:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{"inputs": {"topic": "AI Agent Frameworks", "current_year": "2025"}}' \
https://your-crew-url.crewai.com/kickoff
```
The response will include a `kickoff_id` that you'll need for tracking:
```json
{"kickoff_id":"abcd1234-5678-90ef-ghij-klmnopqrstuv"}
```
### Step 3: Check Execution Status
Monitor the execution progress using the kickoff_id:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
## Handling Executions
### Long-Running Executions
For executions that may take a long time:
1. Consider implementing a polling mechanism to check status periodically (see the sketch below)
2. Use webhooks (if available) for notification when execution completes
3. Implement error handling for potential timeouts
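A minimal polling sketch with exponential backoff and an overall timeout (the URL, token, and limits below are placeholders):
```python
import time
import requests

CREW_URL = "https://your-crew-url.crewai.com"
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

def wait_for_result(kickoff_id: str, timeout: float = 900.0) -> dict:
    """Poll /status/{kickoff_id} until the run completes, fails, or times out."""
    delay = 2.0
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = requests.get(f"{CREW_URL}/status/{kickoff_id}", headers=HEADERS, timeout=30)
        data = response.json()
        if data["status"] in ("completed", "error"):
            return data
        time.sleep(delay)
        delay = min(delay * 2, 60.0)  # back off, but never wait more than a minute
    raise TimeoutError(f"Execution {kickoff_id} did not finish within {timeout} seconds")
```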
### Execution Context
The execution context includes:
- Inputs provided at kickoff
- Environment variables configured during deployment
- Any state maintained between tasks
### Debugging Failed Executions
If an execution fails:
1. Check the "Executions" tab for detailed logs
2. Review the "Traces" tab for step-by-step execution details
3. Look for LLM responses and tool usage in the trace details
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
</Card>

View File

@@ -0,0 +1,89 @@
---
title: "Update Crew"
description: "Updating a Crew on CrewAI Enterprise"
icon: "pencil"
---
<Note>
After deploying your crew to CrewAI Enterprise, you may need to make updates to the code, security settings, or configuration.
This guide explains how to perform these common update operations.
</Note>
## Why Update Your Crew?
CrewAI won't pick up GitHub updates automatically unless you checked the `Auto-update` option when deploying your crew, so you'll need to trigger updates manually.
There are several reasons you might want to update your crew deployment:
- You want to update the code with the latest commit you pushed to GitHub
- You want to reset the bearer token for security reasons
- You want to update environment variables
## 1. Updating Your Crew Code for the Latest Commit
When you've pushed new commits to your GitHub repository and want to update your deployment:
1. Navigate to your crew in the CrewAI Enterprise platform
2. Click on the `Re-deploy` button on your crew details page
<Frame>
![Re-deploy Button](/images/enterprise/redeploy-button.png)
</Frame>
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
## 2. Resetting Bearer Token
If you need to generate a new bearer token (for example, if you suspect the current token might have been compromised):
1. Navigate to your crew in the CrewAI Enterprise platform
2. Find the `Bearer Token` section
3. Click the `Reset` button next to your current token
<Frame>
![Reset Token](/images/enterprise/reset-token.png)
</Frame>
<Warning>
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
</Warning>
## 3. Updating Environment Variables
To update the environment variables for your crew:
1. First access the deployment page by clicking on your crew's name
<Frame>
![Environment Variables Button](/images/enterprise/env-vars-button.png)
</Frame>
2. Locate the `Environment Variables` section (you will need to click the `Settings` icon to access it)
3. Edit the existing variables or add new ones in the fields provided
4. Click the `Update` button next to each variable you modify
<Frame>
![Update Environment Variables](/images/enterprise/update-env-vars.png)
</Frame>
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
<Note>
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
</Note>
## After Updating
After performing any update:
1. The system will rebuild and redeploy your crew
2. You can monitor the deployment progress in real-time
3. Once complete, test your crew to ensure the changes are working as expected
<Tip>
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
</Tip>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
</Card>

View File

@@ -0,0 +1,319 @@
---
title: "Trigger Deployed Crew API"
description: "Using your deployed crew's API on CrewAI Enterprise"
icon: "arrow-up-right-from-square"
---
Once you have deployed your crew to CrewAI Enterprise, it automatically becomes available as a REST API. This guide explains how to interact with your crew programmatically.
## API Basics
Your deployed crew exposes several endpoints that allow you to:
1. Discover required inputs
2. Start crew executions
3. Monitor execution status
4. Receive results
### Authentication
All API requests require a bearer token for authentication, which is generated when you deploy your crew:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com/...
```
<Tip>
You can find your bearer token in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
</Tip>
<Frame>
![Bearer Token](/images/enterprise/bearer-token.png)
</Frame>
## Available Endpoints
Your crew API provides three main endpoints:
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/inputs` | GET | Lists all required inputs for crew execution |
| `/kickoff` | POST | Starts a crew execution with provided inputs |
| `/status/{kickoff_id}` | GET | Retrieves the status and results of an execution |
## GET /inputs
The inputs endpoint allows you to discover what parameters your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
### Response
```json
{
"inputs": ["budget", "interests", "duration", "age"]
}
```
This response indicates that your crew expects four input parameters: `budget`, `interests`, `duration`, and `age`.
## POST /kickoff
The kickoff endpoint starts a new crew execution:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{
"inputs": {
"budget": "1000 USD",
"interests": "games, tech, ai, relaxing hikes, amazing food",
"duration": "7 days",
"age": "35"
}
}' \
https://your-crew-url.crewai.com/kickoff
```
### Request Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `inputs` | Object | Yes | Key-value pairs of all required inputs |
| `meta` | Object | No | Additional metadata to pass to the crew |
| `taskWebhookUrl` | String | No | Callback URL called after each task completes |
| `stepWebhookUrl` | String | No | Callback URL called after each agent thought or action |
| `crewWebhookUrl` | String | No | Callback URL called when the crew finishes |
### Example with Webhooks
```json
{
"inputs": {
"budget": "1000 USD",
"interests": "games, tech, ai, relaxing hikes, amazing food",
"duration": "7 days",
"age": "35"
},
"meta": {
"requestId": "user-request-12345",
"source": "mobile-app"
},
"taskWebhookUrl": "https://your-server.com/webhooks/task",
"stepWebhookUrl": "https://your-server.com/webhooks/step",
"crewWebhookUrl": "https://your-server.com/webhooks/crew"
}
```
### Response
```json
{
"kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv"
}
```
The `kickoff_id` is used to track and retrieve the execution results.
## GET /status/{kickoff_id}
The status endpoint allows you to check the progress and results of a crew execution:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
### Response Structure
The response structure will vary depending on the execution state:
#### In Progress
```json
{
"status": "running",
"current_task": "research_task",
"progress": {
"completed_tasks": 0,
"total_tasks": 2
}
}
```
#### Completed
```json
{
"status": "completed",
"result": {
"output": "Comprehensive travel itinerary...",
"tasks": [
{
"task_id": "research_task",
"output": "Research findings...",
"agent": "Researcher",
"execution_time": 45.2
},
{
"task_id": "planning_task",
"output": "7-day itinerary plan...",
"agent": "Trip Planner",
"execution_time": 62.8
}
]
},
"execution_time": 108.5
}
```
## Webhook Integration
When you provide webhook URLs in your kickoff request, the system will make POST requests to those URLs at specific points in the execution:
### taskWebhookUrl
Called when each task completes:
```json
{
"kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
"task_id": "research_task",
"status": "completed",
"output": "Research findings...",
"agent": "Researcher",
"execution_time": 45.2
}
```
### stepWebhookUrl
Called after each agent thought or action:
```json
{
"kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
"task_id": "research_task",
"agent": "Researcher",
"step_type": "thought",
"content": "I should first search for popular destinations that match these interests..."
}
```
### crewWebhookUrl
Called when the entire crew execution completes:
```json
{
"kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
"status": "completed",
"result": {
"output": "Comprehensive travel itinerary...",
"tasks": [
{
"task_id": "research_task",
"output": "Research findings...",
"agent": "Researcher",
"execution_time": 45.2
},
{
"task_id": "planning_task",
"output": "7-day itinerary plan...",
"agent": "Trip Planner",
"execution_time": 62.8
}
]
},
"execution_time": 108.5,
"meta": {
"requestId": "user-request-12345",
"source": "mobile-app"
}
}
```
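As a rough sketch, a webhook receiver is simply an HTTP endpoint that accepts these JSON POSTs. The example below uses Flask; the framework choice, route path, and port are illustrative assumptions, not part of the CrewAI API:
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhooks/crew", methods=["POST"])
def crew_webhook():
    payload = request.get_json()
    # kickoff_id ties this callback back to the original /kickoff request
    print(f"Crew {payload['kickoff_id']} finished with status {payload['status']}")
    return jsonify({"received": True}), 200

if __name__ == "__main__":
    # The endpoint must be reachable from the internet (e.g. via a tunnel) for callbacks to arrive
    app.run(port=8000)
```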
## Best Practices
### Handling Long-Running Executions
Crew executions can take anywhere from seconds to minutes depending on their complexity. Consider these approaches:
1. **Webhooks (Recommended)**: Set up webhook endpoints to receive notifications when the execution completes
2. **Polling**: Implement a polling mechanism with exponential backoff (see the sketch below)
3. **Client-Side Timeout**: Set appropriate timeouts for your API requests
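A minimal polling sketch with exponential backoff; the retry limits, delay cap, and the placeholder URL, token, and `kickoff_id` are illustrative assumptions:
```python
import time

import requests

CREW_URL = "https://your-crew-url.crewai.com"
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}
kickoff_id = "abcd1234-5678-90ef-ghij-klmnopqrstuv"  # returned by POST /kickoff

delay, max_delay, max_attempts = 2, 60, 20  # illustrative limits
for attempt in range(max_attempts):
    data = requests.get(f"{CREW_URL}/status/{kickoff_id}", headers=HEADERS).json()
    if data["status"] in ("completed", "error"):
        print(data)
        break
    time.sleep(delay)
    delay = min(delay * 2, max_delay)  # double the wait, capped at max_delay
```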
### Error Handling
The API may return various error codes:
| Code | Description | Recommended Action |
|------|-------------|-------------------|
| 401 | Unauthorized | Check your bearer token |
| 404 | Not Found | Verify your crew URL and kickoff_id |
| 422 | Validation Error | Ensure all required inputs are provided |
| 500 | Server Error | Contact support with the error details |
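A minimal sketch of acting on these codes from a client, assuming the `requests` library and the same placeholder URL and token used elsewhere on this page:
```python
import requests

CREW_URL = "https://your-crew-url.crewai.com"
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN", "Content-Type": "application/json"}
payload = {"inputs": {"budget": "1000 USD", "interests": "tech", "duration": "7 days", "age": "35"}}

response = requests.post(f"{CREW_URL}/kickoff", headers=HEADERS, json=payload)
if response.status_code == 401:
    raise SystemExit("Unauthorized: check the bearer token from the Status tab")
elif response.status_code == 422:
    # Compare the payload against GET /inputs to find the missing or invalid field
    raise SystemExit(f"Validation error: {response.text}")
response.raise_for_status()  # surfaces 404, 500, and other unexpected codes
```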
### Sample Code
Here's a complete Python example for interacting with your crew API:
```python
import requests
import time
# Configuration
CREW_URL = "https://your-crew-url.crewai.com"
BEARER_TOKEN = "your-crew-token"
HEADERS = {
"Authorization": f"Bearer {BEARER_TOKEN}",
"Content-Type": "application/json"
}
# 1. Get required inputs
response = requests.get(f"{CREW_URL}/inputs", headers=HEADERS)
required_inputs = response.json()["inputs"]
print(f"Required inputs: {required_inputs}")
# 2. Start crew execution
payload = {
"inputs": {
"budget": "1000 USD",
"interests": "games, tech, ai, relaxing hikes, amazing food",
"duration": "7 days",
"age": "35"
}
}
response = requests.post(f"{CREW_URL}/kickoff", headers=HEADERS, json=payload)
kickoff_id = response.json()["kickoff_id"]
print(f"Execution started with ID: {kickoff_id}")
# 3. Poll for results
MAX_RETRIES = 30
POLL_INTERVAL = 10 # seconds
for i in range(MAX_RETRIES):
print(f"Checking status (attempt {i+1}/{MAX_RETRIES})...")
response = requests.get(f"{CREW_URL}/status/{kickoff_id}", headers=HEADERS)
data = response.json()
if data["status"] == "completed":
print("Execution completed!")
print(f"Result: {data['result']['output']}")
break
elif data["status"] == "error":
print(f"Execution failed: {data.get('error', 'Unknown error')}")
break
else:
print(f"Status: {data['status']}, waiting {POLL_INTERVAL} seconds...")
time.sleep(POLL_INTERVAL)
```
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>

View File

@@ -0,0 +1,67 @@
---
title: "CrewAI Enterprise"
description: "Deploy, monitor, and scale your AI agent workflows"
icon: "globe"
---
## Introduction
CrewAI Enterprise provides a platform for deploying, monitoring, and scaling your crews and agents in a production environment.
CrewAI Enterprise extends the power of the open-source framework with features designed for production deployments, collaboration, and scalability. Deploy your crews to a managed infrastructure and monitor their execution in real-time.
## Key Features
<CardGroup cols={2}>
<Card title="Crew Deployments" icon="rocket">
Deploy your crews to a managed infrastructure with a few clicks
</Card>
<Card title="API Access" icon="code">
Access your deployed crews via REST API for integration with existing systems
</Card>
<Card title="Observability" icon="chart-line">
Monitor your crews with detailed execution traces and logs
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and install tools to enhance your crews' capabilities
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events and updates to your systems
</Card>
<Card title="Crew Studio" icon="paintbrush">
Create and customize crews using a no-code/low-code interface
</Card>
</CardGroup>
## Deployment Options
<CardGroup cols={3}>
<Card title="GitHub Integration" icon="github">
Connect directly to your GitHub repositories to deploy code
</Card>
<Card title="Crew Studio" icon="palette">
Deploy crews created through the no-code Crew Studio interface
</Card>
<Card title="CLI Deployment" icon="terminal">
Use the CrewAI CLI for more advanced deployment workflows
</Card>
</CardGroup>
## Getting Started
<Steps>
<Step title="Sign up for an account">
Create your account at [app.crewai.com](https://app.crewai.com)
</Step>
<Step title="Create your first crew">
Use code or Crew Studio to create your crew
</Step>
<Step title="Deploy your crew">
Deploy your crew to the Enterprise platform
</Step>
<Step title="Access your crew">
Integrate with your crew via the generated API endpoints
</Step>
</Steps>
For detailed instructions, check out our [deployment guide](/enterprise/guides/deploy-crew) or click the button below to get started.

View File

@@ -0,0 +1,181 @@
---
title: FAQs
description: "Frequently asked questions about CrewAI Enterprise"
icon: "code"
---
<AccordionGroup>
<Accordion title="How is task execution handled in the hierarchical process?">
In the hierarchical process, a manager agent is automatically created and coordinates the workflow, delegating tasks and validating outcomes for
streamlined and effective execution. The manager agent utilizes tools to facilitate task delegation and execution by agents under the manager's guidance.
The manager LLM is crucial for the hierarchical process and must be set up correctly for proper function.
</Accordion>
<Accordion title="Where can I get the latest CrewAI documentation?">
The most up-to-date documentation for CrewAI is available on our official documentation website: https://docs.crewai.com/
<Card href="https://docs.crewai.com/" icon="books">CrewAI Docs</Card>
</Accordion>
<Accordion title="What are the key differences between Hierarchical and Sequential Processes in CrewAI?">
#### Hierarchical Process:
- Tasks are delegated and executed based on a structured chain of command.
- A manager language model (`manager_llm`) must be specified for the manager agent.
- The manager agent oversees task execution, planning, delegation, and validation.
- Tasks are not pre-assigned; the manager allocates them to agents based on their capabilities.
#### Sequential Process:
- Tasks are executed one after another, ensuring an orderly progression.
- The output of one task serves as context for the next.
- Task execution follows the predefined order in the task list.
#### Which Process is Better Suited for Complex Projects?
The hierarchical process is better suited for complex projects because it allows for the following (see the configuration sketch after this list):
- **Dynamic task allocation and delegation**: Manager agent can assign tasks based on agent capabilities, allowing for efficient resource utilization.
- **Structured validation and oversight**: Manager agent reviews task outputs and ensures task completion, increasing reliability and accuracy.
- **Complex task management**: Assigning tools at the agent level allows for precise control over tool availability, facilitating the execution of intricate tasks.
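A minimal configuration sketch (the agents, task, and model name are placeholders, not prescriptions):
```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Find facts", backstory="A thorough analyst.")
writer = Agent(role="Writer", goal="Write summaries", backstory="A concise writer.")
report = Task(description="Summarize this week's findings", expected_output="A short report")

crew = Crew(
    agents=[researcher, writer],
    tasks=[report],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # a manager LLM (or manager agent) is required for the hierarchical process
)
```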
</Accordion>
<Accordion title="What are the benefits of using memory in the CrewAI framework?">
- **Adaptive Learning**: Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- **Enhanced Personalization**: Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- **Improved Problem Solving**: Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
</Accordion>
<Accordion title="What is the purpose of setting a maximum RPM limit for an agent?">
Setting a maximum RPM limit for an agent prevents the agent from making too many requests to external services, which can help to avoid rate limits and improve performance.
</Accordion>
<Accordion title="What role does human input play in the execution of tasks within a CrewAI crew?">
It allows agents to request additional information or clarification when necessary.
This feature is crucial in complex decision-making processes or when agents require more details to complete a task effectively.
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer.
This input can provide extra context, clarify ambiguities, or validate the agent's output.
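For illustration, enabling it is a single flag on the task (the agent and task below are placeholders):
```python
from crewai import Agent, Task

support_agent = Agent(
    role="Support Agent",
    goal="Reply to customers helpfully",
    backstory="A friendly support specialist.",
)

review_task = Task(
    description="Draft a reply to the customer and confirm the tone before sending.",
    expected_output="A short, human-approved reply.",
    agent=support_agent,
    human_input=True,  # the agent pauses and asks the user for input before its final answer
)
```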
</Accordion>
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
CrewAI provides a range of advanced customization options to tailor and enhance agent behavior and capabilities (a short example follows this list):
- **Language Model Customization**: Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities.
- **Performance and Debugging Settings**: Adjust an agent's performance and monitor its operations for efficient task execution.
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`).
- **Maximum Iterations for Task Execution**: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions.
- **Delegation and Autonomy**: Control an agent's ability to delegate or ask questions, tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is set to True, enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the CrewAI ecosystem. If needed, delegation can be disabled to suit specific operational requirements.
- **Human Input in Agent Execution**: Human input is critical in several agent execution scenarios, allowing agents to request additional information or clarification when necessary. This feature is especially useful in complex decision-making processes or when agents require more details to complete a task effectively.
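A sketch showing several of these options on one agent (the values are illustrative, not recommendations):
```python
from crewai import Agent

analyst = Agent(
    role="Data Analyst",
    goal="Summarize weekly metrics",
    backstory="An experienced analyst.",
    verbose=True,            # detailed logging for debugging and optimization
    max_rpm=10,              # cap requests per minute
    max_iter=15,             # cap iterations for a single task
    allow_delegation=False,  # disable delegation for this agent
)
```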
</Accordion>
<Accordion title="In what scenarios is human input particularly useful in agent execution?">
Human input is particularly useful in agent execution when:
- **Agents require additional information or clarification**: When agents encounter ambiguity or incomplete data, human input can provide the necessary context to complete the task effectively.
- **Agents need to make complex or sensitive decisions**: Human input can assist agents in ethical or nuanced decision-making, ensuring responsible and informed outcomes.
- **Oversight and validation of agent output**: Human input can help validate the results generated by agents, ensuring accuracy and preventing any misinterpretation or errors.
- **Customizing agent behavior**: Human input can provide feedback on agent responses, allowing users to refine the agent's behavior and responses over time.
- **Identifying and resolving errors or limitations**: Human input can help identify and address any errors or limitations in the agent's capabilities, enabling continuous improvement and optimization.
</Accordion>
<Accordion title="What are the different types of memory that are available in crewAI?">
The different types of memory available in CrewAI are:
- `short-term memory`
- `long-term memory`
- `entity memory`
- `contextual memory`
Learn more about the different types of memory here:
<Card href="https://docs.crewai.com/concepts/memory" icon="brain">CrewAI Memory</Card>
</Accordion>
<Accordion title="How can I create custom tools for my CrewAI agents?">
You can create custom tools by subclassing the `BaseTool` class provided by CrewAI or by using the tool decorator. Subclassing involves defining a new class that inherits from `BaseTool`, specifying the name, description, and the `_run` method for the operational logic. The tool decorator allows you to create a `Tool` object directly with the required attributes and the functional logic.
Click here for more details:
<Card href="https://docs.crewai.com/how-to/create-custom-tools" icon="code">CrewAI Tools</Card>
</Accordion>
<Accordion title="How do I use Output Pydantic in a Task?">
To use Output Pydantic in a task, you need to define the expected output of the task as a Pydantic model. Here's an example:
<Steps>
<Step title="Define a Pydantic model">
First, you need to define a Pydantic model. For instance, let's create a simple model for a user:
```python
from pydantic import BaseModel
class User(BaseModel):
name: str
age: int
```
</Step>
<Step title="Then, when creating a task, specify the expected output as this Pydantic model:">
```python
from crewai import Task, Crew, Agent
# Import the User model
from my_models import User
# Create a task with Output Pydantic
task = Task(
description="Create a user with the provided name and age",
expected_output="A User object with the provided name and age",
output_pydantic=User, # This is the Pydantic model
agent=agent,
tools=[tool1, tool2]
)
```
</Step>
<Step title="In your agent, make sure to set the output_pydantic attribute to the Pydantic model you're using:">
```python
from crewai import Agent
# Import the User model
from my_models import User
# Create an agent with Output Pydantic
agent = Agent(
role='User Creator',
goal='Create users',
backstory='I am skilled in creating user accounts',
tools=[tool1, tool2],
output_pydantic=User
)
```
</Step>
<Step title="When executing the crew, the output of the task will be a User object:">
```python
from crewai import Crew
# Create a crew with the agent and task
crew = Crew(agents=[agent], tasks=[task])
# Kick off the crew
result = crew.kickoff()
# The structured output of the task is available as a User instance
print(result.pydantic)
```
</Step>
</Steps>
Here's a tutorial on how to consistently get structured outputs from your agents:
<Frame>
<iframe
height="400"
width="100%"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen></iframe>
</Frame>
</Accordion>
</AccordionGroup>

View File

@@ -1,9 +1,13 @@
# Custom LLM Implementations
---
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
---
## Custom LLM Implementations
CrewAI now supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to create your own LLM implementations that don't rely on litellm's authentication mechanism.
## Using Custom LLM Implementations
To create a custom LLM implementation, you need to:
1. Inherit from the `BaseLLM` abstract base class

View File

@@ -1,5 +1,5 @@
---
title: Create Your Own Manager Agent
title: Custom Manager Agent
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
icon: user-shield
---

View File

@@ -20,10 +20,8 @@ Here's an example of how to replay from a task:
To use the replay feature, follow these steps:
<Steps>
<Step title="Open your terminal or command prompt.">
</Step>
<Step title="Navigate to the directory where your CrewAI project is located.">
</Step>
<Step title="Open your terminal or command prompt."></Step>
<Step title="Navigate to the directory where your CrewAI project is located."></Step>
<Step title="Run the following commands:">
To view the latest kickoff task_ids use:

Binary image files added (not shown); the only file named in this view is docs/images/v01140.png (2.4 MiB).
View File

@@ -336,9 +336,22 @@ email_summarizer_task:
- research_task
```
## Deploying Your Project
## Deploying Your Crew
The easiest way to deploy your crew is through [CrewAI Enterprise](http://app.crewai.com), where you can deploy your crew in a few clicks.
The easiest way to deploy your crew to production is through [CrewAI Enterprise](http://app.crewai.com).
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
<CardGroup cols={2}>
<Card

View File

@@ -8,11 +8,29 @@ icon: code-simple
## Description
The `CodeInterpreterTool` enables CrewAI agents to execute Python 3 code that they generate autonomously. The code is run in a secure, isolated Docker container, ensuring safety regardless of the content. This functionality is particularly valuable as it allows agents to create code, execute it, obtain the results, and utilize that information to inform subsequent decisions and actions.
The `CodeInterpreterTool` enables CrewAI agents to execute Python 3 code that they generate autonomously. This functionality is particularly valuable as it allows agents to create code, execute it, obtain the results, and utilize that information to inform subsequent decisions and actions.
## Requirements
There are several ways to use this tool:
### Docker Container (Recommended)
This is the primary option. The code runs in a secure, isolated Docker container, ensuring safety regardless of its content.
Make sure Docker is installed and running on your system. If you don't have it, you can install it from [here](https://docs.docker.com/get-docker/).
### Sandbox environment
If Docker is unavailable — either not installed or not accessible for any reason — the code will be executed in a restricted Python environment called a sandbox.
This environment is very limited, with strict restrictions on many modules and built-in functions.
### Unsafe Execution
**NOT RECOMMENDED FOR PRODUCTION**
This mode allows execution of any Python code, including dangerous calls to modules such as `sys` and `os`. [Check out](/tools/codeinterpretertool#enabling-unsafe-mode) how to enable this mode.
## Logging
The `CodeInterpreterTool` logs the selected execution strategy to STDOUT.
- Docker must be installed and running on your system. If you don't have it, you can install it from [here](https://docs.docker.com/get-docker/).
## Installation
@@ -74,18 +92,32 @@ programmer_agent = Agent(
)
```
### Enabling `unsafe_mode`
```python Code
from crewai_tools import CodeInterpreterTool
code = """
import os
os.system("ls -la")
"""
CodeInterpreterTool(unsafe_mode=True).run(code=code)
```
## Parameters
The `CodeInterpreterTool` accepts the following parameters during initialization:
- **user_dockerfile_path**: Optional. Path to a custom Dockerfile to use for the code interpreter container.
- **user_docker_base_url**: Optional. URL to the Docker daemon to use for running the container.
- **unsafe_mode**: Optional. Whether to run code directly on the host machine instead of in a Docker container. Default is `False`. Use with caution!
- **unsafe_mode**: Optional. Whether to run code directly on the host machine instead of in a Docker container or sandbox. Default is `False`. Use with caution!
- **default_image_tag**: Optional. Default Docker image tag. Default is `code-interpreter:latest`
When using the tool with an agent, the agent will need to provide:
- **code**: Required. The Python 3 code to execute.
- **libraries_used**: Required. A list of libraries used in the code that need to be installed.
- **libraries_used**: Optional. A list of libraries used in the code that need to be installed. Default is `[]`
## Agent Integration Example
@@ -152,7 +184,7 @@ class CodeInterpreterTool(BaseTool):
if self.unsafe_mode:
return self.run_code_unsafe(code, libraries_used)
else:
return self.run_code_in_docker(code, libraries_used)
return self.run_code_safety(code, libraries_used)
```
The tool performs the following steps:
@@ -168,8 +200,9 @@ The tool performs the following steps:
By default, the `CodeInterpreterTool` runs code in an isolated Docker container, which provides a layer of security. However, there are still some security considerations to keep in mind:
1. The Docker container has access to the current working directory, so sensitive files could potentially be accessed.
2. The `unsafe_mode` parameter allows code to be executed directly on the host machine, which should only be used in trusted environments.
3. Be cautious when allowing agents to install arbitrary libraries, as they could potentially include malicious code.
2. If the Docker container is unavailable and the code needs to run safely, it will be executed in a sandbox environment. For security reasons, installing arbitrary libraries is not allowed in the sandbox.
3. The `unsafe_mode` parameter allows code to be executed directly on the host machine, which should only be used in trusted environments.
4. Be cautious when allowing agents to install arbitrary libraries, as they could potentially include malicious code.
## Conclusion

View File

@@ -1,10 +1,10 @@
---
title: Using LangChain Tools
description: Learn how to integrate LangChain tools with CrewAI agents to enhance search-based queries and more.
title: LangChain Tool
description: The `LangChainTool` is a wrapper for LangChain tools and query engines.
icon: link
---
## Using LangChain Tools
## `LangChainTool`
<Info>
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with CrewAI.

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.114.0"
version = "0.117.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.60.2",
"litellm==1.67.2",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools~=0.40.1"]
tools = ["crewai-tools~=0.42.2"]
embeddings = [
"tiktoken~=0.7.0"
]
@@ -60,7 +60,7 @@ pandas = [
openpyxl = [
"openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.29"]
mem0 = ["mem0ai>=0.1.94"]
docling = [
"docling>=2.12.0",
]

View File

@@ -17,7 +17,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.114.0"
__version__ = "0.117.1"
__all__ = [
"Agent",
"Crew",

View File

@@ -114,6 +114,18 @@ class Agent(BaseAgent):
default=None,
description="Embedder configuration for the agent.",
)
agent_knowledge_context: Optional[str] = Field(
default=None,
description="Knowledge context for the agent.",
)
trained_data_file: Optional[str] = Field(
default=TRAINED_AGENTS_DATA_FILE,
description="Path to the trained data file to use for task prompts.",
)
crew_knowledge_context: Optional[str] = Field(
default=None,
description="Knowledge context for the crew.",
)
@model_validator(mode="after")
def post_init_setup(self):
@@ -177,7 +189,7 @@ class Agent(BaseAgent):
self,
task: Task,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
tools: Optional[List[BaseTool]] = None
) -> str:
"""Execute a task with the agent.
@@ -188,6 +200,11 @@ class Agent(BaseAgent):
Returns:
Output of the agent
Raises:
TimeoutError: If execution exceeds the maximum execution time.
ValueError: If the max execution time is not a positive integer.
RuntimeError: If the agent execution fails for other reasons.
"""
if self.tools_handler:
self.tools_handler.last_used_tool = {} # type: ignore # Incompatible types in assignment (expression has type "dict[Never, Never]", variable has type "ToolCalling")
@@ -229,22 +246,30 @@ class Agent(BaseAgent):
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
knowledge_config = (
self.knowledge_config.model_dump() if self.knowledge_config else {}
)
if self.knowledge:
agent_knowledge_snippets = self.knowledge.query([task.prompt()])
agent_knowledge_snippets = self.knowledge.query(
[task.prompt()], **knowledge_config
)
if agent_knowledge_snippets:
agent_knowledge_context = extract_knowledge_context(
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if agent_knowledge_context:
task_prompt += agent_knowledge_context
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context
if self.crew:
knowledge_snippets = self.crew.query_knowledge([task.prompt()])
knowledge_snippets = self.crew.query_knowledge(
[task.prompt()], **knowledge_config
)
if knowledge_snippets:
crew_knowledge_context = extract_knowledge_context(knowledge_snippets)
if crew_knowledge_context:
task_prompt += crew_knowledge_context
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context
tools = tools or self.tools or []
self.create_agent_executor(tools=tools, task=task)
@@ -264,14 +289,26 @@ class Agent(BaseAgent):
task=task,
),
)
result = self.agent_executor.invoke(
{
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)["output"]
# Determine execution method based on timeout setting
if self.max_execution_time is not None:
if not isinstance(self.max_execution_time, int) or self.max_execution_time <= 0:
raise ValueError("Max Execution time must be a positive integer greater than zero")
result = self._execute_with_timeout(task_prompt, task, self.max_execution_time)
else:
result = self._execute_without_timeout(task_prompt, task)
except TimeoutError as e:
# Propagate TimeoutError without retry
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
@@ -312,6 +349,66 @@ class Agent(BaseAgent):
)
return result
def _execute_with_timeout(
self,
task_prompt: str,
task: Task,
timeout: int
) -> str:
"""Execute a task with a timeout.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
timeout: Maximum execution time in seconds.
Returns:
The output of the agent.
Raises:
TimeoutError: If execution exceeds the timeout.
RuntimeError: If execution fails for other reasons.
"""
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
self._execute_without_timeout,
task_prompt=task_prompt,
task=task
)
try:
return future.result(timeout=timeout)
except concurrent.futures.TimeoutError:
future.cancel()
raise TimeoutError(f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task.")
except Exception as e:
future.cancel()
raise RuntimeError(f"Task execution failed: {str(e)}")
def _execute_without_timeout(
self,
task_prompt: str,
task: Task
) -> str:
"""Execute a task without a timeout.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
Returns:
The output of the agent.
"""
return self.agent_executor.invoke(
{
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)["output"]
def create_agent_executor(
self, tools: Optional[List[BaseTool]] = None, task=None
) -> None:
@@ -404,13 +501,24 @@ class Agent(BaseAgent):
return task_prompt
def _use_trained_data(self, task_prompt: str) -> str:
"""Use trained data for the agent task prompt to improve output."""
if data := CrewTrainingHandler(TRAINED_AGENTS_DATA_FILE).load():
"""
Use trained data from a specified file for the agent task prompt.
Uses the 'trained_data_file' attribute as the source of training instructions.
Args:
task_prompt: The original task prompt to enhance.
Returns:
Enhanced task prompt with training instructions if available.
"""
if self.trained_data_file and (data := CrewTrainingHandler(self.trained_data_file).load()):
if trained_data_output := data.get(self.role):
task_prompt += (
"\n\nYou MUST follow these instructions: \n - "
+ "\n - ".join(trained_data_output["suggestions"])
)
if "suggestions" in trained_data_output:
task_prompt += (
"\n\nYou MUST follow these instructions: \n - "
+ "\n - ".join(trained_data_output["suggestions"])
)
return task_prompt
def _render_text_description(self, tools: List[Any]) -> str:

View File

@@ -0,0 +1 @@
"""LangGraph adapter for crewAI."""

View File

@@ -0,0 +1 @@
"""OpenAI agent adapters for crewAI."""

View File

@@ -19,6 +19,7 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.security.security_config import SecurityConfig
from crewai.tools.base_tool import BaseTool, Tool
@@ -155,6 +156,10 @@ class BaseAgent(ABC, BaseModel):
adapted_agent: bool = Field(
default=False, description="Whether the agent is adapted"
)
knowledge_config: Optional[KnowledgeConfig] = Field(
default=None,
description="Knowledge configuration for the agent such as limits and threshold",
)
@model_validator(mode="before")
@classmethod

View File

@@ -201,9 +201,17 @@ def install(context):
@crewai.command()
def run():
@click.option(
"-f",
"--trained-data-file",
type=str,
default="trained_agents_data.pkl",
help="Path to a trained data file to use for agent task prompts",
)
def run(trained_data_file: str):
"""Run the Crew."""
run_crew()
click.echo(f"Running the Crew with agent training data file: {trained_data_file}")
run_crew(trained_data_file=trained_data_file)
@crewai.command()

View File

@@ -122,7 +122,16 @@ PROVIDERS = [
]
MODELS = {
"openai": ["gpt-4", "gpt-4o", "gpt-4o-mini", "o1-mini", "o1-preview"],
"openai": [
"gpt-4",
"gpt-4.1",
"gpt-4.1-mini-2025-04-14",
"gpt-4.1-nano-2025-04-14",
"gpt-4o",
"gpt-4o-mini",
"o1-mini",
"o1-preview",
],
"anthropic": [
"claude-3-5-sonnet-20240620",
"claude-3-sonnet-20240229",
@@ -132,8 +141,17 @@ MODELS = {
"gemini": [
"gemini/gemini-1.5-flash",
"gemini/gemini-1.5-pro",
"gemini/gemini-2.0-flash-lite-001",
"gemini/gemini-2.0-flash-001",
"gemini/gemini-2.0-flash-thinking-exp-01-21",
"gemini/gemini-2.5-flash-preview-04-17",
"gemini/gemini-2.5-pro-exp-03-25",
"gemini/gemini-gemma-2-9b-it",
"gemini/gemini-gemma-2-27b-it",
"gemini/gemma-3-1b-it",
"gemini/gemma-3-4b-it",
"gemini/gemma-3-12b-it",
"gemini/gemma-3-27b-it",
],
"nvidia_nim": [
"nvidia_nim/nvidia/mistral-nemo-minitron-8b-8k-instruct",

View File

@@ -1,3 +1,4 @@
import os
import subprocess
from enum import Enum
from typing import List, Optional
@@ -14,13 +15,16 @@ class CrewType(Enum):
FLOW = "flow"
def run_crew() -> None:
def run_crew(trained_data_file: Optional[str] = None) -> None:
"""
Run the crew or flow by running a command in the UV environment.
Starting from version 0.103.0, this command can be used to run both
standard crews and flows. For flows, it detects the type from pyproject.toml
and automatically runs the appropriate command.
Args:
trained_data_file: Optional path to a trained data file to use
"""
crewai_version = get_crewai_version()
min_required_version = "0.71.0"
@@ -44,17 +48,35 @@ def run_crew() -> None:
click.echo(f"Running the {'Flow' if is_flow else 'Crew'}")
# Execute the appropriate command
execute_command(crew_type)
execute_command(crew_type, trained_data_file)
def execute_command(crew_type: CrewType) -> None:
def execute_command(crew_type: CrewType, trained_data_file: Optional[str] = None) -> None:
"""
Execute the appropriate command based on crew type.
Args:
crew_type: The type of crew to run
trained_data_file: Optional path to a trained data file to use
"""
command = ["uv", "run", "kickoff" if crew_type == CrewType.FLOW else "run_crew"]
if trained_data_file and crew_type == CrewType.STANDARD:
if not trained_data_file.endswith('.pkl'):
click.secho(
f"Error: Trained data file '{trained_data_file}' must have a .pkl extension.",
fg="red",
)
return
if not os.path.exists(trained_data_file):
click.secho(
f"Error: Trained data file '{trained_data_file}' does not exist.",
fg="red",
)
return
command.extend(["--trained-data-file", trained_data_file])
try:
subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python
import argparse
import sys
import warnings
@@ -17,13 +18,23 @@ def run():
"""
Run the crew.
"""
parser = argparse.ArgumentParser(description="Run the crew")
parser.add_argument(
"--trained-data-file",
"-f",
type=str,
default="trained_agents_data.pkl",
help="Path to a trained data file to use for agent task prompts"
)
args, _ = parser.parse_known_args()
inputs = {
'topic': 'AI LLMs',
'current_year': str(datetime.now().year)
}
try:
{{crew_name}}().crew().kickoff(inputs=inputs)
{{crew_name}}().crew(trained_data_file=args.trained_data_file).kickoff(inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while running the crew: {e}")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.114.0,<1.0.0"
"crewai[tools]>=0.117.1,<1.0.0"
]
[project.scripts]

View File

@@ -0,0 +1 @@
"""Poem crew template."""

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.114.0,<1.0.0",
"crewai[tools]>=0.117.1,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.114.0"
"crewai[tools]>=0.117.1"
]
[tool.crewai]

View File

@@ -117,7 +117,9 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
published_handle = publish_response.json()["handle"]
console.print(
f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
f"Successfully published `{published_handle}` ({project_version}).\n\n"
+ "⚠️ Security checks are running in the background. Your tool will be available once these are complete.\n"
+ f"You can monitor the status or access your tool here:\nhttps://app.crewai.com/crewai_plus/tools/{published_handle}",
style="bold green",
)
@@ -153,8 +155,12 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
login_response_json = login_response.json()
settings = Settings()
settings.tool_repository_username = login_response_json["credential"]["username"]
settings.tool_repository_password = login_response_json["credential"]["password"]
settings.tool_repository_username = login_response_json["credential"][
"username"
]
settings.tool_repository_password = login_response_json["credential"][
"password"
]
settings.dump()
console.print(
@@ -179,7 +185,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
capture_output=False,
env=self._build_env_with_credentials(repository_handle),
text=True,
check=True
check=True,
)
if add_package_result.stderr:
@@ -204,7 +210,11 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings = Settings()
env = os.environ.copy()
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(
settings.tool_repository_username or ""
)
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(
settings.tool_repository_password or ""
)
return env

View File

@@ -122,6 +122,10 @@ class Crew(BaseModel):
tasks: List[Task] = Field(default_factory=list)
agents: List[BaseAgent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
trained_data_file: Optional[str] = Field(
default=None,
description="Path to the trained data file to use for agent task prompts.",
)
verbose: bool = Field(default=False)
memory: bool = Field(
default=False,
@@ -304,9 +308,7 @@ class Crew(BaseModel):
"""Initialize private memory attributes."""
self._external_memory = (
# External memory doesnt support a default value since it was designed to be managed entirely externally
self.external_memory.set_crew(self)
if self.external_memory
else None
self.external_memory.set_crew(self) if self.external_memory else None
)
self._long_term_memory = self.long_term_memory
@@ -1136,9 +1138,13 @@ class Crew(BaseModel):
result = self._execute_tasks(self.tasks, start_index, True)
return result
def query_knowledge(self, query: List[str]) -> Union[List[Dict[str, Any]], None]:
def query_knowledge(
self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
) -> Union[List[Dict[str, Any]], None]:
if self.knowledge:
return self.knowledge.query(query)
return self.knowledge.query(
query, results_limit=results_limit, score_threshold=score_threshold
)
return None
def fetch_inputs(self) -> Set[str]:
@@ -1194,7 +1200,12 @@ class Crew(BaseModel):
"manager_llm",
}
cloned_agents = [agent.copy() for agent in self.agents]
cloned_agents = []
for agent in self.agents:
cloned_agent = agent.copy()
if self.trained_data_file:
cloned_agent.trained_data_file = self.trained_data_file
cloned_agents.append(cloned_agent)
manager_agent = self.manager_agent.copy() if self.manager_agent else None
manager_llm = shallow_copy(self.manager_llm) if self.manager_llm else None
@@ -1220,9 +1231,13 @@ class Crew(BaseModel):
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
if self.short_term_memory:
copied_data["short_term_memory"] = self.short_term_memory.model_copy(deep=True)
copied_data["short_term_memory"] = self.short_term_memory.model_copy(
deep=True
)
if self.long_term_memory:
copied_data["long_term_memory"] = self.long_term_memory.model_copy(deep=True)
copied_data["long_term_memory"] = self.long_term_memory.model_copy(
deep=True
)
if self.entity_memory:
copied_data["entity_memory"] = self.entity_memory.model_copy(deep=True)
if self.external_memory:
@@ -1230,7 +1245,6 @@ class Crew(BaseModel):
if self.user_memory:
copied_data["user_memory"] = self.user_memory.model_copy(deep=True)
copied_data.pop("agents", None)
copied_data.pop("tasks", None)
@@ -1403,7 +1417,10 @@ class Crew(BaseModel):
"short": (getattr(self, "_short_term_memory", None), "short term"),
"entity": (getattr(self, "_entity_memory", None), "entity"),
"knowledge": (getattr(self, "knowledge", None), "knowledge"),
"kickoff_outputs": (getattr(self, "_task_output_handler", None), "task output"),
"kickoff_outputs": (
getattr(self, "_task_output_handler", None),
"task output",
),
"external": (getattr(self, "_external_memory", None), "external"),
}

View File

@@ -43,7 +43,9 @@ class Knowledge(BaseModel):
self.storage.initialize_knowledge_storage()
self._add_sources()
def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]:
def query(
self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
) -> List[Dict[str, Any]]:
"""
Query across all knowledge sources to find the most relevant information.
Returns the top_k most relevant chunks.
@@ -56,7 +58,8 @@ class Knowledge(BaseModel):
results = self.storage.search(
query,
limit,
limit=results_limit,
score_threshold=score_threshold,
)
return results

View File

@@ -0,0 +1,16 @@
from pydantic import BaseModel, Field
class KnowledgeConfig(BaseModel):
"""Configuration for knowledge retrieval.
Args:
results_limit (int): The number of relevant documents to return.
score_threshold (float): The minimum score for a document to be considered relevant.
"""
results_limit: int = Field(default=3, description="The number of results to return")
score_threshold: float = Field(
default=0.35,
description="The minimum score for a result to be considered relevant",
)

View File

@@ -4,7 +4,7 @@ import io
import logging
import os
import shutil
from typing import Any, Dict, List, Optional, Union, cast
from typing import Any, Dict, List, Optional, Union
import chromadb
import chromadb.errors

View File

@@ -0,0 +1 @@
"""Knowledge utilities for crewAI."""

View File

@@ -37,6 +37,7 @@ with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
import litellm
from litellm import Choices
from litellm.exceptions import ContextWindowExceededError
from litellm.litellm_core_utils.get_supported_openai_params import (
get_supported_openai_params,
)
@@ -64,7 +65,7 @@ class FilteredStream:
if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s
or "LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`"
or "LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()`"
in s
):
return 0
@@ -81,14 +82,26 @@ LLM_CONTEXT_WINDOW_SIZES = {
"gpt-4o": 128000,
"gpt-4o-mini": 128000,
"gpt-4-turbo": 128000,
"gpt-4.1": 1047576, # Based on official docs
"gpt-4.1-mini-2025-04-14": 1047576,
"gpt-4.1-nano-2025-04-14": 1047576,
"o1-preview": 128000,
"o1-mini": 128000,
"o3-mini": 200000, # Based on official o3-mini specifications
# gemini
"gemini-2.0-flash": 1048576,
"gemini-2.0-flash-thinking-exp-01-21": 32768,
"gemini-2.0-flash-lite-001": 1048576,
"gemini-2.0-flash-001": 1048576,
"gemini-2.5-flash-preview-04-17": 1048576,
"gemini-2.5-pro-exp-03-25": 1048576,
"gemini-1.5-pro": 2097152,
"gemini-1.5-flash": 1048576,
"gemini-1.5-flash-8b": 1048576,
"gemini/gemma-3-1b-it": 32000,
"gemini/gemma-3-4b-it": 128000,
"gemini/gemma-3-12b-it": 128000,
"gemini/gemma-3-27b-it": 128000,
# deepseek
"deepseek-chat": 128000,
# groq
@@ -585,6 +598,11 @@ class LLM(BaseLLM):
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL)
return full_response
except ContextWindowExceededError as e:
# Catch context window errors from litellm and convert them to our own exception type.
# This exception is handled by CrewAgentExecutor._invoke_loop() which can then
# decide whether to summarize the content or abort based on the respect_context_window flag.
raise LLMContextLengthExceededException(str(e))
except Exception as e:
logging.error(f"Error in streaming response: {str(e)}")
if full_response.strip():
@@ -699,7 +717,16 @@ class LLM(BaseLLM):
str: The response text
"""
# --- 1) Make the completion call
response = litellm.completion(**params)
try:
# Attempt to make the completion call, but catch context window errors
# and convert them to our own exception type for consistent handling
# across the codebase. This allows CrewAgentExecutor to handle context
# length issues appropriately.
response = litellm.completion(**params)
except ContextWindowExceededError as e:
# Convert litellm's context window error to our own exception type
# for consistent handling in the rest of the codebase
raise LLMContextLengthExceededException(str(e))
# --- 2) Extract response message and content
response_message = cast(Choices, cast(ModelResponse, response).choices)[
@@ -858,15 +885,17 @@ class LLM(BaseLLM):
params, callbacks, available_functions
)
except LLMContextLengthExceededException:
# Re-raise LLMContextLengthExceededException as it should be handled
# by the CrewAgentExecutor._invoke_loop method, which can then decide
# whether to summarize the content or abort based on the respect_context_window flag
raise
except Exception as e:
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(error=str(e)),
)
if not LLMContextLengthExceededException(
str(e)
)._is_context_limit_error(str(e)):
logging.error(f"LiteLLM call failed: {str(e)}")
logging.error(f"LiteLLM call failed: {str(e)}")
raise
def _handle_emit_call_events(self, response: Any, call_type: LLMCallType):

View File

@@ -0,0 +1 @@
"""LLM implementations for crewAI."""

View File

@@ -0,0 +1 @@
"""Third-party LLM implementations for crewAI."""

View File

@@ -0,0 +1 @@
"""Memory storage implementations for crewAI."""

View File

@@ -88,7 +88,9 @@ class Mem0Storage(Storage):
}
if params:
self.memory.add(value, **params | {"output_format": "v1.1"})
if isinstance(self.memory, MemoryClient):
params["output_format"] = "v1.1"
self.memory.add(value, **params)
def search(
self,
@@ -96,7 +98,7 @@ class Mem0Storage(Storage):
limit: int = 3,
score_threshold: float = 0.35,
) -> List[Any]:
params = {"query": query, "limit": limit}
params = {"query": query, "limit": limit, "output_format": "v1.1"}
if user_id := self._get_user_id():
params["user_id"] = user_id
@@ -116,8 +118,11 @@ class Mem0Storage(Storage):
# Discard the filters for now since we create the filters
# automatically when the crew is created.
if isinstance(self.memory, Memory):
del params["metadata"], params["output_format"]
results = self.memory.search(**params)
return [r for r in results if r["score"] >= score_threshold]
return [r for r in results["results"] if r["score"] >= score_threshold]
def _get_user_id(self) -> str:
return self._get_config().get("user_id", "")

View File

@@ -8,8 +8,6 @@ from dotenv import load_dotenv
load_dotenv()
logging.basicConfig(level=logging.WARNING)
T = TypeVar("T", bound=type)
"""Base decorator for creating crew classes with configuration and function management."""

View File

@@ -0,0 +1 @@
"""Agent tools for crewAI."""

View File

@@ -0,0 +1 @@
"""Evaluators for crewAI."""

View File

@@ -0,0 +1 @@
"""Event utilities for crewAI."""

View File

@@ -0,0 +1 @@
"""Exceptions for crewAI."""

View File

@@ -54,10 +54,12 @@ class Prompts(BaseModel):
response_template=None,
) -> str:
"""Constructs a prompt string from specified components."""
if not system_template and not prompt_template:
if not system_template or not prompt_template:
# If any of the required templates are missing, fall back to the default format
prompt_parts = [self.i18n.slice(component) for component in components]
prompt = "".join(prompt_parts)
else:
# All templates are provided, use them
prompt_parts = [
self.i18n.slice(component)
for component in components
@@ -67,8 +69,12 @@ class Prompts(BaseModel):
prompt = prompt_template.replace(
"{{ .Prompt }}", "".join(self.i18n.slice("task"))
)
response = response_template.split("{{ .Response }}")[0]
prompt = f"{system}\n{prompt}\n{response}"
# Handle missing response_template
if response_template:
response = response_template.split("{{ .Response }}")[0]
prompt = f"{system}\n{prompt}\n{response}"
else:
prompt = f"{system}\n{prompt}"
prompt = (
prompt.replace("{goal}", self.agent.goal)

View File

@@ -10,6 +10,8 @@ from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
@@ -70,9 +72,54 @@ def test_agent_creation():
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert agent.tools == []
def test_agent_with_only_system_template():
"""Test that an agent with only system_template works without errors."""
agent = Agent(
role="Test Role",
goal="Test Goal",
backstory="Test Backstory",
allow_delegation=False,
system_template="You are a test agent...",
# prompt_template is intentionally missing
)
assert agent.role == "Test Role"
assert agent.goal == "Test Goal"
assert agent.backstory == "Test Backstory"
def test_agent_with_only_prompt_template():
"""Test that an agent with only system_template works without errors."""
agent = Agent(
role="Test Role",
goal="Test Goal",
backstory="Test Backstory",
allow_delegation=False,
prompt_template="You are a test agent...",
# system_template is intentionally missing
)
assert agent.role == "Test Role"
assert agent.goal == "Test Goal"
assert agent.backstory == "Test Backstory"
def test_agent_with_missing_response_template():
"""Test that an agent with system_template and prompt_template but no response_template works without errors."""
agent = Agent(
role="Test Role",
goal="Test Goal",
backstory="Test Backstory",
allow_delegation=False,
system_template="You are a test agent...",
prompt_template="This is a test prompt...",
# response_template is intentionally missing
)
assert agent.role == "Test Role"
assert agent.goal == "Test Goal"
assert agent.backstory == "Test Backstory"
def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert agent.llm.model == "gpt-4o-mini"
@@ -259,7 +306,9 @@ def test_cache_hitting():
def handle_tool_end(source, event):
received_events.append(event)
with (patch.object(CacheHandler, "read") as read,):
with (
patch.object(CacheHandler, "read") as read,
):
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
@@ -1147,6 +1196,102 @@ def test_agent_use_trained_data(crew_training_handler):
)
@patch("crewai.agent.CrewTrainingHandler")
def test_agent_use_custom_trained_data_file(crew_training_handler):
task_prompt = "What is 1 + 1?"
custom_file = "custom_trained_data.pkl"
agent = Agent(
role="researcher",
goal="test goal",
backstory="test backstory",
verbose=True,
trained_data_file=custom_file
)
crew_training_handler().load.return_value = {
agent.role: {
"suggestions": [
"The result of the math operation must be right.",
"Result must be better than 1.",
]
}
}
result = agent._use_trained_data(task_prompt=task_prompt)
assert (
result == "What is 1 + 1?\n\nYou MUST follow these instructions: \n"
" - The result of the math operation must be right.\n - Result must be better than 1."
)
crew_training_handler.assert_has_calls(
[mock.call(), mock.call(custom_file), mock.call().load()]
)
@patch("crewai.agent.CrewTrainingHandler")
def test_agent_with_none_trained_data_file(crew_training_handler):
task_prompt = "What is 1 + 1?"
agent = Agent(
role="researcher",
goal="test goal",
backstory="test backstory",
verbose=True,
trained_data_file=None
)
result = agent._use_trained_data(task_prompt=task_prompt)
assert result == task_prompt
crew_training_handler.assert_not_called()
@patch("crewai.agent.CrewTrainingHandler")
def test_agent_with_missing_role_in_trained_data(crew_training_handler):
task_prompt = "What is 1 + 1?"
agent = Agent(
role="researcher",
goal="test goal",
backstory="test backstory",
verbose=True,
trained_data_file="trained_agents_data.pkl"
)
crew_training_handler().load.return_value = {
"other_role": {
"suggestions": ["This should not be used."]
}
}
result = agent._use_trained_data(task_prompt=task_prompt)
assert result == task_prompt
crew_training_handler.assert_has_calls(
[mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]
)
@patch("crewai.agent.CrewTrainingHandler")
def test_agent_with_missing_suggestions_in_trained_data(crew_training_handler):
task_prompt = "What is 1 + 1?"
agent = Agent(
role="researcher",
goal="test goal",
backstory="test backstory",
verbose=True,
trained_data_file="trained_agents_data.pkl"
)
crew_training_handler().load.return_value = {
"researcher": {
"other_key": ["This should not be used."]
}
}
result = agent._use_trained_data(task_prompt=task_prompt)
assert result == task_prompt
crew_training_handler.assert_has_calls(
[mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]
)
def test_agent_max_retry_limit():
agent = Agent(
role="test role",
@@ -1611,6 +1756,78 @@ def test_agent_with_knowledge_sources():
assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.query.return_value = [{"content": content}]
with patch.object(Knowledge, "query") as mock_knowledge_query:
agent = Agent(
role="Information Agent",
goal="Provide information based on knowledge sources",
backstory="You have access to specific knowledge sources.",
llm=LLM(model="gpt-4o-mini"),
knowledge_sources=[string_source],
knowledge_config=knowledge_config,
)
task = Task(
description="What is Brandon's favorite color?",
expected_output="Brandon's favorite color.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()
assert agent.knowledge is not None
mock_knowledge_query.assert_called_once_with(
[task.prompt()],
**knowledge_config.model_dump(),
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
knowledge_config = KnowledgeConfig()
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.query.return_value = [{"content": content}]
with patch.object(Knowledge, "query") as mock_knowledge_query:
string_source = StringKnowledgeSource(content=content)
knowledge_config = KnowledgeConfig()
agent = Agent(
role="Information Agent",
goal="Provide information based on knowledge sources",
backstory="You have access to specific knowledge sources.",
llm=LLM(model="gpt-4o-mini"),
knowledge_sources=[string_source],
knowledge_config=knowledge_config,
)
task = Task(
description="What is Brandon's favorite color?",
expected_output="Brandon's favorite color.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()
assert agent.knowledge is not None
mock_knowledge_query.assert_called_once_with(
[task.prompt()],
**knowledge_config.model_dump(),
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_extensive_role():
content = "Brandon's favorite color is red and he likes Mexican food."


@@ -0,0 +1 @@
"""Tests for agent adapters."""


@@ -0,0 +1 @@
"""Tests for agent builder."""


@@ -0,0 +1,330 @@
interactions:
- request:
body: '{"input": ["Brandon''s favorite color is red and he likes Mexican food."],
"model": "text-embedding-3-small", "encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '137'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1SaWw+yPtfmz59Pced/yrwR2bV9zhAQ2UkRFHEymYAiOxHZtEDfvN99ovdkNicm
YiNpu1bXdf1W//Nff/7802V1fp/++feff17VOP3z377PHumU/vPvP//9X3/+/Pnzn7/P/29k3mb5
41G9i9/w34/V+5Ev//z7D/9/nvzfQf/+889JZZQeb+UOCJHjaQqtRQfv1mXUaf0OTfTuHjx+CvAU
CWC/KEik3pNeZScAwr5Zzkgne4Gqd6jX2+oDW3BxGx8nMkfrxaq0GM2PNaV5G6QZM0P1joRJl32W
BVuXtTPPo02jZhRX8gjWdj7MgDz2D+wRexhoUHsaTN9P0RfLw5itFrmbCgzCHVFOdx/wQte1qJvK
FxH556YeT0pqoJ0RTNhqPwiskhTe0T7qIpzrwS4arGS24D4uc6y90d4VpMq+wy7hntS8mG297p2B
QNrwZ5pV1p4RZ6vPEHEPDtuWVA0L/w451CylgPUZBhFvXaQWWqXwwKobdNmavrcCyvDk4aRDezby
2c5Hym33obtXgLKlioYGKUfZovrOBtkSu6cOQbHw6POcJoyE+vmMxtegUst+HNh2ZImhpFTNsJ4V
73p1XlWO8NsSscGvH7asThqCqtoe6anKt+6SsI0K91Ef0UtGS5dIB8DD3QV2vtwaybB8lksCj5K7
pTjWccSrU5igG+hfVOuSRl+4sPGhtJ4Qtfr4w4g0EQcaRfCgnrKxXVFD8hnSZ6jTk5PN9ax4JxU5
T/5MsnTwskU6ARNOHnCJYHpsoPbgFfDzsRYaLe9dPbfLoiDTls800h+6vjruXYDSGiGs3fJFXw5Z
2APLfh+ok3y2YDvrfQvLm3/x5fPervlV0QUUF5c33aFAzQRua0JwHeCKb5/FGITQVzsgc1KIg6vA
Mmo+Xg5CiY/80LSf2UqSFwdOKDHoMVyBvk6jq0DzTRTs1uUHLFui9ciAyYGq8fHOhDqzzoCpgkE9
Ta/q9aW/VnRO/IR6zaVjs8DvQ9Q/byvOji+YzUV7ztGHji4+b4t7LZqHuwO9+NRR9aZ0YFFWKQF8
oXu+5D2rgfee2RlmG3/CO+mFXbq0SgJ35cGm+JxKjEnnY6vsjlOI9+aSsPUqdSNMj3FBD3qdMHHA
UgCXpjj46xaeI6GBnAZtrfZJOLMKLKg6VrDCz5jaF6K6/LjWGpwt36FHS430JTwlHtrvYIL3q03d
JUkWDWVaEtKLvglq9sj4HiRDHtFo3yjuyGe2D/nPY8ER4xwmtq/AQtvoptF7rdiAX9jdgRtn7+Bn
tJEZJc2GgD4yNZz3YBjWu3ziAaFrhXenYwImk8wNpOrj/M13t+Y7wz/D3/57Lz0b2K1kHixsVceP
ztUy8VYyH90e/AbvXU+sGZ8aDnKhc6LWu9nVPJp3CYqztfBbYCn6DDf9CG16+VBf9VswK6seI7h7
nqirt5POrDOr4LC/ExoMssgmx6EzlCfv+s2fnS5e3vIM9VeX0YDfaUxs04EHhXb/4KPzJoC1ac2j
jeltac5xgc5gd++B3PYI22Bmw1g9KgFZx+cdGynQMv7ilz7yDuFKkP351LMfTiYsJQCpLhIpWguN
76FsiVd6cLRJX4LU5qEFOUQdt3OBGCfJHX7PD/x7H+9L1xThHJQYF+HiUrWYe6DvogzvTlKvM6OP
EijuT5i01wcF7Pg6QvgK4pZ6vlzo82dzCmBAD4zuU7MfFvypHbRLREZku91ma5+PBfjGjz8e7mbN
sFByKDkcInrs5Iv+oVDhoTppJTYI17LVHD4qvBBo4NB9lmyRDuYMt0bqkQWbJZteYsaBUpZNvFf7
Su+cVQ5gr6YI+7u3A0Sev83KJowLXyzuwGV7N5jR68Zc6iWPoB71O63gwiqEtU9sR3wq2yG0nPZE
ja7fuKuzmQ10PnUctriPysROHCtogQ3D+JPs3O1a1AGULo+SauHJ0XlpX8ZI350yeqCtmjFn4T0w
SHyDz/xqg+2xnU3UZWJP7VURXHaDlgBHg8P00NhvfTy2rgqPRM1pvKIjmPwodNDU8neq2ydTZ2O7
C1F5qFKKX8PFHRZvXGFV81eaBMou4wcOjzLwbk/qpxe1/uyMoEAmCc4Yl4GQrZzstsgq+Qe9D4cZ
LFkbJCg6NjG9Wy8pWzj/3cBY7VSaymuf0eK2jnBXmRE+bZkJlqdp+NAf6gFbh9saPQ/d7Q6siTtg
HG1ubDtLnA8/1fuK/TpfXVbO7gijGR7pQ3e2EVPA2qJ170v+FosWWEPf6iBRgI6dku7Z0ianM2Jl
scWq4R/Z9qkd2l98YU3isD7z7UuBtw4V9Hbef+pZu2wEEE/3DCdFdXDF6j7NMOmftg99uXDFME1T
MCX6SA/OvAGTcm4tlJ1uFoHzsdQZvxHu8BQ+eurtkiXrcuobMLzW2Tf+TCZka+8or+FpYqypDhBU
5eahY3a54ftLBzULDTuF0f3t4X2N7+4QGrsUifHG8We0fdXzbCoOQHjzJlwbKFk/umYDMQsqmunt
0RUDj2ko+/QP6ofSW19xoDVQUHHuw289YJf3MiM+vwQ+f5Xf9Xw3OwIT3eNwcu59sDruWUBjOQ/f
/FPd+V0FDZKlWPrG71Nf1UJvYOyh2R+rUxQxL88D0KnUoeq8sfTV3+cBME6Pid5uPnDXZBOk6Fyf
ZRxe02wQd6LdweOknzHu4zYj42WV0P5QddSbjzv3qydMEMutia3sHgOmk0CDnZ5gjO1HEwm9kUsg
7R47cu6j7bBWwVlDfsy9sNHXls5jXV+R6sYt3hdSyJZQdAo4dvGeWtHJGeYUpzEUkpzhg3UT6l5c
+hVAw4rxJeUfmejpQYAe7+6CdUM4ZGPt2wW4RvRBTtK2rCdD9yE8lUGK3fdJjNa++ljoKDsbsnkN
F53pZyeF72U806h3OnfVpwqiOx8+fbEH2F1lv65QNhUrPp6FYGChsUuQ44QG+XyWpu72qOHhmi4W
tqSLrI8lfOXwu/4EWcF+WPwK3OHNF2y8++UP90gq6LuCgdXyqkXisl5N+DS3D38cDjOb2LrpQYap
R+1XvGckOz8lKB7gGVtNyA9MLeYO1uSe/I1/sm/kM4Q0FPAh4LfR3JwXH+7ulYbNfa6yZdqPBZyK
k0b4Q5zrM4hUAscBB/7mUXRgtsU2hJvLcaDhV691b0uDCJ2G2lcS2YqEX/1Q93NMFi0wIuZTJYQf
73yhd1Tm7Lee8Fffn2G0YyR5OTMEiiz81cNzNXkWKOjJwLio9IHXmm6Gh06JCVceLPbyo9BCttTY
9EGHhz562zGGhnn1iOQLA1jj65BCw5NaelNnkTGzLGcknMDNp+c0ASztKgWN+mZPeLtvXbYTdx0q
J5Zh4/OqgNhc5xWl92tMHca9I4L7xEPcqzDopdvfwGLnxxGQ15nhOMWSPj60xoROc8uwNRhwWC9Z
pwFXwzufhMctYKdJilFrhyN1JbQOS+ExCV5KMaA4JUUtkkYkcHekId1J23KYPlc8Q6F+nbHpiAYT
/H0cgqbY7nF2/SzDeidTAcXus6exvKsznvVVBWzcaVitLu+azP2OQ3nQ3vFR35rDSuswAN/zi6bc
0XZnTnvwEOXrlSyXThyW4v7wIS5eDwJvAj98/VGOXpVWUvVkDBnLrZhAr4MTTQ63NXv1JTThZz8D
ejZOlc7sgFNgs9SCr4xwBMOK3AQw8bDHTnhQ3fmcOxVQRVvyW+cdZrNw23RQTsuQXvdvEnXCTezg
V/9jXyyFYU7eUwC+649x0A/Z2F6bEH52UovvrviKmGQ0AZQjJPn0W8+WROs0eNPGJ/7uV0ZO4baH
7MnLOEorLZtruvGgbGw0uie3sl5W89Er6fFc0KvCDpHYEHuGj6zoaWC/FzDSOMqha60ZvcX3ldEw
TRN48Z83ImSSP7ztfeijDzftyLwXm2hpa0GFtVFb1Lrtp3q2XC+AN5+38a1Gu2HN1spCfpC5fu1F
Klubq05A9zIdvFPfE2P35xIgpQptvAOew8b+Up1RoEUJWVq31L/+xwKdOjk+SIcxG7/+BT6FmMM7
3f/UyzbaqlAe8RG7toOG1RtTC87BpyXKc5gzej51K5DTOiTrz3+MrtlC7Xzf0MygOViFneEgTWMG
EU/PdqDQxyr4xruvnP2rK1YCSGDPVkbk1dOjNS83IXynuUV2TjYPrBJYgooP4b/1XAbjwB0IspIX
wvjrJxfjcW+g/RjI3/zeEsG1ABUqTNAeopppx5CgQx2bRA53Z33xo9SCX/9CQ6fh9NkQtg5U+qnx
pcVadfbYvVV4lOwtdV3dHOayuvLwr5442DAah3uoAilsYuqm2Y0xv0g4aDTWilWeXob+5yerbRHS
3OMXtvRjl8DoNu588P0/oVlOZyTprMWHT5jUc1xsW8QB80z3NYb6Kk2tBQzz4vmgrZuaeGNooe/+
4Hs7hvo69zsICkuT8eGCz7qwSS0Ic1F442P08CIWJ0n+mx/OT3iumXB8KNAk4Zna25HLuqYJKyV/
uRXWb29uIHxnqYg7jDmOv/p5xYHTgs/HWbB52PYubV2nAr1xCjEOoxIsFSYQds1B9cEwCvqH65Cn
tLkAqblbeMZW2ZxBe3R3hKvzVV/VxvLh6SjVON9fpYg8thcffuOF7le+yJazUUL4He+DMDjWq9AK
Ofz6bep/kBmtgfoJwM+P//QSAcPowcpUF2oNZ6zz2+nSwmePI+p847G/eKIJvzzG59zsOKzFsYOQ
K7dbrOW2Fq3skRH48rueujh/RYufcwJMQ2nxV0n9AJ7TLgI8vh8G9oZr/eUxZwOs9v74t77MQWMb
cFrhAf/4AlnDdwJrkifY9rpZnwu1smBwMix6KRxVF+JFVmFgCasv3MpXtBYPJ4Bf/uR/84P1nlZ7
sDrZIba//n972BYB7ByXYrtGZT13ecYB4zNp1N33VT3xaOBgkjCf2l//skRPysHPTmmpcxzLaN4e
hgA+5DjA9qrE7rLDyAdAgwrWpdsH9N94Adc0EPGVxRMYXwKRBN1PMVXdwIrGSpzv6PIIbGorh4e+
3MP1DrlPevOVIi7ZOp24EH6su4rdp+nqc7o1eoBs74AzdXPK1s/q3OGzmAm9G45c08tbXuGrkUey
kblRX81LoMIg2kRkMa4gm8N+nuFq+hrWw/RabzNPWxEHjDPeqWKRfeshgXypxv6WB9eMNRFvIWJ0
2pff2GB94EpBT7uoaGRvJDAE6NSDn5/An6TUZ/JZUmgZqkj9n59Mt0YHvTeqMW5QFTFOnnv0KE3g
S9JFdsku6EOIL3dINet4ZswwPsbPj1Kz6gX2V/898tsWm5F20lnwVjsISTQS5uAh+/kLkBi+hfeL
a4P1Jdkqsh8f8ounaO2yJYeeax+wh5ZTti2ESoDILDVqA61gLdjLCrw/1AjnXaNFQmqqFpJkofzq
rXMtPLJDAwf7qpNNGJVs1Squh0shSr5eTWRgnZYoP/5Hf/V9eegnBzk93JJlSSx3+3EH86/+TJxs
0sePw2k/v4F97TW7rDrZBuTX8UT98HgBf+P9wlcXapy2a0Q9PQhhnM0FtnxhYKstqykiUS8RepR0
ff6ej+jnv9WrEGWdwB9D+OODNnu9o+X1MM8wf9kVVu3rJluaUuNQuz1wVP2spfv14xU0p8ih2lfv
j/1pGuH7cYmw7WIXTOZ+10FWVlsif/VmZa3JqHz1IE6Q2mdTzXcK+PmPmzpfmdjbaovW07mmh1sf
slwB+wKi+pLjQ35p3JHgYYXvQr3R5+x8shliwwBBWkzU6U6gZkpshYB1Jv7LE8lbEO+wjo869SWv
05dVkRQYO+aLajei12xrEg+ixEM07tddLcZJkENjy084v6V3MJfVk4dKTxu6S/U4E19GKoDlnYXY
GODodibRq59/95tOWQZi9887hMcW+BsBnrKuv1Qxep42nE8bpGX0OSUdsvqT9NX7N3196dMK3jGT
8Y8v0vvF4+DcKcDnC/rK2NVYC9gmUYXNuW3qsTnLPuT4QvnL36gZWjkgVnHF8VDw0ZJ06Rn89OXh
y5/nYotm8KwH2R+Ss66z80hWeLZvnb9aj0O09pJigVbYW2TzfG/qL//t4fvVfDCWudFlfc8F4Fuv
/RmUj+FvfP/48zGTXH3eo0aA3SFofHFNunruPlao6PcbwCY7qpFgrQGBJ0f18cXakujzOy/uZPTp
MVVeTOQbKf/xJh8Yu1GfnFtC/uphR65EMCpVpaLwg2O6D8I3G7n7cIfh5xhT9cu/tk2pwb/n704o
5ZrlUOb++m9POIvscxpzD/C6N/scM69s9kdlhPFbcrGfjL07J+uphfkwQiIAS3HZrAQxOoXPHtuq
ttdZ6ZxS2NCtgs3XfaOz1H+EEE/jiUZZsNX71wOkcHFaAWP+1X39b5LA5elCbHCtNyyf10xgfUQT
tsL2FgkuYHf441Xu+3TNVuUSeSDMjS3+zg9M9ypa4Tk71PjohoW7dNtUhd4hWKm1nwFgr7zI4U9f
eN/6Roqu5cBV0yvy+vLHJXx/PAhvpMV6L6VsNHZlBcUYOfiIPM9d3pakKi+xWagfd+eBedIgKXUV
E+zue60WY1LeUX1fH4Trmiqa72OmAtDGM7WGjZqtvixX8DQQ++cHojZLzhWyLmSlh52oAvLVI1Ai
RPXnznZqwQTOGZJBOn737wAYy9UZXomI/K6Sg4wqh2sMPvsV+Eyyxay5pHsJSmEb00MPqD5yZnqH
h13n+gVYAp0gq+GQ8nzr2KrkOXsqr7KFocZdsEfXSm+8/XGEsekH2NIClQnNoUrhV/9hz8nL7Dt/
Dfn87OIo0P1sCuaUh6Of8Pi51FM04scpgYdOinE4LZa7ts1gQOsyrth5C3B4r9E5h29gyT48qaK+
wh0KfvoIm6P7AsupWCowjKNMjU451bM1cw388lYfda4W8T//8tO3qL40NZ3C4QzltkMkXa5DvV7e
xgjD7WVHrbCVo/Vz5Xl4dYMdPnz546/fBNVrfiLsNDZsLTTY/+XBzlu418365DyABK32pQDPOuOC
HUGPdRWwffOBvn75E7D3fuRvP2TOft+V3/ltPOOdvrinZw7H10elKh7e+qwr8vyXJ+jLuxx+fAea
B9X20bc/Jy7r0wT1W+KoSUI4rHC3DWBQrTr2PmQEM51LE3KPgFDHPmn1ll7VCpW9EhDu64f7Je16
CPJDSv/y09/+QXIa8U14V+53fh3gdq5B/Uxn2boeNPJ3v3+8aL7HjgbsVvH9zVqGNVPNWkWFC3yM
maeDWZ3SBB57y8YPN1RdsT4ad3iuY5kweZ3YlHnajMjj8PDrxlwZKVgWKN/8JLydsmGcH64FvvyG
2l7yqIXJPDXw16+5uJ445C03E7Sl1egPYVHojNJGQy3sK7LVYkVfvL0yg/FWX+h+cT9gDo6z+ePZ
1PsQD/A7fbgrWtH09FLWjT7LmRKD24frCT9fB33clpICnVug4PjC37J5rG0Cw3MXUHVBU0buZKqg
9pAbIn/95dZpUAKNIvz2x6gwEJl/8Art04akzwyB1VkOFij8JvKRKZbRXDMmwFLv9vj2kMdo2Vyu
IXwKZw5rEoEDO5eage7lOyVojsmwGn53/ulzwmXRC3Q/fbbi7o3d8fqu57MUWCiWGxMb57cyrPHE
J3Cjdi+q7kUjEz+N3/x4Az7qzjZjiHox9Iz3RL/nq1v20tGAuXUx8SGWnJpwsttA8VE1VK0uh0E0
9WsFke0fiJArn+jbLwgB7qwdVvFwcLfpLr//rV93NfV0QZ96CF3zEGMHZCoQxtkq4MtZL1g7GLo7
f/UUcJos80EkGy7jnrWKmizkCbt4G7AU94uHCpBSAvNyD2axGTiYB82dgAFag6iatQY3jZZh/Cgs
9u0H8gicu+2X56g1PxmbHF6r4ogvQjOBbhaDFZqmusEnutHB+uuH3o2Vo4ar9+58wKqCEsOzsMtX
jM03q+pQrfYMHxVniZaW0zuknfPNr75m7CxGBrSLM4+vn2pwl3Qrj8BiRYKdahrc6ZhgCwqaUmNL
/zRg+PYf//qLxxBw0fzrXzqiUdCQShyjXz+tpM6QUe/L9+kkXToI1nuGL1qc6uu9VgVk1G6HHRO/
f/mz/vgNdpX247L+eWrR1WogVpOyAV9+3EKBozw2/L0BmKy+O0iiTsJJHL0yui0YD6XLsyTAdh71
Mj9mH/7zuxXwX//68+d//G4YtN0jf30vBkz5Mv3H/7kq8B/if4xt+nr9vYZAxrTI//n3/76B8M9n
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
headers:
CF-RAY:
- 931fcf607c16eb34-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 17 Apr 2025 23:47:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=CncSMPCav.9EJL3emmM0sTqugx5GN6_Oy8JPFBssVho-1744933673-1.0.1.1-Q1XMvHbQdrfEWkkCYeeNHwFdZ1NpjAGJ_0jOUYIk_APelFe7nCanjW_xlOj12b3JQql9.iWQDiHvCeYJDTWkdxnNiMQOEiFMYHX5YZXUuJs;
path=/; expires=Fri, 18-Apr-25 00:17:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=unfPTYCpF5COtm5PuZDuaJqlhefP0iibfjsXHc9lKq0-1744933673515-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '75'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-8687b6cbdb-4qpmr
x-envoy-upstream-service-time:
- '46'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999986'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_b8c884a7fe2bd4732903ecbdc632576d
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
You have access to specific knowledge sources.\nYour personal goal is: Provide
information based on knowledge sources\nTo give my best complete final answer
to the task respond using the exact following format:\n\nThought: I now can
give a great answer\nFinal Answer: Your final answer must be the great and the
most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '926'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jJNNi9swEIbv+RWDLr0ki/NBvm5NYUsplFK29NAuZiKNnWlkjVaSk02X
/PdiJxtn2y30YrCeecfvvCM/9QAUG7UEpTeYdOXtYPXp7vbLw7evm4ePmObvt/jrsVgVn9/5dbhj
1W8Usv5JOj2rbrRU3lJicSesA2GiputwNpksxuPpbNyCSgzZRlb6NJjIoGLHg1E2mgyy2WA4P6s3
wpqiWsL3HgDAU/tsfDpDj2oJWf/5pKIYsSS1vBQBqCC2OVEYI8eELql+B7W4RK61/gGc7EGjg5J3
BAhlYxvQxT0FgB/ulh1aeNu+L2EV0BlxbyIUuJPAiUCLlQAcwUkCX68ta3sAI7quyCUywA72bMge
AHfIFteWYOtkb8mUBFHqoCneXPsLVNQRm4xcbe0VQOckYZNxm8z9mRwvWVgpfZB1/EOqCnYcN3kg
jOKauWMSr1p67AHct5nXL2JUPkjlU55kS+3nhtP5qZ/qVt3R0fQMkyS0V6rFpP9Kv9xQQrbxamtK
o96Q6aTdirE2LFegdzX1325e632anF35P+07oDX5RCb3gQzrlxN3ZYGaP+FfZZeUW8MqUtixpjwx
hWYThgqs7el+qniIiaq8YFdS8IFPl7TweTZejOajUbbIVO/Y+w0AAP//AwA4a1/QsgMAAA==
headers:
CF-RAY:
- 931fcf649bdbed40-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 17 Apr 2025 23:47:54 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=8vySwO0xqpm0u93_1_rXQwTeEIWa2ei3_CD5sAdoo3o-1744933674-1.0.1.1-iqZDpH5poOUp4Rcnhfrb0N2Z0c2662RBiPEcx_gefNW.m3tBA3qyFa8tmFv7PitH8u9vyYK7jxUwy4lPiSi830QWNbTMgCMTbrJ7iaUV7hY;
path=/; expires=Fri, 18-Apr-25 00:17:54 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=IXAyT8eWpFERM53ngcYNmaqhocfGbOHWSoe7SFNdoGI-1744933674288-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '489'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999801'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_151f9d0b786f2022f249ee20ea108b43
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,330 @@
interactions:
- request:
body: '{"input": ["Brandon''s favorite color is red and he likes Mexican food."],
"model": "text-embedding-3-small", "encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '137'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1SaWw+yPtfmz59Pced/yrwR2bV9zhAQ2UkRFHEymYAiOxHZtEDfvN99ovdkNicm
YiNpu1bXdf1W//Nff/7802V1fp/++feff17VOP3z377PHumU/vPvP//9X3/+/Pnzn7/P/29k3mb5
41G9i9/w34/V+5Ev//z7D/9/nvzfQf/+889JZZQeb+UOCJHjaQqtRQfv1mXUaf0OTfTuHjx+CvAU
CWC/KEik3pNeZScAwr5Zzkgne4Gqd6jX2+oDW3BxGx8nMkfrxaq0GM2PNaV5G6QZM0P1joRJl32W
BVuXtTPPo02jZhRX8gjWdj7MgDz2D+wRexhoUHsaTN9P0RfLw5itFrmbCgzCHVFOdx/wQte1qJvK
FxH556YeT0pqoJ0RTNhqPwiskhTe0T7qIpzrwS4arGS24D4uc6y90d4VpMq+wy7hntS8mG297p2B
QNrwZ5pV1p4RZ6vPEHEPDtuWVA0L/w451CylgPUZBhFvXaQWWqXwwKobdNmavrcCyvDk4aRDezby
2c5Hym33obtXgLKlioYGKUfZovrOBtkSu6cOQbHw6POcJoyE+vmMxtegUst+HNh2ZImhpFTNsJ4V
73p1XlWO8NsSscGvH7asThqCqtoe6anKt+6SsI0K91Ef0UtGS5dIB8DD3QV2vtwaybB8lksCj5K7
pTjWccSrU5igG+hfVOuSRl+4sPGhtJ4Qtfr4w4g0EQcaRfCgnrKxXVFD8hnSZ6jTk5PN9ax4JxU5
T/5MsnTwskU6ARNOHnCJYHpsoPbgFfDzsRYaLe9dPbfLoiDTls800h+6vjruXYDSGiGs3fJFXw5Z
2APLfh+ok3y2YDvrfQvLm3/x5fPervlV0QUUF5c33aFAzQRua0JwHeCKb5/FGITQVzsgc1KIg6vA
Mmo+Xg5CiY/80LSf2UqSFwdOKDHoMVyBvk6jq0DzTRTs1uUHLFui9ciAyYGq8fHOhDqzzoCpgkE9
Ta/q9aW/VnRO/IR6zaVjs8DvQ9Q/byvOji+YzUV7ztGHji4+b4t7LZqHuwO9+NRR9aZ0YFFWKQF8
oXu+5D2rgfee2RlmG3/CO+mFXbq0SgJ35cGm+JxKjEnnY6vsjlOI9+aSsPUqdSNMj3FBD3qdMHHA
UgCXpjj46xaeI6GBnAZtrfZJOLMKLKg6VrDCz5jaF6K6/LjWGpwt36FHS430JTwlHtrvYIL3q03d
JUkWDWVaEtKLvglq9sj4HiRDHtFo3yjuyGe2D/nPY8ER4xwmtq/AQtvoptF7rdiAX9jdgRtn7+Bn
tJEZJc2GgD4yNZz3YBjWu3ziAaFrhXenYwImk8wNpOrj/M13t+Y7wz/D3/57Lz0b2K1kHixsVceP
ztUy8VYyH90e/AbvXU+sGZ8aDnKhc6LWu9nVPJp3CYqztfBbYCn6DDf9CG16+VBf9VswK6seI7h7
nqirt5POrDOr4LC/ExoMssgmx6EzlCfv+s2fnS5e3vIM9VeX0YDfaUxs04EHhXb/4KPzJoC1ac2j
jeltac5xgc5gd++B3PYI22Bmw1g9KgFZx+cdGynQMv7ilz7yDuFKkP351LMfTiYsJQCpLhIpWguN
76FsiVd6cLRJX4LU5qEFOUQdt3OBGCfJHX7PD/x7H+9L1xThHJQYF+HiUrWYe6DvogzvTlKvM6OP
EijuT5i01wcF7Pg6QvgK4pZ6vlzo82dzCmBAD4zuU7MfFvypHbRLREZku91ma5+PBfjGjz8e7mbN
sFByKDkcInrs5Iv+oVDhoTppJTYI17LVHD4qvBBo4NB9lmyRDuYMt0bqkQWbJZteYsaBUpZNvFf7
Su+cVQ5gr6YI+7u3A0Sev83KJowLXyzuwGV7N5jR68Zc6iWPoB71O63gwiqEtU9sR3wq2yG0nPZE
ja7fuKuzmQ10PnUctriPysROHCtogQ3D+JPs3O1a1AGULo+SauHJ0XlpX8ZI350yeqCtmjFn4T0w
SHyDz/xqg+2xnU3UZWJP7VURXHaDlgBHg8P00NhvfTy2rgqPRM1pvKIjmPwodNDU8neq2ydTZ2O7
C1F5qFKKX8PFHRZvXGFV81eaBMou4wcOjzLwbk/qpxe1/uyMoEAmCc4Yl4GQrZzstsgq+Qe9D4cZ
LFkbJCg6NjG9Wy8pWzj/3cBY7VSaymuf0eK2jnBXmRE+bZkJlqdp+NAf6gFbh9saPQ/d7Q6siTtg
HG1ubDtLnA8/1fuK/TpfXVbO7gijGR7pQ3e2EVPA2qJ170v+FosWWEPf6iBRgI6dku7Z0ianM2Jl
scWq4R/Z9qkd2l98YU3isD7z7UuBtw4V9Hbef+pZu2wEEE/3DCdFdXDF6j7NMOmftg99uXDFME1T
MCX6SA/OvAGTcm4tlJ1uFoHzsdQZvxHu8BQ+eurtkiXrcuobMLzW2Tf+TCZka+8or+FpYqypDhBU
5eahY3a54ftLBzULDTuF0f3t4X2N7+4QGrsUifHG8We0fdXzbCoOQHjzJlwbKFk/umYDMQsqmunt
0RUDj2ko+/QP6ofSW19xoDVQUHHuw289YJf3MiM+vwQ+f5Xf9Xw3OwIT3eNwcu59sDruWUBjOQ/f
/FPd+V0FDZKlWPrG71Nf1UJvYOyh2R+rUxQxL88D0KnUoeq8sfTV3+cBME6Pid5uPnDXZBOk6Fyf
ZRxe02wQd6LdweOknzHu4zYj42WV0P5QddSbjzv3qydMEMutia3sHgOmk0CDnZ5gjO1HEwm9kUsg
7R47cu6j7bBWwVlDfsy9sNHXls5jXV+R6sYt3hdSyJZQdAo4dvGeWtHJGeYUpzEUkpzhg3UT6l5c
+hVAw4rxJeUfmejpQYAe7+6CdUM4ZGPt2wW4RvRBTtK2rCdD9yE8lUGK3fdJjNa++ljoKDsbsnkN
F53pZyeF72U806h3OnfVpwqiOx8+fbEH2F1lv65QNhUrPp6FYGChsUuQ44QG+XyWpu72qOHhmi4W
tqSLrI8lfOXwu/4EWcF+WPwK3OHNF2y8++UP90gq6LuCgdXyqkXisl5N+DS3D38cDjOb2LrpQYap
R+1XvGckOz8lKB7gGVtNyA9MLeYO1uSe/I1/sm/kM4Q0FPAh4LfR3JwXH+7ulYbNfa6yZdqPBZyK
k0b4Q5zrM4hUAscBB/7mUXRgtsU2hJvLcaDhV691b0uDCJ2G2lcS2YqEX/1Q93NMFi0wIuZTJYQf
73yhd1Tm7Lee8Fffn2G0YyR5OTMEiiz81cNzNXkWKOjJwLio9IHXmm6Gh06JCVceLPbyo9BCttTY
9EGHhz562zGGhnn1iOQLA1jj65BCw5NaelNnkTGzLGcknMDNp+c0ASztKgWN+mZPeLtvXbYTdx0q
J5Zh4/OqgNhc5xWl92tMHca9I4L7xEPcqzDopdvfwGLnxxGQ15nhOMWSPj60xoROc8uwNRhwWC9Z
pwFXwzufhMctYKdJilFrhyN1JbQOS+ExCV5KMaA4JUUtkkYkcHekId1J23KYPlc8Q6F+nbHpiAYT
/H0cgqbY7nF2/SzDeidTAcXus6exvKsznvVVBWzcaVitLu+azP2OQ3nQ3vFR35rDSuswAN/zi6bc
0XZnTnvwEOXrlSyXThyW4v7wIS5eDwJvAj98/VGOXpVWUvVkDBnLrZhAr4MTTQ63NXv1JTThZz8D
ejZOlc7sgFNgs9SCr4xwBMOK3AQw8bDHTnhQ3fmcOxVQRVvyW+cdZrNw23RQTsuQXvdvEnXCTezg
V/9jXyyFYU7eUwC+649x0A/Z2F6bEH52UovvrviKmGQ0AZQjJPn0W8+WROs0eNPGJ/7uV0ZO4baH
7MnLOEorLZtruvGgbGw0uie3sl5W89Er6fFc0KvCDpHYEHuGj6zoaWC/FzDSOMqha60ZvcX3ldEw
TRN48Z83ImSSP7ztfeijDzftyLwXm2hpa0GFtVFb1Lrtp3q2XC+AN5+38a1Gu2HN1spCfpC5fu1F
Klubq05A9zIdvFPfE2P35xIgpQptvAOew8b+Up1RoEUJWVq31L/+xwKdOjk+SIcxG7/+BT6FmMM7
3f/UyzbaqlAe8RG7toOG1RtTC87BpyXKc5gzej51K5DTOiTrz3+MrtlC7Xzf0MygOViFneEgTWMG
EU/PdqDQxyr4xruvnP2rK1YCSGDPVkbk1dOjNS83IXynuUV2TjYPrBJYgooP4b/1XAbjwB0IspIX
wvjrJxfjcW+g/RjI3/zeEsG1ABUqTNAeopppx5CgQx2bRA53Z33xo9SCX/9CQ6fh9NkQtg5U+qnx
pcVadfbYvVV4lOwtdV3dHOayuvLwr5442DAah3uoAilsYuqm2Y0xv0g4aDTWilWeXob+5yerbRHS
3OMXtvRjl8DoNu588P0/oVlOZyTprMWHT5jUc1xsW8QB80z3NYb6Kk2tBQzz4vmgrZuaeGNooe/+
4Hs7hvo69zsICkuT8eGCz7qwSS0Ic1F442P08CIWJ0n+mx/OT3iumXB8KNAk4Zna25HLuqYJKyV/
uRXWb29uIHxnqYg7jDmOv/p5xYHTgs/HWbB52PYubV2nAr1xCjEOoxIsFSYQds1B9cEwCvqH65Cn
tLkAqblbeMZW2ZxBe3R3hKvzVV/VxvLh6SjVON9fpYg8thcffuOF7le+yJazUUL4He+DMDjWq9AK
Ofz6bep/kBmtgfoJwM+P//QSAcPowcpUF2oNZ6zz2+nSwmePI+p847G/eKIJvzzG59zsOKzFsYOQ
K7dbrOW2Fq3skRH48rueujh/RYufcwJMQ2nxV0n9AJ7TLgI8vh8G9oZr/eUxZwOs9v74t77MQWMb
cFrhAf/4AlnDdwJrkifY9rpZnwu1smBwMix6KRxVF+JFVmFgCasv3MpXtBYPJ4Bf/uR/84P1nlZ7
sDrZIba//n972BYB7ByXYrtGZT13ecYB4zNp1N33VT3xaOBgkjCf2l//skRPysHPTmmpcxzLaN4e
hgA+5DjA9qrE7rLDyAdAgwrWpdsH9N94Adc0EPGVxRMYXwKRBN1PMVXdwIrGSpzv6PIIbGorh4e+
3MP1DrlPevOVIi7ZOp24EH6su4rdp+nqc7o1eoBs74AzdXPK1s/q3OGzmAm9G45c08tbXuGrkUey
kblRX81LoMIg2kRkMa4gm8N+nuFq+hrWw/RabzNPWxEHjDPeqWKRfeshgXypxv6WB9eMNRFvIWJ0
2pff2GB94EpBT7uoaGRvJDAE6NSDn5/An6TUZ/JZUmgZqkj9n59Mt0YHvTeqMW5QFTFOnnv0KE3g
S9JFdsku6EOIL3dINet4ZswwPsbPj1Kz6gX2V/898tsWm5F20lnwVjsISTQS5uAh+/kLkBi+hfeL
a4P1Jdkqsh8f8ounaO2yJYeeax+wh5ZTti2ESoDILDVqA61gLdjLCrw/1AjnXaNFQmqqFpJkofzq
rXMtPLJDAwf7qpNNGJVs1Squh0shSr5eTWRgnZYoP/5Hf/V9eegnBzk93JJlSSx3+3EH86/+TJxs
0sePw2k/v4F97TW7rDrZBuTX8UT98HgBf+P9wlcXapy2a0Q9PQhhnM0FtnxhYKstqykiUS8RepR0
ff6ej+jnv9WrEGWdwB9D+OODNnu9o+X1MM8wf9kVVu3rJluaUuNQuz1wVP2spfv14xU0p8ih2lfv
j/1pGuH7cYmw7WIXTOZ+10FWVlsif/VmZa3JqHz1IE6Q2mdTzXcK+PmPmzpfmdjbaovW07mmh1sf
slwB+wKi+pLjQ35p3JHgYYXvQr3R5+x8shliwwBBWkzU6U6gZkpshYB1Jv7LE8lbEO+wjo869SWv
05dVkRQYO+aLajei12xrEg+ixEM07tddLcZJkENjy084v6V3MJfVk4dKTxu6S/U4E19GKoDlnYXY
GODodibRq59/95tOWQZi9887hMcW+BsBnrKuv1Qxep42nE8bpGX0OSUdsvqT9NX7N3196dMK3jGT
8Y8v0vvF4+DcKcDnC/rK2NVYC9gmUYXNuW3qsTnLPuT4QvnL36gZWjkgVnHF8VDw0ZJ06Rn89OXh
y5/nYotm8KwH2R+Ss66z80hWeLZvnb9aj0O09pJigVbYW2TzfG/qL//t4fvVfDCWudFlfc8F4Fuv
/RmUj+FvfP/48zGTXH3eo0aA3SFofHFNunruPlao6PcbwCY7qpFgrQGBJ0f18cXakujzOy/uZPTp
MVVeTOQbKf/xJh8Yu1GfnFtC/uphR65EMCpVpaLwg2O6D8I3G7n7cIfh5xhT9cu/tk2pwb/n704o
5ZrlUOb++m9POIvscxpzD/C6N/scM69s9kdlhPFbcrGfjL07J+uphfkwQiIAS3HZrAQxOoXPHtuq
ttdZ6ZxS2NCtgs3XfaOz1H+EEE/jiUZZsNX71wOkcHFaAWP+1X39b5LA5elCbHCtNyyf10xgfUQT
tsL2FgkuYHf441Xu+3TNVuUSeSDMjS3+zg9M9ypa4Tk71PjohoW7dNtUhd4hWKm1nwFgr7zI4U9f
eN/6Roqu5cBV0yvy+vLHJXx/PAhvpMV6L6VsNHZlBcUYOfiIPM9d3pakKi+xWagfd+eBedIgKXUV
E+zue60WY1LeUX1fH4Trmiqa72OmAtDGM7WGjZqtvixX8DQQ++cHojZLzhWyLmSlh52oAvLVI1Ai
RPXnznZqwQTOGZJBOn737wAYy9UZXomI/K6Sg4wqh2sMPvsV+Eyyxay5pHsJSmEb00MPqD5yZnqH
h13n+gVYAp0gq+GQ8nzr2KrkOXsqr7KFocZdsEfXSm+8/XGEsekH2NIClQnNoUrhV/9hz8nL7Dt/
Dfn87OIo0P1sCuaUh6Of8Pi51FM04scpgYdOinE4LZa7ts1gQOsyrth5C3B4r9E5h29gyT48qaK+
wh0KfvoIm6P7AsupWCowjKNMjU451bM1cw388lYfda4W8T//8tO3qL40NZ3C4QzltkMkXa5DvV7e
xgjD7WVHrbCVo/Vz5Xl4dYMdPnz546/fBNVrfiLsNDZsLTTY/+XBzlu418365DyABK32pQDPOuOC
HUGPdRWwffOBvn75E7D3fuRvP2TOft+V3/ltPOOdvrinZw7H10elKh7e+qwr8vyXJ+jLuxx+fAea
B9X20bc/Jy7r0wT1W+KoSUI4rHC3DWBQrTr2PmQEM51LE3KPgFDHPmn1ll7VCpW9EhDu64f7Je16
CPJDSv/y09/+QXIa8U14V+53fh3gdq5B/Uxn2boeNPJ3v3+8aL7HjgbsVvH9zVqGNVPNWkWFC3yM
maeDWZ3SBB57y8YPN1RdsT4ad3iuY5kweZ3YlHnajMjj8PDrxlwZKVgWKN/8JLydsmGcH64FvvyG
2l7yqIXJPDXw16+5uJ445C03E7Sl1egPYVHojNJGQy3sK7LVYkVfvL0yg/FWX+h+cT9gDo6z+ePZ
1PsQD/A7fbgrWtH09FLWjT7LmRKD24frCT9fB33clpICnVug4PjC37J5rG0Cw3MXUHVBU0buZKqg
9pAbIn/95dZpUAKNIvz2x6gwEJl/8Art04akzwyB1VkOFij8JvKRKZbRXDMmwFLv9vj2kMdo2Vyu
IXwKZw5rEoEDO5eage7lOyVojsmwGn53/ulzwmXRC3Q/fbbi7o3d8fqu57MUWCiWGxMb57cyrPHE
J3Cjdi+q7kUjEz+N3/x4Az7qzjZjiHox9Iz3RL/nq1v20tGAuXUx8SGWnJpwsttA8VE1VK0uh0E0
9WsFke0fiJArn+jbLwgB7qwdVvFwcLfpLr//rV93NfV0QZ96CF3zEGMHZCoQxtkq4MtZL1g7GLo7
f/UUcJos80EkGy7jnrWKmizkCbt4G7AU94uHCpBSAvNyD2axGTiYB82dgAFag6iatQY3jZZh/Cgs
9u0H8gicu+2X56g1PxmbHF6r4ogvQjOBbhaDFZqmusEnutHB+uuH3o2Vo4ar9+58wKqCEsOzsMtX
jM03q+pQrfYMHxVniZaW0zuknfPNr75m7CxGBrSLM4+vn2pwl3Qrj8BiRYKdahrc6ZhgCwqaUmNL
/zRg+PYf//qLxxBw0fzrXzqiUdCQShyjXz+tpM6QUe/L9+kkXToI1nuGL1qc6uu9VgVk1G6HHRO/
f/mz/vgNdpX247L+eWrR1WogVpOyAV9+3EKBozw2/L0BmKy+O0iiTsJJHL0yui0YD6XLsyTAdh71
Mj9mH/7zuxXwX//68+d//G4YtN0jf30vBkz5Mv3H/7kq8B/if4xt+nr9vYZAxrTI//n3/76B8M9n
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
headers:
CF-RAY:
- 931fceef786ded38-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 17 Apr 2025 23:47:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=fj4RMXSXRDQjE2CFC6CGC3dVcJ8cl2Cbu8alijwMHA8-1744933655-1.0.1.1-M3c3AI4XQa.0GJoanNACuOm2aEL4xjqHR1grxIP3olFvq3e0eFHwQTvCF20YwR_OLiMJUH87eNUwgziawMccsxjR9OVZyDr5._5Wts6CrqA;
path=/; expires=Fri, 18-Apr-25 00:17:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=MSkpJsQZtdyIGvrl2mIwy0a_We8H6CIrS7etFgRBl2Y-1744933655703-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '140'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-84959bbcd5-rzqvq
x-envoy-upstream-service-time:
- '110'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999986'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_dd3ef61c4765b46ed7db80ddfe261f41
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
You have access to specific knowledge sources.\nYour personal goal is: Provide
information based on knowledge sources\nTo give my best complete final answer
to the task respond using the exact following format:\n\nThought: I now can
give a great answer\nFinal Answer: Your final answer must be the great and the
most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '926'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLBbtswDL37KwhdeokLx0mbOLd2WIAC63YZdthWGIpMO9xkUZDkpEWR
fx/kpLG7dUAvBszHR733yOcEQFAlViDUVgbVWp3efv66LvKPRd7Q/f57vtaf7m6ePjze774sv23E
JDJ48wtVeGFdKm6txkBsjrByKAPGqdPFfF7MZtdXVz3QcoU60hob0jmnLRlK8yyfp9kinS5P7C2T
Qi9W8CMBAHjuv1GnqfBRrCCbvFRa9F42KFbnJgDhWMeKkN6TD9IEMRlAxSag6aXfgeE9KGmgoR2C
hCbKBmn8Hh3AT7MmIzXc9P8ruHXSVGwuPNRyx44CgmLNDsjDRnd4OX7GYd15Ga2aTusRII3hIGNU
vcGHE3I4W9LcWMcb/xdV1GTIb0uH0rOJ8n1gK3r0kAA89NF1r9IQ1nFrQxn4N/bPTa+Xx3li2NgI
LU5g4CD1qL5cTN6YV1YYJGk/Cl8oqbZYDdRhU7KriEdAMnL9r5q3Zh+dk2neM34AlEIbsCqtw4rU
a8dDm8N40P9rO6fcCxYe3Y4UloHQxU1UWMtOH89M+CcfsC1rMg066+h4a7Uts1mRL/M8KzKRHJI/
AAAA//8DALRhJdF5AwAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 931fcef51a67f947-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 17 Apr 2025 23:47:36 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=7agwu5JV1OJvEFNhvfvqdgWf.HoMyIni9D85soRl3WE-1744933656-1.0.1.1-dKUwAZnjjuuiswFKWGsxpwHNBJUpjhYlZvfZpyNQIejxEJrXMCppgPvtQ9wa4SKezLmKqftvn_H.bAx_AEFJD2EWm5V6R_uK8.odneErR6A;
path=/; expires=Fri, 18-Apr-25 00:17:36 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=LdTrzwZYrB6ZyQLY7NdaaHVpDVFvIjYm3arSpNy87wU-1744933656504-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '540'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999802'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_8837be6510731522fd5ac4b75c11d486
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,59 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-001:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/62RTU+EMBCG7/0VTY9kIQUT1vXqx0njRokxUQ8jDNAILaFdoyH8dwssbNGrTdo0
807nnT7TEUpZCjITGRjU7IK+2Ail3XgOmpIGpbHCHLLBBlpzyp1W59xtisGv4RFLSqQpNMJARVVO
b1qQKVKhqeftoRXa84JXyZy3/XJ/25wcW1XhUK5WGVZzej8nsFxIocsHBK3kkPaY3O/ZosJncauK
plXvQ9M+D3gUxiE/tzs6i+JtyHdkth5N2UFDgXdowFKB5e/Mlqgbk6gPlJfqMFLZTi4Ow5W8O8pG
WQArJYw3f4rqK2spKhetQ93+HSphvkes188Jc/iYVU8zH+Jg/N3hP3nt1l7kOJVpUE/YajFNpMDa
zsiPAu7nFejS5zxkpCc/6so6tIECAAA=
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:05 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=1219
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,59 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-lite-001:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/61RTU+EMBC98yuanhdSiMjq1Y+Txo0SY6J7GGGARmhJ2zUawn+3wMIWvdpDM5n3
Zt7Mm84jhGYgcp6DQU0vyavNENKN/4BJYVAYC8wpm2xBmRN3ep0TW4rBr6GIphWSDFpuoCayILcK
RIaEa7IDxXXwJqhT1y/xfnNSU7LGoVUjc6xnej8TaMEF19UjgpZioD2lDzu6oPBZ3smyVfJ9GNhn
AQuTMIpjFl/EZyxKwuTcm6VHUXrQUOI9GrCOwLI3tS2a1qTyA8WVPIyOJJOK498K3h5hI+3yKySM
N3+a6msryWvXVsdxuzvU3HyPlt68pNTxx6xmmv3xHBt/T/hPWtu1lne8ynSoZ1SaTxcpsbE38qOA
+UUNuvJtd/QZC6nXez+t5iqFggIAAA==
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:06 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=1090
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,58 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-thinking-exp-01-21:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/22QQWuEMBCF7/6KkKNsFt1DKb2221vp0koplD0M66jDxkSSWbCI/71Rq+vS5pCE
eW9meF8XCSFPYHLKgdHLB/EVKkJ04z1o1jAaDsJcCsUGHF+90+lW/2BhbIcmmVUoTtAQgxa2EM8O
zAkFeRHHB3Dk43grV5398j9urvuc1TgMq22Oerb3s0EWZMhXbwjemsH2nr0e5KKSybEN5SSaF4yj
5cVDiS/IEJLDkk82ztYNZ/aM5tFexuT306wVp39ltiHkjZLebf4M9U9hJek1vhXZkBA08feIbv+Z
yRUFvlk6UxjfY/TLY0L0gc7TxKLEOtBRu22iCg2+UlyROZMpFbaNSlK1S2XURz/Fy1P+CAIAAA==
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:04 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=764
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,59 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/2WQT0+EMBDF7/0UTY9k2ewaDerVPzfjRokxMR4mMEBjaUk76BrCd7fAslvcHppm
5s2bvl/HOBcZ6FzmQOjELf/wFc678R56RhNq8o255IsNWDppp9MFby8h3A9DIq2QZ9BIAsVNwR8t
6Ay5dDyKdmCli6K1CCb74/tzddpnjcLBrDY5qlnezwJRSC1d9YLgjB5kr+nzThy7Uue49+UNmxeM
1qJ1UOITEvjkcMwnGmvqhlLzhfrOtGPy68kr4LRoJzeHPhmfcjmZrM5c3b3fKVXIL0DrI4KS9Duy
e3hPRYCBFtYzBhbQElSZtqzo3we37IBrIviG1skJVYm1hxdfrK/iQoGr4sbit8SfeHMZbxPBevYH
O2bXiSICAAA=
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:28 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=20971
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,59 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro-exp-03-25:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/12QT2uEMBDF7/kUIUepi2tZaHttu7fSpZVSKD0MOquhmkgygkX87o26cWM9SJj3
5s/7DYxzkYMqZAGEVjzwL1fhfJj/k6YVoSIn+JIrtmDo6l2+IXg7C2E/NYmsQp5DKwlqrs/8aEDl
yKXlUXQCI20U7UTQOa7v75vrPqNrnIY1usDa20dvEGeppK3eEKxWk+09ez2JVZWqwN6VE+YXzKNF
Z6HEFyRwyWHNJ1qjm5Yy/YPqUXdz8rtlVsBpI++T5GIg7WL+03xzMNc+ua2yDgkGcF1IqCX9zvSe
PzMRgKDNWR4EC3gJqnRXVrQ98T5lF2ALww80Vi6wSmwcvjjdHWJ3Yox9Gye3cXoQbGR/TedYqx4C
AAA=
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:30 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=2418
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,59 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemma-3-12b-it:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/2WRTWvDMAyG7/kVwpdBSEvX7jB23QfsMFa2MAZbD2qipGaOFWwFWkr/+5ykaVPq
gGP0SvLrR/sIQGVoc52jkFcP8BMiAPtubzW2QlaCMIRCsEYn59x+7UfnkCK0bYtUuiHIsNaCBriA
F4c2I9Ae4niJTvs4nv7a/nuVGw9oPIOEIoOuJC+QadmBtkNlsAoIpeF1aJgFZ+SgYAfBUQIF+o1m
m0CJXhxbrnZJV5E1RhpHUzUyeTidV8n5aY4Ntb4rzskM6YchQRXaar/5IPRs27TP9H2pTqq2OW1D
eBYNF3StVeOxpDcSDJDxhFLVjqtaUv4j+8hNB/m+7zUayYW8mB914QD0QrqbJVdd/VO4U5vxqEZT
DE9EE+h2Y3r+TtUIg1yYGjB0/1V0BNIz+iLndQ+jpKrCyWJyO19PtKjoEP0DlZtdIF8CAAA=
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:39 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=3835
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,60 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemma-3-1b-it:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/2VRy07DQAy85yusPUZtBSoIxJWHxAFRQYSQKAc3cVqL7DrKuqKlqsRv8Ht8CZuk
aVOxh314xuP1eBMBmBRdxhkqeXMFbyECsGn2GhOn5DQAXSgES6z0wG3XpncPFKVVnWSSBUGKJSsW
IDncVehSAvYQxxOs2MfxCKZu6u719/vHA0Iq1ooDyz6UTqlUDi9doELDr1M1aMbiinXcSQ9gtlTg
VqOGJc855VAztAZWvMInZ1SsoaJU5o6/KANxNDK9X2/39/fBoddKCqobsRLyO/q2I5icHfvFE6EX
V9Oek8eJ2aPsMlqF8EnUFWikzdLjnB5IMbiOe29NWYktNZEPcteybFy/bLV6MzqCxxc7XCXYcASd
nQ/+qfqbUJOL/ux6Yw0tYsG6buZ2+5qYng169KnOhuZ8j3aGtB69UOW5NWNO1uJwPDydDVlNtI3+
AD6XWQdvAgAA
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:32 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=1535
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,60 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemma-3-27b-it:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/2VRXUvDMBR976+45EUo3RDnUHwTnSA4HFpEcHuI7e16aZqU5NZNxv67abtuHTbQ
hHPu5zm7AEAkUqeUSkYn7uDLIwC79t9wRjNq9kQPebCSlk+x3bcbvH0I47ZJEnGOkMiKWCowGTxZ
qRMEchCGC2nJheEYlnqpn/nCQaHNRkNmLJDvSwkoP1kpbeFAUYHAvtiMsgwVxGaDNmqRF1P/WIR5
7bAuI/ApLXxvE0gRYkumrHL0hIMNKtXcxA4y6XIyOoKkJkcau8ykVlxbHDczNUcM1tof36voJIY1
CptNS5Oi6sP3fYDISJPL31A6o5uw9/h1IY4s6RS3Hr4M+gZtaVE7ucY5svS2yKP4orJ+F45NgfrB
1K0tt12tgYln9PX0wLPxFpxR00n0r6p79D1JDc0d+O5XlIr4tzV29hmLgQx8NlQvQ3uvgoMgnUYf
aB11YqyxLOVoMrq6+R4Ri2Af/AEDrXcbkQIAAA==
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:41 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=2447
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,60 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "What is the capital
of France?"}]}], "generationConfig": {"stop_sequences": []}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '131'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemma-3-4b-it:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/2WRzUrDQBDH73mKYY+lLUKLiBcPfoAHsWhQwXqYJtNk6WYn7E6woRQ8+wR68t18
Ah/BbWraFPeQXeb/z3z8ZhUBqARtqlMU8uoUnkMEYNV8NxpbIStBaEMhWKKTvXd7Vp13sAgtNz+p
OCdIsNSCBngOVw5tQqA99HoTdNr3ekOY2qm9lu+3Tw8ImeFZ8CahKDmYs4NQrA9z9Llm24cMvTi2
XNQQ2oakMlI5GsLP18d7k+mRK5NCzRUYvSAQhoXl12CuJdc2g4IdAc64Emg6OFOdzte790t/P69j
Q5thCk7JtPZ1a1BzbbXP7wg9243tPr6dqJ2qbUrLED6K2gJNalV5zOiGBAN53PFVpeOilJgXZM+5
asifbHN19nQgj1pdOFA+kMbH/X9Z/UWoqU13f53VhhHRaKmb3V0+xaqDQQ6aajE090v0B2TL6IGc
11sYGRUFDkaD8WygRUXr6BcxmBLccwIAAA==
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 22 Apr 2025 14:25:35 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=2349
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the capital of France?"}],
"model": "gpt-4.1-mini-2025-04-14", "stop": []}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '125'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jJJPb9swDMXv/hSCznEQe+7S5LgAGzDs0P3paSgMRqJtZrIkSPTQoch3
H2Snsbt1wC4+8MdHv0fxKRNCkpZ7IVUHrHpv8nd3t4eIH77w4ePX+/b+8XP56bTdHE7dgb2Sq6Rw
xxMqflatleu9QSZnJ6wCAmOaWmyrmzflrqrejqB3Gk2StZ7zal3kPVnKy015k2+qvKgu8s6Rwij3
4nsmhBBP4zcZtRof5V5sVs+VHmOEFuX+2iSEDM6kioQYKTJYlqsZKmcZ7ej9W4dCgScGI1wj3gew
CgVFcQeB4nqpCtgMEZJ1OxizAGCtY0jRR78PF3K+OjSu9cEd4x9S2ZCl2NUBITqb3ER2Xo70nAnx
MG5ieBFO+uB6zzW7Hzj+rqimcXJ+gBneXhg7BjOXy3L1yrBaIwOZuFikVKA61LNy3joMmtwCZIvI
f3t5bfYUm2z7P+NnoBR6Rl37gJrUy7xzW8B0nf9qu654NCwjhp+ksGbCkJ5BYwODmU5Gxl+Rsa8b
si0GH2i6m8bX291xuztiVTQyO2e/AQAA//8DAP7WRo9GAwAA
headers:
CF-RAY:
- 93458dcf6d0ef53b-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 22 Apr 2025 13:44:07 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1391'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999989'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_74408ec81f430b6e9795cf5332262b8d
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the capital of France?"}],
"model": "gpt-4.1-nano-2025-04-14", "stop": []}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '125'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jJJPb9swDMXv/hSCznEQuwrq5bgNBbZTh7aHYigMRqJtbbKkSXT/oMh3
L2Snsdt1wC4+8MdHv0fxOWOMa8V3jMsOSPbe5J8vqy/D96K/+PPjSn/Fh5szMewfb/eiar/d8lVS
uP0vlPSqWkvXe4OknZ2wDAiEaWpxLrZn5SchqhH0TqFJstZTLtZFbsG6vNyU23wj8kIc5Z3TEiPf
sZ8ZY4w9j99k1Cp85Du2Wb1WeowRWuS7UxNjPDiTKhxi1JHAEl/NUDpLaEfv1x0yCV4TGOYadhHA
SmQ6sksIOq6XqoDNECFZt4MxCwDWOoIUffR7dySHk0PjWh/cPr6T8kZbHbs6IERnk5tIzvORHjLG
7sZNDG/CcR9c76km9xvH3xViGsfnB5hhdWTkCMxcLsvVB8NqhQTaxMUiuQTZoZqV89ZhUNotQLaI
/LeXj2ZPsbVt/2f8DKRET6hqH1Bp+Tbv3BYwXee/2k4rHg3ziOFeS6xJY0jPoLCBwUwnw+NTJOzr
RtsWgw96upvG14gKq2ajxJZnh+wFAAD//wMATWCJPkYDAAA=
headers:
CF-RAY:
- 93458dd9aebff53b-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 22 Apr 2025 13:44:08 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '134'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999990'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_0ba738662e063fb55ea01793aafdcecc
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the capital of France?"}],
"model": "gpt-4.1", "stop": []}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '109'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSwYrbMBC9+yvEHEMcHMeh3dx2SwttoYTSW1nMRB7b2pUlIY27XZb8e5GdxM5u
C734MG/e83tP85IIAaqCnQDZIsvO6fRu//6Db26fnvTtx8/5t7v95rmT3y1+pc0XDcvIsIcHknxm
raTtnCZW1oyw9IRMUXX9rthu8pui2A5AZyvSkdY4TovVOs2zfJtmRbouTszWKkkBduJnIoQQL8M3
ejQV/YadyJbnSUchYEOwuywJAd7qOAEMQQVGw7CcQGkNkxls/2hJSHSKUQtbi08ejSShglgs9uhV
WCxWc6anug8YnZte6xmAxljGmHzwfH9CjheX2jbO20N4RYVaGRXa0hMGa6KjwNbBgB4TIe6HNvqr
gOC87RyXbB9p+N26GOVg6n8GnpoCtox6mudn0pVaWRGj0mHWJkiULVUTc6oe+0rZGZDMMr818zft
Mbcyzf/IT4CU5Jiq0nmqlLwOPK15itf5r7VLx4NhCOR/KUklK/LxHSqqsdfj3UB4DkxdWSvTkHde
jcdTuxJllh2yLMskJMfkDwAAAP//AwC4aq9JRgMAAA==
headers:
CF-RAY:
- 93458dcc1b9df53b-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 22 Apr 2025 13:44:06 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '296'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999989'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7b4e1628a0608e9547e71e7857a29b58
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,643 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
an expert researcher, specialized in technology, software engineering, AI and
startups. You work as a freelancer and are now working on doing research and
analysis for a new customer.\nYour personal goal is: Make the best research
and analysis on content about AI and AI agents. Use the tool provided to you.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: what amazing tool\nTool Arguments: {}\nTool
Description: My tool\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [what amazing tool], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple JSON object, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
a list of 5 interesting ideas to explore for an article, what makes them unique
and interesting.\n\nThis is the expected criteria for your final answer: Bullet
point list of 5 interesting ideas.\nyou MUST return the actual complete content
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
use the tools available and give your best Final Answer, your job depends on
it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1666'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BNNPLDDeGQYegE6neZK6ogJmDOMYs\",\n \"object\":
\"chat.completion\",\n \"created\": 1744911223,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I need to generate interesting ideas
for an article related to AI and AI agents. \\n\\nAction: what amazing tool
\ \\nAction Input: {} \",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 326,\n \"completion_tokens\":
28,\n \"total_tokens\": 354,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_0392822090\"\n}\n"
headers:
CF-RAY:
- 931dab4c79581b2e-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 17 Apr 2025 17:33:44 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '736'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999620'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_0767caa75d1851f392ea34a68763b1bc
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CqzaAQokCiIKDHNlcnZpY2UubmFtZRISChBjcmV3QUktdGVsZW1ldHJ5EoLaAQoSChBjcmV3YWku
dGVsZW1ldHJ5EpYIChAm+VDVVrKTGFLYrGmqyFN/EgiVIKrLvczC6SoMQ3JldyBDcmVhdGVkMAE5
uBnOOYMrNxhBWHDXOYMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEobCg5weXRob25f
dmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2Ix
NDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokMzI4MzZhOTctMGIyYi00MTAwLTgxZDYtYmIwZWJjY2I2
ZDYyShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRj
cmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBj
cmV3X2ZpbmdlcnByaW50EiYKJGQ5YmEwZDE3LTE5YjMtNGQ5Yi04ZTNmLThiMjNhOGM0YzgyM0o7
ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My4yMzg4
MzhKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdk
NDI0MGEyOTRkIiwgImlkIjogImMzMWMwNzRmLTg5Y2YtNGRkNi04ZTY2LWM3NDM1OTc0NmNkMSIs
ICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1h
eF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8t
bWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlv
bj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K+wEK
CmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFi
ODYiLCAiaWQiOiAiYTNiODU1ODAtZTVjNC00YmU2LWI0ZmItMDU1NDU2Y2RkZWJkIiwgImFzeW5j
X2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6
ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRk
IiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASgAQKED9XhldOUhgSlcVbrT/HuRwSCCF7
8YboEaZnKgxUYXNrIENyZWF0ZWQwATlYmeU5gys3GEHwx+c5gys3GEouCghjcmV3X2tleRIiCiA1
ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDMyODM2YTk3LTBi
MmItNDEwMC04MWQ2LWJiMGViY2NiNmQ2MkouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThk
ZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGEzYjg1NTgwLWU1YzQtNGJlNi1iNGZiLTA1
NTQ1NmNkZGViZEo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJGQ5YmEwZDE3LTE5YjMtNGQ5Yi04ZTNm
LThiMjNhOGM0YzgyM0o6ChB0YXNrX2ZpbmdlcnByaW50EiYKJDc0OTEwMDcxLTM1MWQtNDljNy1h
YmZlLTJmMWZhNWUzZTEyYko7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0w
NC0xN1QxNDozMzo0My4yMzg4MDFKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokYzMyNmIyMWUtYWJh
YS00N2ZjLWE0NGQtMzk2NTQ4ZjczZTRiegIYAYUBAAEAABKYCAoQH3AbYnQyty/P5gHiJzrjLxII
PR8P3tNCS74qDENyZXcgQ3JlYXRlZDABOYgCyj6DKzcYQSD71D6DKzcYShsKDmNyZXdhaV92ZXJz
aW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24SCQoHMy4xMS4xMkouCghjcmV3X2tleRIi
CiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDBkY2JhOGY4
LTdhYzQtNDdmMC04ZWIxLWE0ZTJkODFjOGZkYkoeCgxjcmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hp
Y2FsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jl
d19udW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDJiYzYzYWE1LWU5
YzYtNDAxNi1hMTdiLWFmZDM5ZWJlYmEwNko7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQS
HAoaMjAyNS0wNC0xN1QxNDozMzo0My4zMjI3OTlKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5
IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogImYyY2Y4YWRlLTdj
YmYtNDkwMS1hODMwLWE3ZDkxODA4NjI3NCIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6
IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGlu
Z19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/Ijog
ZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6
IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdl
ZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAiaWQiOiAiMWY3MTljNTktOTJjMC00NGU1
LWFmMjUtNzIwYjE1NWE1Njg1IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lu
cHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdl
YjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQAB
AAASgAQKEIVZhLW3PhEOxntJYQn8IYkSCDNELyrmM+eYKgxUYXNrIENyZWF0ZWQwATlYr+0+gys3
GEGIJO4+gys3GEouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2Zh
M0oxCgdjcmV3X2lkEiYKJDBkY2JhOGY4LTdhYzQtNDdmMC04ZWIxLWE0ZTJkODFjOGZkYkouCgh0
YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYK
JDFmNzE5YzU5LTkyYzAtNDRlNS1hZjI1LTcyMGIxNTVhNTY4NUo6ChBjcmV3X2ZpbmdlcnByaW50
EiYKJDJiYzYzYWE1LWU5YzYtNDAxNi1hMTdiLWFmZDM5ZWJlYmEwNko6ChB0YXNrX2ZpbmdlcnBy
aW50EiYKJGJmMDU5YjBiLWFlYmYtNGIzMS04YTc4LTA2ZTlmMjcyZDQ2MEo7Cht0YXNrX2Zpbmdl
cnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My4zMjI3NTVKOwoRYWdlbnRf
ZmluZ2VycHJpbnQSJgokY2IzZWQ2ZGQtNzEzNC00YTc3LThiMjctZWIwZGRkZGZlMjdiegIYAYUB
AAEAABKdAQoQ6S1CnqyOxKwRN/Vq7X81HRIIso7ugOmXnjEqClRvb2wgVXNhZ2UwATnAIk4/gys3
GEE4YlU/gys3GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjExNC4wSigKCXRvb2xfbmFtZRIbChlE
ZWxlZ2F0ZSB3b3JrIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASlggKENaM
zYnzHCL03v+Ihe5ZidsSCIXobzKG03Z6KgxDcmV3IENyZWF0ZWQwATlQ8uM/gys3GEFIfOk/gys3
GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEu
MTJKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jl
d19pZBImCiQ5ZmYzZmU1ZC01YmYyLTRlNzEtODU0ZS05OTAyZGYxYzIxYTRKHAoMY3Jld19wcm9j
ZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rh
c2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQS
JgokOTkwOTQ1YmUtOTczMS00ZTQxLTg5ZDAtYzEzMjljNGQ3NjQ5SjsKG2NyZXdfZmluZ2VycHJp
bnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjM0MTIyM0rNAgoLY3Jld19hZ2Vu
dHMSvQIKugJbeyJrZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQi
OiAiMGM3NDhlNDgtOGNmNC00ZDJhLWFlYWMtZDY5OTUzMDkwZThmIiwgInJvbGUiOiAiU2NvcmVy
IiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDI1LCAibWF4X3JwbSI6IG51bGwsICJm
dW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRp
b25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4
X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr7AQoKY3Jld190YXNrcxLsAQrp
AVt7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NiIsICJpZCI6ICI2NzBj
MjdhOS1kYTc3LTRhNTQtYmYxYS1mM2M0YjVlNTcwNDkiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZh
bHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlNjb3JlciIsICJhZ2Vu
dF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAidG9vbHNfbmFtZXMi
OiBbXX1degIYAYUBAAEAABKABAoQ0PunZB/jswUd8i8ahTz20BIIFtRhrWbDsGIqDFRhc2sgQ3Jl
YXRlZDABOejM8z+DKzcYQZAu9D+DKzcYSi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2Rj
Mzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokOWZmM2ZlNWQtNWJmMi00ZTcxLTg1NGUtOTkw
MmRmMWMyMWE0Si4KCHRhc2tfa2V5EiIKIDI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2
SjEKB3Rhc2tfaWQSJgokNjcwYzI3YTktZGE3Ny00YTU0LWJmMWEtZjNjNGI1ZTU3MDQ5SjoKEGNy
ZXdfZmluZ2VycHJpbnQSJgokOTkwOTQ1YmUtOTczMS00ZTQxLTg5ZDAtYzEzMjljNGQ3NjQ5SjoK
EHRhc2tfZmluZ2VycHJpbnQSJgokMjczZmM3YjYtZGJmYS00MzYyLWEwZTEtNzhhNjJjNDY0OTlh
SjsKG3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjM0
MTE5NUo7ChFhZ2VudF9maW5nZXJwcmludBImCiQwNTc3MWI4Ny04ZGIzLTRhZGYtOWJhZC0yNzcw
ZjgzYTZiYTN6AhgBhQEAAQAAEpgIChA5RacttE9X5amPuTPHLuYDEgiZ/RIthMTU5yoMQ3JldyBD
cmVhdGVkMAE5EC+lQIMrNxhBsJGsQIMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEob
Cg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5
N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokMTc5MzYxNTItNDJmMi00YmY3LWIzOTEt
ZTU0MDU1ZTY2NGU4Sh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVyYXJjaGljYWxKEQoLY3Jld19tZW1v
cnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2Vu
dHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNjI0ZTI1NjEtMDFkOC00YmNkLWJhMjEtZDcx
ZTQ0MTZkNGJiSjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0
OjMzOjQzLjM1Mzg3M0rNAgoLY3Jld19hZ2VudHMSvQIKugJbeyJrZXkiOiAiOTJlN2ViMTkxNjY0
YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiYTEyMTFiNmYtMzFhMy00MzY4LWI5YzItZDNh
NjA1NTZlOTk2IiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRl
ciI6IDI1LCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxt
IjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2Nv
ZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVz
IjogW119XUr7AQoKY3Jld190YXNrcxLsAQrpAVt7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3
MGVkNDA2ZTQ0YWI4NiIsICJpZCI6ICI3ZjJjNGIxMi00MjNhLTRmMTctOTZiMS0zZmEyNmUzMDM3
MmMiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJh
Z2VudF9yb2xlIjogIlNjb3JlciIsICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVk
N2Q0MjQwYTI5NGQiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKABAoQG2i2Mf2AK81I
gFDuyTZ4hxIIGTCH8bEW9REqDFRhc2sgQ3JlYXRlZDABOajZvECDKzcYQbArvUCDKzcYSi4KCGNy
ZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgok
MTc5MzYxNTItNDJmMi00YmY3LWIzOTEtZTU0MDU1ZTY2NGU4Si4KCHRhc2tfa2V5EiIKIDI3ZWYz
OGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2SjEKB3Rhc2tfaWQSJgokN2YyYzRiMTItNDIzYS00
ZjE3LTk2YjEtM2ZhMjZlMzAzNzJjSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNjI0ZTI1NjEtMDFk
OC00YmNkLWJhMjEtZDcxZTQ0MTZkNGJiSjoKEHRhc2tfZmluZ2VycHJpbnQSJgokZjViNmU5YjUt
ZjcxMi00YzM3LTkyYjAtMWFjNzQ2ZTYzYWJjSjsKG3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9h
dBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjM1Mzg0M0o7ChFhZ2VudF9maW5nZXJwcmludBImCiQ1
ZjhkM2YwYS1lODZhLTRiMmUtYWFmMC1jOWMyMDg2N2M1ODR6AhgBhQEAAQAAEpwBChBCoZs2F2Pk
GqD1dlo+B0jIEgh/8w+r7HLXDioKVG9vbCBVc2FnZTABOfhQHUGDKzcYQeChJUGDKzcYShsKDmNy
ZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjBKJwoJdG9vbF9uYW1lEhoKGEFzayBxdWVzdGlvbiB0byBj
b3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEpYIChAtKHG1/6WFVXbuKiUU/tulEggA
5PFku/BBtSoMQ3JldyBDcmVhdGVkMAE5qOuzQYMrNxhBYNO5QYMrNxhKGwoOY3Jld2FpX3ZlcnNp
b24SCQoHMC4xMTQuMEobCg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIK
IDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokNWQzODQyMmQt
NTk2MC00NGQ0LWFmZjctYWM5MDFiMjU1NzM5ShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFs
ShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19u
dW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJGZhNWYxODc1LWRiYTQt
NDc2MS04ZDk4LTliNzlmNDg2ZTNlY0o7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
MjAyNS0wNC0xN1QxNDozMzo0My4zNzE2MzZKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5Ijog
IjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogIjgwZDg1ZDY3LTg3M2Qt
NDFhNi05MzY1LWJkODVjYTc5MGI2MyIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZh
bHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19s
bG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFs
c2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIs
ICJ0b29sc19uYW1lcyI6IFtdfV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4
Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAiaWQiOiAiZWYwNmQ3NTEtZWY2Yy00YjJiLWI2
MTQtMmVhMmU1NGM0MjVlIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5
MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAAS
gAQKEJUYH+zYJdlQxAU/SEz07wwSCHIyR0YIxKoQKgxUYXNrIENyZWF0ZWQwATnYg8NBgys3GEFQ
7cNBgys3GEouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0ox
CgdjcmV3X2lkEiYKJDVkMzg0MjJkLTU5NjAtNDRkNC1hZmY3LWFjOTAxYjI1NTczOUouCgh0YXNr
X2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGVm
MDZkNzUxLWVmNmMtNGIyYi1iNjE0LTJlYTJlNTRjNDI1ZUo6ChBjcmV3X2ZpbmdlcnByaW50EiYK
JGZhNWYxODc1LWRiYTQtNDc2MS04ZDk4LTliNzlmNDg2ZTNlY0o6ChB0YXNrX2ZpbmdlcnByaW50
EiYKJDA1ZTU2ZTIzLWI5YjgtNDIwMy05MWYwLTY2ZmE5MDgzNzYzNUo7Cht0YXNrX2ZpbmdlcnBy
aW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My4zNzE2MDJKOwoRYWdlbnRfZmlu
Z2VycHJpbnQSJgokZGZiMDEzYTItNzg0MC00NDFhLTg1YzMtMzI0OWQ1OGJhNmIzegIYAYUBAAEA
ABKWCAoQ67KSQgBBFIpwJAjqKwKTNxIIPHptHAGKIGYqDENyZXcgQ3JlYXRlZDABOeB7T0KDKzcY
QVhEVUKDKzcYShsKDmNyZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24S
CQoHMy4xMS4xMkouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2Zh
M0oxCgdjcmV3X2lkEiYKJDRmOTE0YWZhLTVlMTAtNDU3Ni1hYjJjLWVkZmNlZWQzYTZiYkocCgxj
cmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1i
ZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKOgoQY3Jld19maW5n
ZXJwcmludBImCiQzODFkOTIwYi1iMGE1LTRiNGUtYTQ0OS1kZjg5OGNjZjVmZDdKOwobY3Jld19m
aW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuMzgxODc1Ss0CCgtj
cmV3X2FnZW50cxK9Agq6Alt7ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0
ZCIsICJpZCI6ICI5ZjVhZTRmOC02MjkwLTQ5NTUtOGI2OC00YmNjOTM5ZjhhMDkiLCAicm9sZSI6
ICJTY29yZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjog
bnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAi
ZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFs
c2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSvsBCgpjcmV3X3Rh
c2tzEuwBCukBW3sia2V5IjogIjI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlk
IjogIjViODI1OWU3LWI2Y2YtNGJlYi1iYTRiLWY3MDg4Y2E4YWM3NiIsICJhc3luY19leGVjdXRp
b24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVy
IiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29s
c19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChCDE114uxBCLFPLICeD1DCNEgiFJ3IambqX4yoM
VGFzayBDcmVhdGVkMAE5+P9iQoMrNxhBcGljQoMrNxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4
MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jld19pZBImCiQ0ZjkxNGFmYS01ZTEwLTQ1NzYt
YWIyYy1lZGZjZWVkM2E2YmJKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQw
NmU0NGFiODZKMQoHdGFza19pZBImCiQ1YjgyNTllNy1iNmNmLTRiZWItYmE0Yi1mNzA4OGNhOGFj
NzZKOgoQY3Jld19maW5nZXJwcmludBImCiQzODFkOTIwYi1iMGE1LTRiNGUtYTQ0OS1kZjg5OGNj
ZjVmZDdKOgoQdGFza19maW5nZXJwcmludBImCiRmZWIzZDVjYy0yMzEwLTRhNDgtOWQ5My1jMzQ5
MTI0MGU3NTlKOwobdGFza19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6
MzM6NDMuMzgxODQ4SjsKEWFnZW50X2ZpbmdlcnByaW50EiYKJDdhYzY3YTkzLTkwNjctNGJjNC1i
NmFmLTcxZWVjZmRjNzEzOHoCGAGFAQABAAASmAgKEGmCIFMw9GayxClAketnXWsSCPvb3mKDZYhn
KgxDcmV3IENyZWF0ZWQwATkA88JGgys3GEGAhMpGgys3GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcw
LjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jld19rZXkSIgogNWU2ZWZm
ZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jld19pZBImCiRmNmE1MGVkMy02ODk5LTQ4
MTItODVkNy1iNDZlYTQyNzUwN2FKHgoMY3Jld19wcm9jZXNzEg4KDGhpZXJhcmNoaWNhbEoRCgtj
cmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVy
X29mX2FnZW50cxICGAFKOgoQY3Jld19maW5nZXJwcmludBImCiQzYzRlM2VmZS01YmQ0LTRkNDgt
YThmNS0yOGY0NDdhNDI0OGRKOwobY3Jld19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUt
MDQtMTdUMTQ6MzM6NDMuNDU2MDg4Ss0CCgtjcmV3X2FnZW50cxK9Agq6Alt7ImtleSI6ICI5MmU3
ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICJmNzMwNTE0Mi1jOTViLTQ0MmQt
YjJiOS1jYTVhN2IzMzZlYjMiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwg
Im1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjog
IiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAi
YWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9v
bHNfbmFtZXMiOiBbXX1dSvsBCgpjcmV3X3Rhc2tzEuwBCukBW3sia2V5IjogIjI3ZWYzOGNjOTlk
YTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlkIjogIjU5MDRiMDg5LTMzNzYtNDZhMy04NWU1LTRk
ZTc0NDQyYTljYyIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBm
YWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRj
OTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChD9
DkCz0iVsHA4a1S3+yN8lEgjsMseCiYSvFioMVGFzayBDcmVhdGVkMAE5+ATcRoMrNxhB0F7cRoMr
NxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jl
d19pZBImCiRmNmE1MGVkMy02ODk5LTQ4MTItODVkNy1iNDZlYTQyNzUwN2FKLgoIdGFza19rZXkS
IgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQ1OTA0YjA4
OS0zMzc2LTQ2YTMtODVlNS00ZGU3NDQ0MmE5Y2NKOgoQY3Jld19maW5nZXJwcmludBImCiQzYzRl
M2VmZS01YmQ0LTRkNDgtYThmNS0yOGY0NDdhNDI0OGRKOgoQdGFza19maW5nZXJwcmludBImCiQ3
MjExMmY3OS1iMWU3LTRkYTctYTg4YS00NjU3NTJiZTIwZDdKOwobdGFza19maW5nZXJwcmludF9j
cmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNDU2MDU5SjsKEWFnZW50X2ZpbmdlcnBy
aW50EiYKJGJiMzQ0NzIxLTE3M2QtNGRmNS1iMWRmLWQ2NjBkZjZlZmVjZnoCGAGFAQABAAASnAEK
EOg/b+TBCd7kQOaEdNwvgZoSCGYJctohLuuaKgpUb29sIFVzYWdlMAE58JA0R4MrNxhBqOM9R4Mr
NxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEonCgl0b29sX25hbWUSGgoYQXNrIHF1ZXN0
aW9uIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASlwoKEMn/aYwi4Jc7AohP
Y7puRXsSCB83OjVnCZfLKgxDcmV3IENyZWF0ZWQwATlw8dVHgys3GEGA9NtHgys3GEobCg5jcmV3
YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jl
d19rZXkSIgogZDQyNjA4MzNhYjBjMjBiYjQ0OTIyYzc5OWFhOTZiNGFKMQoHY3Jld19pZBImCiQx
ZjA0NjU0YS05ODE5LTQ0MTEtOTVlZC1kMmMwMTJlNWU3YjJKHAoMY3Jld19wcm9jZXNzEgwKCnNl
cXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAkob
ChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokYjhhYjU0
YzEtYjFjZS00OGIyLTlkODMtNDVkNzkyMTkxOWQ0SjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRl
ZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjQ3NDIyN0rlAgoLY3Jld19hZ2VudHMS1QIK0gJb
eyJrZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiMjg5YzIz
NzgtNWUxOC00YzJiLWE1NDMtNzVkOTk5YWUyYWQyIiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJv
c2U/IjogdHJ1ZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2Nh
bGxpbmdfbGxtIjogImdwdC0zLjUtdHVyYm8tMDEyNSIsICJsbG0iOiAiZ3B0LTQtMDEyNS1wcmV2
aWV3IiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9u
PyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUrkAwoK
Y3Jld190YXNrcxLVAwrSA1t7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4
NiIsICJpZCI6ICJiNTAxMWUwNi02YTdmLTQzYWItYWQzNy1mNGI4ODBhMmJlYjgiLCAiYXN5bmNf
ZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjog
IlNjb3JlciIsICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQi
LCAidG9vbHNfbmFtZXMiOiBbXX0sIHsia2V5IjogIjYwOWRlZTM5MTA4OGNkMWM4N2I4ZmE2NmFh
NjdhZGJlIiwgImlkIjogIjRiODE5NjQ5LTYyMjQtNGQ3Mi1hZDFkLTM0ODZhYzBkODQwNSIsICJh
c3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3Jv
bGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBh
Mjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChCdiwcYoV5twj//vpjaNfMl
EghUv7y5H7+dXioMVGFzayBDcmVhdGVkMAE5wN3kR4MrNxhBsDPlR4MrNxhKLgoIY3Jld19rZXkS
IgogZDQyNjA4MzNhYjBjMjBiYjQ0OTIyYzc5OWFhOTZiNGFKMQoHY3Jld19pZBImCiQxZjA0NjU0
YS05ODE5LTQ0MTEtOTVlZC1kMmMwMTJlNWU3YjJKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRh
NGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiRiNTAxMWUwNi02YTdmLTQzYWItYWQz
Ny1mNGI4ODBhMmJlYjhKOgoQY3Jld19maW5nZXJwcmludBImCiRiOGFiNTRjMS1iMWNlLTQ4YjIt
OWQ4My00NWQ3OTIxOTE5ZDRKOgoQdGFza19maW5nZXJwcmludBImCiQ3NDcyMTM3Yi0wYzBkLTRi
OTEtYTgwYy01YzZkYWQwZmM3YTBKOwobdGFza19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIw
MjUtMDQtMTdUMTQ6MzM6NDMuNDc0MTc1SjsKEWFnZW50X2ZpbmdlcnByaW50EiYKJDVlMWU1Nzdh
LWY5N2QtNDA2OS04NzNmLTc1ZjYyMDE0ZWJlNnoCGAGFAQABAAASgAQKEA6OsjCwXi+o2MUfKyS+
TmgSCOIWlM7TtxNCKgxUYXNrIENyZWF0ZWQwATmQ915Igys3GEHYaF9Igys3GEouCghjcmV3X2tl
eRIiCiBkNDI2MDgzM2FiMGMyMGJiNDQ5MjJjNzk5YWE5NmI0YUoxCgdjcmV3X2lkEiYKJDFmMDQ2
NTRhLTk4MTktNDQxMS05NWVkLWQyYzAxMmU1ZTdiMkouCgh0YXNrX2tleRIiCiA2MDlkZWUzOTEw
ODhjZDFjODdiOGZhNjZhYTY3YWRiZUoxCgd0YXNrX2lkEiYKJDRiODE5NjQ5LTYyMjQtNGQ3Mi1h
ZDFkLTM0ODZhYzBkODQwNUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJGI4YWI1NGMxLWIxY2UtNDhi
Mi05ZDgzLTQ1ZDc5MjE5MTlkNEo6ChB0YXNrX2ZpbmdlcnByaW50EiYKJGYwZWUxMjY2LWExNzIt
NDhiMi1iMTM2LTczN2I3YWNkYWYwNko7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
MjAyNS0wNC0xN1QxNDozMzo0My40NzQyMDVKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokNWUxZTU3
N2EtZjk3ZC00MDY5LTg3M2YtNzVmNjIwMTRlYmU2egIYAYUBAAEAABL/CQoQOon8z16iVXeCtEnr
visR+hII1ztFvfUBPyAqDENyZXcgQ3JlYXRlZDABOYC2CUmDKzcYQVhjEUmDKzcYShsKDmNyZXdh
aV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24SCQoHMy4xMS4xMkouCghjcmV3
X2tleRIiCiBhOTU0MGNkMGVhYTUzZjY3NTQzN2U5YmQ0ZmE1ZTQ0Y0oxCgdjcmV3X2lkEiYKJDFh
MDA5NjZhLTcwYmQtNDc4OS1hZmQxLTY5NjgzMzZjYjc2NkocCgxjcmV3X3Byb2Nlc3MSDAoKc2Vx
dWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgCShsK
FWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKOgoQY3Jld19maW5nZXJwcmludBImCiRjOGI1ZDUx
My0xNzY0LTQyYWQtOGVlYS0wYWU0MDAzZTRmNTRKOwobY3Jld19maW5nZXJwcmludF9jcmVhdGVk
X2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNDk0ODMySs0CCgtjcmV3X2FnZW50cxK9Agq6Alt7
ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICIwZTg1MzFl
YS05MmM4LTQzMGQtYmE0Yy1jMmIxYzkwZDBlMWQiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9z
ZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2Nh
bGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVk
PyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGlt
aXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSuQDCgpjcmV3X3Rhc2tzEtUDCtIDW3sia2V5Ijog
IjI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlkIjogImI2NWQ1ZGUzLWY5MjAt
NDVhZi1hOTgwLTA2NjBkMDU5YzdiZiIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1h
bl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5
MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJr
ZXkiOiAiYjBkMzRhNmY2MjFhN2IzNTgwZDVkMWY0ZTI2NjViOTIiLCAiaWQiOiAiZjk2MDU5NzMt
YzYyMi00NzRjLWFhZjktYWJiOWMwZWZhYmQ0IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwg
Imh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5
IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119
XXoCGAGFAQABAAASgAQKEGysuSY/RpRwwsmMtZmmkLgSCB3RCKVN5vMGKgxUYXNrIENyZWF0ZWQw
ATmYyRpJgys3GEFwIxtJgys3GEouCghjcmV3X2tleRIiCiBhOTU0MGNkMGVhYTUzZjY3NTQzN2U5
YmQ0ZmE1ZTQ0Y0oxCgdjcmV3X2lkEiYKJDFhMDA5NjZhLTcwYmQtNDc4OS1hZmQxLTY5NjgzMzZj
Yjc2NkouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0
YXNrX2lkEiYKJGI2NWQ1ZGUzLWY5MjAtNDVhZi1hOTgwLTA2NjBkMDU5YzdiZko6ChBjcmV3X2Zp
bmdlcnByaW50EiYKJGM4YjVkNTEzLTE3NjQtNDJhZC04ZWVhLTBhZTQwMDNlNGY1NEo6ChB0YXNr
X2ZpbmdlcnByaW50EiYKJDZjNDhlMTkxLTViMjEtNDBmNi1iNmQwLWRjMWY3ZmM2YWQzN0o7Cht0
YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My40OTQ3NzdK
OwoRYWdlbnRfZmluZ2VycHJpbnQSJgokNTZmYTQ1MTYtM2RiNS00Mjk3LTg3NzUtOTI2MWVhNWI4
OWI2egIYAYUBAAEAABKABAoQJRURFvAOz5/5e2bQNRT4ChIIpSiy2tnBCrsqDFRhc2sgQ3JlYXRl
ZDABOdgreEmDKzcYQSCdeEmDKzcYSi4KCGNyZXdfa2V5EiIKIGE5NTQwY2QwZWFhNTNmNjc1NDM3
ZTliZDRmYTVlNDRjSjEKB2NyZXdfaWQSJgokMWEwMDk2NmEtNzBiZC00Nzg5LWFmZDEtNjk2ODMz
NmNiNzY2Si4KCHRhc2tfa2V5EiIKIGIwZDM0YTZmNjIxYTdiMzU4MGQ1ZDFmNGUyNjY1YjkySjEK
B3Rhc2tfaWQSJgokZjk2MDU5NzMtYzYyMi00NzRjLWFhZjktYWJiOWMwZWZhYmQ0SjoKEGNyZXdf
ZmluZ2VycHJpbnQSJgokYzhiNWQ1MTMtMTc2NC00MmFkLThlZWEtMGFlNDAwM2U0ZjU0SjoKEHRh
c2tfZmluZ2VycHJpbnQSJgokMWI0YzI5NTYtMDZkOC00NWNjLWFmYWMtNmZhZDk0MzdkNTZmSjsK
G3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjQ5NDgw
OEo7ChFhZ2VudF9maW5nZXJwcmludBImCiQ1NmZhNDUxNi0zZGI1LTQyOTctODc3NS05MjYxZWE1
Yjg5YjZ6AhgBhQEAAQAAEpYIChCt1KD2VBrK6+JIjPGbSQYWEghyEaRdKejivSoMQ3JldyBDcmVh
dGVkMAE54Fn7SYMrNxhBMPkBSoMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEobCg5w
eXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2Rj
Mzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokMDQ4ZDQ5MzctNmU3OC00OWFkLTllZDMtYzVi
MjdlNDgyNThlShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQ
AEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIY
AUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDIxYjViOWQ3LWVjNTktNDBhYi1iZjY1LTk1NzM2M2Fl
ZDRkZEo7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0
My41MTA0NjZKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3
ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogImFkN2ExMTJiLWY5NDQtNDViYy05YzE4LWJjOWQzMDE4
NjE1OCIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
NSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtd
fV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQw
NmU0NGFiODYiLCAiaWQiOiAiZTJmMDI3YzItZWQ2NC00MDU4LThkNzUtNjQ1OWMzYTllYWIwIiwg
ImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRf
cm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0
MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASgAQKELdQRmZyaC0pfEDI4tAa
vRsSCCE13pISM1wEKgxUYXNrIENyZWF0ZWQwATmwBwpKgys3GEG4WQpKgys3GEouCghjcmV3X2tl
eRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDA0OGQ0
OTM3LTZlNzgtNDlhZC05ZWQzLWM1YjI3ZTQ4MjU4ZUouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5
ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGUyZjAyN2MyLWVkNjQtNDA1OC04
ZDc1LTY0NTljM2E5ZWFiMEo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDIxYjViOWQ3LWVjNTktNDBh
Yi1iZjY1LTk1NzM2M2FlZDRkZEo6ChB0YXNrX2ZpbmdlcnByaW50EiYKJDNmMDU4NDE1LTNkNzUt
NDZhNi05OWFjLTIyZmM5OWM4OTBmM0o7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
MjAyNS0wNC0xN1QxNDozMzo0My41MTA0NDBKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokMjViMTgx
NjYtYjk3My00ZDM3LWI0OTUtMTI4YmIyMzQyNjhmegIYAYUBAAEAABKWCAoQuKzLEfp7NclRHh6f
Sm2/nxIIucJxIhUlkkwqDENyZXcgQ3JlYXRlZDABOfDOa0qDKzcYQfh/cUqDKzcYShsKDmNyZXdh
aV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24SCQoHMy4xMS4xMkouCghjcmV3
X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDhl
YzE2ZWQ3LTNiYjItNDkxZC04YjYxLTI3MWYxNmZlOGFlMUocCgxjcmV3X3Byb2Nlc3MSDAoKc2Vx
dWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsK
FWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKOgoQY3Jld19maW5nZXJwcmludBImCiQxN2MxN2Qz
NC1mZWZkLTQ1ZTktYWM2NS00ODY2ZTBlNTgwYTBKOwobY3Jld19maW5nZXJwcmludF9jcmVhdGVk
X2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNTE4MDI0Ss0CCgtjcmV3X2FnZW50cxK9Agq6Alt7
ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICJhODk2OTRm
ZS1lNjE1LTQzOWItOTQ4MC02MmVmZTRiNGY4ODUiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9z
ZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2Nh
bGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVk
PyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGlt
aXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSvsBCgpjcmV3X3Rhc2tzEuwBCukBW3sia2V5Ijog
IjI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlkIjogIjMwYjNmYmQ4LWM0MmMt
NDhlYi1hNDdlLTAzNzFlOTFmYTAxYiIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1h
bl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5
MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgB
hQEAAQAAEoAEChBSqrS4nl90w/7Z/EGzEvYpEgj54GNjf+XWQyoMVGFzayBDcmVhdGVkMAE5GJ55
SoMrNxhBOOx5SoMrNxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1
Y2NmYTNKMQoHY3Jld19pZBImCiQ4ZWMxNmVkNy0zYmIyLTQ5MWQtOGI2MS0yNzFmMTZmZThhZTFK
LgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19p
ZBImCiQzMGIzZmJkOC1jNDJjLTQ4ZWItYTQ3ZS0wMzcxZTkxZmEwMWJKOgoQY3Jld19maW5nZXJw
cmludBImCiQxN2MxN2QzNC1mZWZkLTQ1ZTktYWM2NS00ODY2ZTBlNTgwYTBKOgoQdGFza19maW5n
ZXJwcmludBImCiQ5MGYyODZhZC1lOWQ4LTRkOWUtYjA5MC05YTQyY2I3ZjYzNDhKOwobdGFza19m
aW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNTE3OTk4SjsKEWFn
ZW50X2ZpbmdlcnByaW50EiYKJDhhMTliOTQxLWI4MTgtNDU3MC05NmJjLWZlYWQ4MWNkMjk1NXoC
GAGFAQABAAASlggKEFFBQgm1QyQTSqKMpRKbGjwSCCoFcjlB3aOLKgxDcmV3IENyZWF0ZWQwATlY
/RVLgys3GEEwLR1Lgys3GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92
ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0
ODI1Y2NmYTNKMQoHY3Jld19pZBImCiQ5NDYyMGE5YS1jMjc1LTRjMjItYjk1OC0zZmZlYWRiOGFl
ZGJKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNy
ZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSjoKEGNy
ZXdfZmluZ2VycHJpbnQSJgokNzQzZDgxNDYtZTM0My00Njk0LTljODUtMmVmZWRkMzZhYTlkSjsK
G2NyZXdfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjUyOTAz
MkrNAgoLY3Jld19hZ2VudHMSvQIKugJbeyJrZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0
MjQwYTI5NGQiLCAiaWQiOiAiZWE5NTMyZDktZDczNy00ZTIyLWE3MjItMDg2ODQ4NGViMmY5Iiwg
InJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDI1LCAibWF4
X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1t
aW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9u
PyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr7AQoK
Y3Jld190YXNrcxLsAQrpAVt7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4
NiIsICJpZCI6ICIyMmUyMjZiNC1kNmI5LTRiMzgtOTQwMi05NDY0NTBhOWExZjUiLCAiYXN5bmNf
ZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjog
IlNjb3JlciIsICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQi
LCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKABAoQBNXlZK0H3tATZKP2Uw7bBxII0nvw
9UluHcgqDFRhc2sgQ3JlYXRlZDABOXhiJ0uDKzcYQZiwJ0uDKzcYSi4KCGNyZXdfa2V5EiIKIDVl
NmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokOTQ2MjBhOWEtYzI3
NS00YzIyLWI5NTgtM2ZmZWFkYjhhZWRiSi4KCHRhc2tfa2V5EiIKIDI3ZWYzOGNjOTlkYTRhOGRl
ZDcwZWQ0MDZlNDRhYjg2SjEKB3Rhc2tfaWQSJgokMjJlMjI2YjQtZDZiOS00YjM4LTk0MDItOTQ2
NDUwYTlhMWY1SjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNzQzZDgxNDYtZTM0My00Njk0LTljODUt
MmVmZWRkMzZhYTlkSjoKEHRhc2tfZmluZ2VycHJpbnQSJgokYjI3M2Q1OTMtYzcxZC00ZWRmLWI0
NmMtMTkyMmIxY2ZmZjY1SjsKG3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0
LTE3VDE0OjMzOjQzLjUyOTAwMko7ChFhZ2VudF9maW5nZXJwcmludBImCiQ5ZWYxZjc4Ny1lZTIz
LTRlNGUtOGJkYi04NGJiNjkwZjlkZjV6AhgBhQEAAQAAEpYIChCLERZYyvqc04xX3gWXPO59EggU
L4MaGDu+FyoMQ3JldyBDcmVhdGVkMAE54NLGS4MrNxhBeGbNS4MrNxhKGwoOY3Jld2FpX3ZlcnNp
b24SCQoHMC4xMTQuMEobCg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIK
IDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokZWI4MDdlMzEt
MGFiZS00NjQ0LWEwZjEtMDM4N2ZkZTYzYWQ1ShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFs
ShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19u
dW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDBkZDRmNDNlLTEzNzEt
NDVlNS1iMGNmLWY2ZDE5YTNjNzZkOUo7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
MjAyNS0wNC0xN1QxNDozMzo0My41NDA2MzhKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5Ijog
IjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogImY0ZDM5NzdlLTg4YjAt
NDQ2Zi04YzQ5LThkMjc0NDgwOGQ4NiIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZh
bHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19s
bG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFs
c2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIs
ICJ0b29sc19uYW1lcyI6IFtdfV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4
Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAiaWQiOiAiYjA4MTZmY2QtYjcyMS00YTZkLTk0
ODQtNDc4YWM4ZDdkYTg4IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5
MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAAS
gAQKEB0emJtWTE8969Cle2pbw8QSCFZbdjoOT655KgxUYXNrIENyZWF0ZWQwATlgt9VLgys3GEFo
CdZLgys3GEouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0ox
CgdjcmV3X2lkEiYKJGViODA3ZTMxLTBhYmUtNDY0NC1hMGYxLTAzODdmZGU2M2FkNUouCgh0YXNr
X2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGIw
ODE2ZmNkLWI3MjEtNGE2ZC05NDg0LTQ3OGFjOGQ3ZGE4OEo6ChBjcmV3X2ZpbmdlcnByaW50EiYK
JDBkZDRmNDNlLTEzNzEtNDVlNS1iMGNmLWY2ZDE5YTNjNzZkOUo6ChB0YXNrX2ZpbmdlcnByaW50
EiYKJGM1ODIyZGM4LWIyYmYtNDUyZS04YjQ2LWRkNjAyMDNkNTA0Zko7Cht0YXNrX2ZpbmdlcnBy
aW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My41NDA2MDlKOwoRYWdlbnRfZmlu
Z2VycHJpbnQSJgokZjZhMjA0MDYtOTM0Yy00ZjVmLWI1MzMtMDYwMTQwMGUxMTM1egIYAYUBAAEA
ABL4BwoQxJQoNY/stf/qihFVNGkp1hII9x5mkc7Cz5AqDENyZXcgQ3JlYXRlZDABOcB+REyDKzcY
QYA7SkyDKzcYShsKDmNyZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24S
CQoHMy4xMS4xMkouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2Zh
M0oxCgdjcmV3X2lkEiYKJDY3NmIxNjlhLTI5YTYtNDVhOC05NmNjLTY4MjMyNzgzYmU3NUoeCgxj
cmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hpY2FsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251
bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2Zp
bmdlcnByaW50EiYKJDMzZDRkZWU1LWIxOTEtNDAzYy04OWEwLTYyNWJiNjVmMmYxOUo7ChtjcmV3
X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My41NDg4OThKzQIK
C2NyZXdfYWdlbnRzEr0CCroCW3sia2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEy
OTRkIiwgImlkIjogImE0Mzc1OWYyLTY0NDEtNDNiMy1hOGRjLWYxYmQzMTU3MDdlZiIsICJyb2xl
IjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0i
OiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIs
ICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBm
YWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K2wEKCmNyZXdf
dGFza3MSzAEKyQFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAi
aWQiOiAiMDVmYzk4OWItMDY5Mi00ODAzLTgyMTMtMWQ2ODY5YTYwMTBlIiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJOb25l
IiwgImFnZW50X2tleSI6IG51bGwsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChDT
QOB/rAOBZumhFuP0yYFYEggS6cvMoHzBYioMVGFzayBDcmVhdGVkMAE5cB9dTIMrNxhBeHFdTIMr
NxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jl
d19pZBImCiQ2NzZiMTY5YS0yOWE2LTQ1YTgtOTZjYy02ODIzMjc4M2JlNzVKLgoIdGFza19rZXkS
IgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQwNWZjOTg5
Yi0wNjkyLTQ4MDMtODIxMy0xZDY4NjlhNjAxMGVKOgoQY3Jld19maW5nZXJwcmludBImCiQzM2Q0
ZGVlNS1iMTkxLTQwM2MtODlhMC02MjViYjY1ZjJmMTlKOgoQdGFza19maW5nZXJwcmludBImCiQ2
MGIzYTVkNi04MWUxLTQzOGYtOTE3Yy0yNjU4MzU4NjdmNzZKOwobdGFza19maW5nZXJwcmludF9j
cmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNTQ4ODcwSjsKEWFnZW50X2ZpbmdlcnBy
aW50EiYKJDgwZjQ2NDkxLWM0YjItNDExNy05MTM2LTZhOTQxZjI5ODBiZHoCGAGFAQABAAASnAEK
EObnhSrs8UQiBg/0Bricu2cSCMFW2rX4WlIFKgpUb29sIFVzYWdlMAE5CJ26TIMrNxhBmE/DTIMr
NxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEonCgl0b29sX25hbWUSGgoYQXNrIHF1ZXN0
aW9uIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAAS0AoKEBk04hfXjSFQOWWK
xJFMHB8SCLGFr6R51/OhKgxDcmV3IENyZWF0ZWQwATl4TzNNgys3GEFIMzlNgys3GEobCg5jcmV3
YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jl
d19rZXkSIgogNzQyNzU3MzEyZWY3YmI0ZWUwYjA2NjJkMWMyZTIxNzlKMQoHY3Jld19pZBImCiQ2
MWRiMTdjYi0xODQ3LTRmYzktOWNkMy1hMDY3ZjRkOGExMzRKHAoMY3Jld19wcm9jZXNzEgwKCnNl
cXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUob
ChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNjQ3OGY2
MjItZGVlOC00OWYyLTljMDktOGNiOTBjMjEwYzNjSjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRl
ZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjU2NDI2NEqGBQoLY3Jld19hZ2VudHMS9gQK8wRb
eyJrZXkiOiAiODljZjMxMWI0OGI1MjE2OWQ0MmYzOTI1YzViZTFjNWEiLCAiaWQiOiAiM2E3YjUx
ZGItYzhhZC00ZDU3LWE1OGUtOGIzY2EyYzIxZTI2IiwgInJvbGUiOiAiTWFuYWdlciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogdHJ1ZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xp
bWl0IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1
ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICIxNWZiMjExOC0zOTE0LTQ2ZTctODRkZi05M2E5ZDA0ZWVj
M2YiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUs
ICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0
LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
/AEKCmNyZXdfdGFza3MS7QEK6gFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0
NGFiODYiLCAiaWQiOiAiMzcwNTE2YzQtNWNhMC00NDBjLWE3YTctNWFhMjZiMDQ4MmFmIiwgImFz
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
ZSI6ICJNYW5hZ2VyIiwgImFnZW50X2tleSI6ICI4OWNmMzExYjQ4YjUyMTY5ZDQyZjM5MjVjNWJl
MWM1YSIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChBGCATWQjaQu6Zpx66QI8ik
Egg05IWKCa/6oyoMVGFzayBDcmVhdGVkMAE54CVFTYMrNxhB0HtFTYMrNxhKLgoIY3Jld19rZXkS
IgogNzQyNzU3MzEyZWY3YmI0ZWUwYjA2NjJkMWMyZTIxNzlKMQoHY3Jld19pZBImCiQ2MWRiMTdj
Yi0xODQ3LTRmYzktOWNkMy1hMDY3ZjRkOGExMzRKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRh
NGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQzNzA1MTZjNC01Y2EwLTQ0MGMtYTdh
Ny01YWEyNmIwNDgyYWZKOgoQY3Jld19maW5nZXJwcmludBImCiQ2NDc4ZjYyMi1kZWU4LTQ5ZjIt
OWMwOS04Y2I5MGMyMTBjM2NKOgoQdGFza19maW5nZXJwcmludBImCiQxMWU1ZDRhNi04ODc5LTQx
ZDktYWE3ZS04OTdhNDMzMDhlZWVKOwobdGFza19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIw
MjUtMDQtMTdUMTQ6MzM6NDMuNTY0MjMzSjsKEWFnZW50X2ZpbmdlcnByaW50EiYKJGUyMzRlYzA5
LTk3MzktNGE3Zi04MDgxLTBhNjJkYzNlNzY1ZHoCGAGFAQABAAASnQEKEFjGvzVUde99Ey+6qDaD
oG4SCFWLiXG8McaKKgpUb29sIFVzYWdlMAE5kI2lTYMrNxhBMG2tTYMrNxhKGwoOY3Jld2FpX3Zl
cnNpb24SCQoHMC4xMTQuMEooCgl0b29sX25hbWUSGwoZRGVsZWdhdGUgd29yayB0byBjb3dvcmtl
ckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEooIChCB+QR8dk6jPImbeJiKh5UYEgj00bY0zkAX
LSoMQ3JldyBDcmVhdGVkMAE5cAkrToMrNxhBKNM1ToMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoH
MC4xMTQuMEobCg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDMwM2I4
ZWRkNWIwMDg3MTBkNzZhYWY4MjVhNzA5ZTU1SjEKB2NyZXdfaWQSJgokNmUzM2ZiOTItNTZjMC00
OTY3LTk1MjItZGQ3ZWFhOWY5NjFlSh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVyYXJjaGljYWxKEQoL
Y3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJl
cl9vZl9hZ2VudHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNDY5MjUyODctOTY2OS00MTRl
LWEwNzctZGVjNDE2ZTE3NDRlSjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1
LTA0LTE3VDE0OjMzOjQzLjU4MDUyM0rfAgoLY3Jld19hZ2VudHMSzwIKzAJbeyJrZXkiOiAiOTJl
N2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiNzllZTU5MjktNjdlMy00NjNi
LTljYjEtMjFlNTdjNjZlNGQ4IiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogZmFsc2Us
ICJtYXhfaXRlciI6IDI1LCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6
ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwg
ImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRv
b2xzX25hbWVzIjogWyJzY29yaW5nX2V4YW1wbGVzIl19XUrbAQoKY3Jld190YXNrcxLMAQrJAVt7
ImtleSI6ICJmMzU3NWIwMTNjMjI5NGI3Y2M4ZTgwMzE1NWM4YmE0NiIsICJpZCI6ICJhZTlmMjgz
MC1hY2NhLTRiNmQtOGRjNy01MjMzZmZlYmJhM2QiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNl
LCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIk5vbmUiLCAiYWdlbnRfa2V5
IjogbnVsbCwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASgAQKEHOf6gKzlTQXpW3qTaF+
AlQSCIXd9+wt3hyiKgxUYXNrIENyZWF0ZWQwATkws0hOgys3GEE4BUlOgys3GEouCghjcmV3X2tl
eRIiCiAzMDNiOGVkZDViMDA4NzEwZDc2YWFmODI1YTcwOWU1NUoxCgdjcmV3X2lkEiYKJDZlMzNm
YjkyLTU2YzAtNDk2Ny05NTIyLWRkN2VhYTlmOTYxZUouCgh0YXNrX2tleRIiCiBmMzU3NWIwMTNj
MjI5NGI3Y2M4ZTgwMzE1NWM4YmE0NkoxCgd0YXNrX2lkEiYKJGFlOWYyODMwLWFjY2EtNGI2ZC04
ZGM3LTUyMzNmZmViYmEzZEo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDQ2OTI1Mjg3LTk2NjktNDE0
ZS1hMDc3LWRlYzQxNmUxNzQ0ZUo6ChB0YXNrX2ZpbmdlcnByaW50EiYKJDU4ZGVjODNjLWFjZTQt
NGI1Ni05YzhjLWQyZGQ1YjVlMWYyNUo7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
MjAyNS0wNC0xN1QxNDozMzo0My41ODA0OTNKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokMGVjNzAw
NWEtOGMzOC00N2RlLWI5YzctNjc3ZmFkOTU2ZmMzegIYAYUBAAEAABJpChDdogUTH/ocvPeyU/F2
dgcNEghEqOA/7aEsiyoQVG9vbCBVc2FnZSBFcnJvcjABOVBbt06DKzcYQajPvU6DKzcYShsKDmNy
ZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjB6AhgBhQEAAQAAEp0BChD7kRw1jX3K2nusXZXPvHyGEghl
QFQp8mFhuCoKVG9vbCBVc2FnZTABORCS/E6DKzcYQXDPBE+DKzcYShsKDmNyZXdhaV92ZXJzaW9u
EgkKBzAuMTE0LjBKKAoJdG9vbF9uYW1lEhsKGURlbGVnYXRlIHdvcmsgdG8gY293b3JrZXJKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '27952'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.31.1
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Thu, 17 Apr 2025 17:33:47 GMT
status:
code: 200
message: OK
version: 1
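
The interaction recorded above is CrewAI's telemetry export: the OTel OTLP exporter POSTs a protobuf-serialized `ExportTraceServiceRequest` to `https://telemetry.crewai.com:4319/v1/traces`, and the `"\n\0"` body in the 200 response appears to be the (effectively empty) serialized acknowledgement message. The sketch below is a minimal, hypothetical way to decode such a cassette body offline; it assumes PyYAML and the `opentelemetry-proto` package are available, and the cassette filename is a placeholder, not a file in this diff.

```python
import base64

import yaml
from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
    ExportTraceServiceRequest,
)

# Hypothetical path to the recorded cassette shown in this diff.
with open("telemetry_cassette.yaml") as fh:
    cassette = yaml.safe_load(fh)

raw = cassette["interactions"][0]["request"]["body"]
if isinstance(raw, str):
    # Plain base64 text; a YAML !!binary scalar would already load as bytes.
    raw = base64.b64decode(raw)

# Parse the OTLP trace export that the exporter POSTed (field names follow
# the current OTLP proto, i.e. resource_spans -> scope_spans -> spans).
export = ExportTraceServiceRequest()
export.ParseFromString(raw)

for resource_span in export.resource_spans:
    for scope_span in resource_span.scope_spans:
        for span in scope_span.spans:
            # Span names visible in this payload include "Crew Created",
            # "Task Created", "Tool Usage" and "Tool Usage Error".
            print(span.name)
```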


@@ -0,0 +1,81 @@
interactions:
- request:
body: '{"contents": [{"role": "user", "parts": [{"text": "\nCurrent Task: Give
me a list of 5 interesting ideas to explore for an article, what makes them
unique and interesting.\n\nThis is the expected criteria for your final answer:
Bullet point list of 5 interesting ideas.\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}]}], "system_instruction": {"parts": [{"text": "You are
Researcher. You''re an expert researcher, specialized in technology, software
engineering, AI and startups. You work as a freelancer and are now working on
doing research and analysis for a new customer.\nYour personal goal is: Make
the best research and analysis on content about AI and AI agents. Use the tool
provided to you.\nYou ONLY have access to the following tools, and should NEVER
make up tools that are not listed here:\n\nTool Name: what amazing tool\nTool
Arguments: {}\nTool Description: My tool\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [what amazing tool], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}]}, "generationConfig":
{"temperature": 0.7, "stop_sequences": ["\nObservation:"]}}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1718'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
user-agent:
- litellm/1.60.2
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent
response:
body:
string: !!binary |
H4sIAAAAAAAC/61STWvbQBC961cMe+klFnJUy8S30JZiaGloTQlEOaylibRktavujtKkxv+9s3Lk
rE2PFUha9r2Z9+ZjlwCISppa1ZLQixXc8Q3AbvwGzBpCQwxMV3zZS0dv3MOzi85MIXwOQWLT2qFp
aQVrMIg1kIUGDTpWA1Wj9PBgHUgDnFJVGkFu7UBwvea7evwxnXwKsH6nNQwegVp+rdUhV4u6hw5h
66Qynqzr0tKU5roiZc0KfreSQHbyjzLNGDNBsDb9wK52pfg1oHsp2WspPk/OFqC4bIeeQuA/feJz
r60L8LnXC6ZWgw8QC40WOvmIHlBW7ZgMBqNYdgyLhNJS7EXUxv3xfH/x1nxnNYbOdrZGPdH3E0E8
KKN8+50dWxNoPzbfbsQRlU/NF9v0zm7D/GZZml1dLoti8X4+X+RFvszzPJmkR1ExeK7qK5LkBZHH
NRCcoutpYx/RfLDDuCB5nh10ooU6IRTLV5wsSX0aezVhUWL/kWWVjjctWkKuX2pFL+OWfbrdiKhH
dOZr6lISNfPc5X9SK5anYsnrcA7z+onOq8NgGux4VLN5uphxzbMsuxTJPvkLk2fgU5EDAAA=
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Encoding:
- gzip
Content-Type:
- application/json; charset=UTF-8
Date:
- Thu, 17 Apr 2025 19:00:14 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=1972
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- nosniff
X-Frame-Options:
- SAMEORIGIN
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1
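
In this second cassette the request is LiteLLM calling Gemini's `generateContent` endpoint with the agent prompt shown in `body`, and the recorded response is stored as a YAML `!!binary` scalar holding the gzip-compressed JSON reply (matching the `Content-Encoding: gzip` and `Content-Type: application/json; charset=UTF-8` headers above). A minimal sketch, assuming PyYAML and a placeholder cassette path, of how that reply could be read back:

```python
import base64
import gzip
import json

import yaml

# Hypothetical path to the recorded cassette shown in this diff.
with open("gemini_cassette.yaml") as fh:
    cassette = yaml.safe_load(fh)

body = cassette["interactions"][0]["response"]["body"]["string"]
if isinstance(body, str):
    # A !!binary scalar loads as bytes; fall back to base64 text otherwise.
    body = base64.b64decode(body)

# vcrpy keeps the wire bytes, so the gzip Content-Encoding is still applied.
reply = json.loads(gzip.decompress(body))

# Typical generateContent response shape: candidates -> content -> parts -> text.
print(reply["candidates"][0]["content"]["parts"][0]["text"])
```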


@@ -0,0 +1 @@
"""Tests for CLI deploy."""

Some files were not shown because too many files have changed in this diff Show More