Compare commits


27 Commits

Author SHA1 Message Date
Greyson Lalonde
d36f53312c fix: remove dead _save_user_data function and stale mock 2026-03-13 01:40:42 -04:00
Greyson Lalonde
e303ca4243 fix: replace dual-lock with single cross-process lock in LanceDB storage 2026-03-13 01:29:41 -04:00
Greyson Lalonde
5a4f6956b3 fix: avoid blocking event loop in async browser session wait 2026-03-13 00:44:34 -04:00
Greyson Lalonde
3949d9f4d0 Merge branch 'main' into gl/fix/add-cross-process-locking 2026-03-13 00:39:53 -04:00
Greyson LaLonde
48eb7c6937 fix: propagate contextvars across all thread and executor boundaries
2026-03-13 00:32:22 -04:00
Greyson Lalonde
4d82b08fb2 fix: use async lock acquisition in chromadb async methods 2026-03-12 22:36:39 -04:00
Greyson Lalonde
fbd9b800d3 fix: add error handling to update_user_data 2026-03-12 22:34:16 -04:00
Greyson Lalonde
10099757dd fix: close TOCTOU race in browser session manager 2026-03-12 22:33:03 -04:00
Greyson Lalonde
a6e4d35bb9 perf: move embedding calls outside cross-process lock in RAG adapter 2026-03-12 22:23:13 -04:00
Greyson Lalonde
a41cfbd9f6 fix: avoid event loop deadlock in snowflake pool lock 2026-03-12 22:21:18 -04:00
Greyson Lalonde
0228445080 style: apply ruff formatting and import sorting 2026-03-12 22:06:56 -04:00
Greyson Lalonde
d2a156f244 fix: add cross-process and thread-safe locking to unprotected I/O 2026-03-12 22:02:30 -04:00
danglies007
d8e38f2f0b fix: propagate ContextVars into async task threads
threading.Thread() does not inherit the parent's contextvars.Context,
causing ContextVar-based state (OpenTelemetry spans, Langfuse trace IDs,
and any other request-scoped vars) to be silently dropped in async tasks.

Fix by calling contextvars.copy_context() before spawning each thread and
using ctx.run() as the thread target, which runs the function inside the
captured context.
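The pattern described above can be sketched in a few lines (a minimal illustration; the `request_id` variable is hypothetical, standing in for OTel/Langfuse request-scoped state):

```python
# Minimal sketch of the fix: copy the current context and make ctx.run()
# the thread target so the worker sees the parent's ContextVar values.
import contextvars
import threading

request_id = contextvars.ContextVar("request_id", default=None)

def worker(results):
    # Runs inside the copied context, so the parent's value is visible.
    results.append(request_id.get())

request_id.set("req-123")
results = []

ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(worker, results))
thread.start()
thread.join()
```

Without the `copy_context()` step, `request_id.get()` in the worker would return the default `None`.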

Affected locations:
- task.py: execute_async() — the primary async task execution path
- utilities/streaming.py: create_chunk_generator() — streaming execution path

Fixes: #4822
Related: #4168, #4286

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-12 15:33:58 -04:00
Greyson LaLonde
542afe61a8 docs: update changelog and version for v1.10.2a1
2026-03-11 11:44:00 -04:00
Greyson LaLonde
8a5b3bc237 feat: bump versions to 1.10.2a1
* feat: bump versions to 1.10.2a1

* chore: update tool specifications

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-11 11:30:11 -04:00
Greyson LaLonde
534f0707ca fix: resolve LockException under concurrent multi-process execution 2026-03-11 11:15:24 -04:00
Giulio Leone
0046f9a96f fix(bedrock): group parallel tool results in single user message (#4775)
* fix(bedrock): group parallel tool results in single user message

When an AWS Bedrock model makes multiple tool calls in a single
response, the Converse API requires all corresponding tool results
to be sent back in a single user message. Previously, each tool
result was emitted as a separate user message, causing:

  ValidationException: Expected toolResult blocks at messages.2.content

Fix: When processing consecutive tool messages, append the toolResult
block to the preceding user message (if it already contains
toolResult blocks) instead of creating a new message. This groups
all parallel tool results together while keeping tool results from
different assistant turns separate.
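The grouping logic can be sketched as follows (illustrative only; the function name and message shapes are assumed, not the actual crewAI implementation):

```python
# Group consecutive user messages carrying toolResult blocks into a single
# user message, mirroring the behavior described above (names assumed).
def group_tool_results(messages):
    def has_tool_result(msg):
        return msg["role"] == "user" and any(
            "toolResult" in block for block in msg["content"]
        )

    grouped = []
    for msg in messages:
        if has_tool_result(msg) and grouped and has_tool_result(grouped[-1]):
            # Append to the preceding user message instead of emitting a new one.
            grouped[-1]["content"].extend(msg["content"])
        else:
            grouped.append(msg)
    return grouped
```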

Fixes #4749

Signed-off-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>

* Update lib/crewai/tests/llms/bedrock/test_bedrock.py

* fix: group bedrock tool results

Co-authored-by: João Moura <joaomdmoura@gmail.com>

---------

Signed-off-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
Co-authored-by: Giulio Leone <6887247+giulio-leone@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2026-03-10 17:28:40 -03:00
Lucas Gomide
e72a80be6e Addressing MCP tool resolution & eliminating all shared mutable connection state (#4792)
* fix: allow hyphenated tool names in MCP references like notion#get-page

The _SLUG_RE regex on BaseAgent rejected MCP tool references containing
hyphens (e.g. "notion#get-page") because the fragment pattern only
matched \w (word chars)
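A hypothetical sketch of the widened pattern (the real `_SLUG_RE` may differ): the fragment after `#` now accepts hyphens alongside word characters.

```python
# Accept hyphenated fragments in MCP tool references like "notion#get-page".
import re

MCP_REF_RE = re.compile(r"^[\w-]+#[\w-]+$")
```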

* fix: create fresh MCP client per tool invocation to prevent parallel call races

When the LLM dispatches parallel calls to MCP tools on the same server, the executor runs them concurrently via ThreadPoolExecutor. Previously, all tools from a server shared a single MCPClient instance, and even the same tool called twice would reuse one client. Since each thread creates its own asyncio event loop via asyncio.run(), concurrent connect/disconnect calls on the shared client caused anyio cancel-scope errors ("Attempted to exit cancel scope in a different task than it was entered in").

The fix introduces a client_factory pattern: MCPNativeTool now receives a zero-arg callable that produces a fresh MCPClient + transport on every
_run_async() invocation. This eliminates all shared mutable connection state between concurrent calls, whether to the same tool or different tools from the same server.

* test: ensure we can filter hyphenated MCP tool
2026-03-10 14:00:40 -04:00
Lorenze Jay
7cffcab84a ensure we support tool search - saving tokens and dynamically inject appropriate tools during execution - anthropic (#4779)
* ensure we support tool search

* linted

* dont tool search if there is only one tool
2026-03-10 10:48:13 -07:00
João Moura
f070ce8abd fix: update llm parameter handling in human_feedback function (#4801)
Modified the llm parameter assignment to retrieve the model attribute from llm if it is not a string, ensuring compatibility with different llm types.
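The normalization described can be sketched as (assumed shape; the actual function and attribute access live inside crewAI's `human_feedback` path):

```python
# Use the llm value directly when it is a string; otherwise fall back to
# its model attribute, so both plain strings and LLM objects are accepted.
def resolve_model_name(llm):
    return llm if isinstance(llm, str) else llm.model

class DummyLLM:
    model = "gpt-4o"
```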
2026-03-10 14:27:09 -03:00
Sampson
d9f6e2222f Introduce more Brave Search tools (#4446)
* feat: add dedicated Brave Search tools for web, news, image, video, local POIs, and Brave's newest LLM Context endpoint

* fix: normalize transformed response shape

* revert legacy tool name

* fix: schema change prevented property resolution

* Update tool.specs.json

* fix: add fallback for search_language

* simplify exports

* makes rate-limiting logic per-instance

* fix(brave-tools): correct _refine_response return type annotations

The abstract method and subclasses annotated _refine_response as returning
dict[str, Any] but most implementations actually return list[dict[str, Any]].
Updated base to return Any, and each subclass to match its actual return type.
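The annotation change looks roughly like this (class names mirror the commit; the method bodies are illustrative):

```python
# The base method returns Any so each subclass can declare its actual shape.
from typing import Any

class BraveSearchToolBase:
    def _refine_response(self, data: dict[str, Any]) -> Any:
        raise NotImplementedError

class BraveWebSearchTool(BraveSearchToolBase):
    def _refine_response(self, data: dict[str, Any]) -> list[dict[str, Any]]:
        return [{"title": item} for item in data.get("results", [])]
```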

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Joao Moura <joaomdmoura@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 01:38:54 -03:00
Lucas Gomide
adef605410 fix: add missing list/dict methods to LockedListProxy and LockedDictProxy
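Such a proxy can be sketched as follows (assumed shape, not the actual LockedListProxy — the point is that every list method must be routed through the shared lock):

```python
# A lock-guarded list proxy: each forwarded method takes the shared lock,
# so missing a method means an unsynchronized escape hatch.
import threading

class LockedListProxy:
    def __init__(self, data, lock=None):
        self._data = data
        self._lock = lock or threading.RLock()

    def append(self, item):
        with self._lock:
            self._data.append(item)

    def extend(self, items):
        with self._lock:
            self._data.extend(items)

    def __getitem__(self, index):
        with self._lock:
            return self._data[index]

    def __len__(self):
        with self._lock:
            return len(self._data)
```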
2026-03-09 09:38:35 -04:00
Greyson LaLonde
cd42bcf035 refactor(memory): convert memory classes to serializable
* refactor(memory): convert Memory, MemoryScope, and MemorySlice to BaseModel

* fix(test): update mock memory attribute from _read_only to read_only

* fix: handle re-validation in wrap validators and patch BaseModel class in tests
2026-03-08 23:08:10 -04:00
Greyson LaLonde
bc45a7fbe3 feat: create action for nightly releases
2026-03-06 18:32:52 -05:00
Matt Aitchison
87759cdb14 fix(deps): bump gitpython to >=3.1.41 to resolve CVE path traversal vulnerability (#4740)
GitPython ==3.1.38 is affected by a high-severity path traversal
vulnerability (dependabot alert #1). Bump to >=3.1.41,<4 which
includes the fix.
2026-03-05 12:41:24 -06:00
Tiago Freire
059cb93aeb fix(executor): propagate contextvars context to parallel tool call threads
ThreadPoolExecutor threads do not inherit the calling thread's contextvars
context, causing _event_id_stack and _current_celery_task_id to be empty
in worker threads. This broke OTel span parenting for parallel tool calls
(missing parent_event_id) and lost the Celery task ID in the enterprise
tracking layer ([Task ID: no-task]).

Fix by capturing an independent context copy per submission via
contextvars.copy_context().run in CrewAgentExecutor._handle_native_tool_calls,
so each worker thread starts with the correct inherited context without
sharing mutable state across threads.
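The submission pattern can be sketched like this (a minimal illustration; the `task_id` variable is hypothetical, standing in for `_current_celery_task_id` and the event-ID stack):

```python
# Capture an independent context copy per submission so each worker thread
# inherits the caller's ContextVars without sharing mutable state.
import contextvars
from concurrent.futures import ThreadPoolExecutor

task_id = contextvars.ContextVar("task_id", default="no-task")

def tool_call(name):
    return f"{name} [Task ID: {task_id.get()}]"

task_id.set("celery-42")
with ThreadPoolExecutor() as pool:
    futures = [
        pool.submit(contextvars.copy_context().run, tool_call, name)
        for name in ("search", "fetch")
    ]
    results = [future.result() for future in futures]
```

Without the `copy_context().run` wrapper, each worker would see the default `"no-task"` value.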
2026-03-05 08:20:09 -05:00
Lorenze Jay
cebc52694e docs: update changelog and version for v1.10.1
2026-03-04 18:20:02 -05:00
104 changed files with 12624 additions and 5406 deletions

.github/workflows/nightly.yml (new file)

@@ -0,0 +1,127 @@
name: Nightly Canary Release

on:
  schedule:
    - cron: '0 6 * * *' # daily at 6am UTC
  workflow_dispatch:

jobs:
  check:
    name: Check for new commits
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      has_changes: ${{ steps.check.outputs.has_changes }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Check for commits in last 24h
        id: check
        run: |
          RECENT=$(git log --since="24 hours ago" --oneline | head -1)
          if [ -n "$RECENT" ]; then
            echo "has_changes=true" >> "$GITHUB_OUTPUT"
          else
            echo "has_changes=false" >> "$GITHUB_OUTPUT"
          fi

  build:
    name: Build nightly packages
    needs: check
    if: needs.check.outputs.has_changes == 'true' || github.event_name == 'workflow_dispatch'
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install uv
        uses: astral-sh/setup-uv@v4
      - name: Stamp nightly versions
        run: |
          DATE=$(date +%Y%m%d)
          for init_file in \
            lib/crewai/src/crewai/__init__.py \
            lib/crewai-tools/src/crewai_tools/__init__.py \
            lib/crewai-files/src/crewai_files/__init__.py; do
            CURRENT=$(python -c "
          import re
          text = open('$init_file').read()
          print(re.search(r'__version__\s*=\s*\"(.*?)\"\s*$', text, re.MULTILINE).group(1))
          ")
            NIGHTLY="${CURRENT}.dev${DATE}"
            sed -i "s/__version__ = .*/__version__ = \"${NIGHTLY}\"/" "$init_file"
            echo "$init_file: $CURRENT -> $NIGHTLY"
          done
          # Update cross-package dependency pins to nightly versions
          sed -i "s/\"crewai-tools==[^\"]*\"/\"crewai-tools==${NIGHTLY}\"/" lib/crewai/pyproject.toml
          sed -i "s/\"crewai==[^\"]*\"/\"crewai==${NIGHTLY}\"/" lib/crewai-tools/pyproject.toml
          echo "Updated cross-package dependency pins to ${NIGHTLY}"
      - name: Build packages
        run: |
          uv build --all-packages
          rm dist/.gitignore
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  publish:
    name: Publish nightly to PyPI
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: pypi
      url: https://pypi.org/p/crewai
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v6
        with:
          version: "0.8.4"
          python-version: "3.12"
          enable-cache: false
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist
      - name: Publish to PyPI
        env:
          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
        run: |
          failed=0
          for package in dist/*; do
            if [[ "$package" == *"crewai_devtools"* ]]; then
              echo "Skipping private package: $package"
              continue
            fi
            echo "Publishing $package"
            if ! uv publish "$package"; then
              echo "Failed to publish $package"
              failed=1
            fi
          done
          if [ $failed -eq 1 ]; then
            echo "Some packages failed to publish"
            exit 1
          fi

File diff suppressed because it is too large.


@@ -4,6 +4,71 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Mar 11, 2026">
## v1.10.2a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.2a1)
## What's Changed
### Features
- Add support for tool search, saving tokens, and dynamically injecting appropriate tools during execution for Anthropic.
- Introduce more Brave Search tools.
- Create action for nightly releases.
### Bug Fixes
- Fix LockException under concurrent multi-process execution.
- Resolve issues with grouping parallel tool results in a single user message.
- Address MCP tools resolutions and eliminate all shared mutable connections.
- Update LLM parameter handling in the human_feedback function.
- Add missing list/dict methods to LockedListProxy and LockedDictProxy.
- Propagate contextvars context to parallel tool call threads.
- Bump gitpython dependency to >=3.1.41 to resolve CVE path traversal vulnerability.
### Refactoring
- Refactor memory classes to be serializable.
### Documentation
- Update changelog and version for v1.10.1.
## Contributors
@akaKuruma, @github-actions[bot], @giulio-leone, @greysonlalonde, @joaomdmoura, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha
</Update>
<Update label="Mar 04, 2026">
## v1.10.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1)
## What's Changed
### Features
- Upgrade Gemini GenAI
### Bug Fixes
- Adjust executor listener value to avoid recursion
- Group parallel function response parts in a single Content object in Gemini
- Surface thought output from thinking models in Gemini
- Load MCP and platform tools when agent tools are None
- Support Jupyter environments with running event loops in A2A
- Use anonymous ID for ephemeral traces
- Conditionally pass plus header
- Skip signal handler registration in non-main threads for telemetry
- Inject tool errors as observations and resolve name collisions
- Upgrade pypdf from 4.x to 6.7.4 to resolve Dependabot alerts
- Resolve critical and high Dependabot security alerts
### Documentation
- Sync Composio tool documentation across locales
## Contributors
@giulio-leone, @greysonlalonde, @haxzie, @joaomdmoura, @lorenzejay, @mattatcha, @mplachta, @nicoferdi96
</Update>
<Update label="Feb 27, 2026">
## v1.10.1a1


@@ -1,97 +1,316 @@
---
title: Brave Search
description: The `BraveSearchTool` is designed to search the internet using the Brave Search API.
title: Brave Search Tools
description: A suite of tools for querying the Brave Search API — covering web, news, image, and video search.
icon: searchengin
mode: "wide"
---
# `BraveSearchTool`
# Brave Search Tools
## Description
This tool is designed to perform web searches using the Brave Search API. It allows you to search the internet with a specified query and retrieve relevant results. The tool supports customizable result counts and country-specific searches.
CrewAI offers a family of Brave Search tools, each targeting a specific [Brave Search API](https://brave.com/search/api/) endpoint.
Rather than a single catch-all tool, you can pick exactly the tool that matches the kind of results your agent needs:
| Tool | Endpoint | Use case |
| --- | --- | --- |
| `BraveWebSearchTool` | Web Search | General web results, snippets, and URLs |
| `BraveNewsSearchTool` | News Search | Recent news articles and headlines |
| `BraveImageSearchTool` | Image Search | Image results with dimensions and source URLs |
| `BraveVideoSearchTool` | Video Search | Video results from across the web |
| `BraveLocalPOIsTool` | Local POIs | Find points of interest (e.g., restaurants) |
| `BraveLocalPOIsDescriptionTool` | Local POIs | Retrieve AI-generated location descriptions |
| `BraveLLMContextTool` | LLM Context | Pre-extracted web content optimized for AI agents, LLM grounding, and RAG pipelines. |
All tools share a common base class (`BraveSearchToolBase`) that provides consistent behavior — rate limiting, automatic retries on `429` responses, header and parameter validation, and optional file saving.
<Note>
The older `BraveSearchTool` class is still available for backwards compatibility, but it is considered **legacy** and will not receive the same level of attention going forward. We recommend migrating to the specific tools listed above, which offer richer configuration and a more focused interface.
</Note>
<Note>
While many tools (e.g., _BraveWebSearchTool_, _BraveNewsSearchTool_, _BraveImageSearchTool_, and _BraveVideoSearchTool_) can be used with a free Brave Search API subscription/plan, some parameters (e.g., `enable_snippets`) and tools (e.g., _BraveLocalPOIsTool_ and _BraveLocalPOIsDescriptionTool_) require a paid plan. Consult your subscription plan's capabilities for clarification.
</Note>
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install 'crewai[tools]'
```
## Steps to Get Started
## Getting Started
To effectively use the `BraveSearchTool`, follow these steps:
1. **Install the package** — confirm that `crewai[tools]` is installed in your Python environment.
2. **Get an API key** — sign up at [api-dashboard.search.brave.com/login](https://api-dashboard.search.brave.com/login) to generate a key.
3. **Set the environment variable** — store your key as `BRAVE_API_KEY`, or pass it directly via the `api_key` parameter.
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire a Brave Search API key at https://api.search.brave.com/app/keys (sign in to generate a key).
3. **Environment Configuration**: Store your obtained API key in an environment variable named `BRAVE_API_KEY` to facilitate its use by the tool.
## Quick Examples
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
### Web Search
```python Code
from crewai_tools import BraveSearchTool
from crewai_tools import BraveWebSearchTool
# Initialize the tool for internet searching capabilities
tool = BraveSearchTool()
# Execute a search
results = tool.run(search_query="CrewAI agent framework")
tool = BraveWebSearchTool()
results = tool.run(q="CrewAI agent framework")
print(results)
```
## Parameters
The `BraveSearchTool` accepts the following parameters:
- **search_query**: Mandatory. The search query you want to use to search the internet.
- **country**: Optional. Specify the country for the search results. Default is empty string.
- **n_results**: Optional. Number of search results to return. Default is `10`.
- **save_file**: Optional. Whether to save the search results to a file. Default is `False`.
## Example with Parameters
Here is an example demonstrating how to use the tool with additional parameters:
### News Search
```python Code
from crewai_tools import BraveSearchTool
from crewai_tools import BraveNewsSearchTool
# Initialize the tool with custom parameters
tool = BraveSearchTool(
    country="US",
    n_results=5,
    save_file=True
tool = BraveNewsSearchTool()
results = tool.run(q="latest AI breakthroughs")
print(results)
```
### Image Search
```python Code
from crewai_tools import BraveImageSearchTool
tool = BraveImageSearchTool()
results = tool.run(q="northern lights photography")
print(results)
```
### Video Search
```python Code
from crewai_tools import BraveVideoSearchTool
tool = BraveVideoSearchTool()
results = tool.run(q="how to build AI agents")
print(results)
```
### Location POI Descriptions
```python Code
from crewai_tools import (
    BraveWebSearchTool,
    BraveLocalPOIsDescriptionTool,
)
# Execute a search
results = tool.run(search_query="Latest AI developments")
print(results)
web_search = BraveWebSearchTool(raw=True)
poi_details = BraveLocalPOIsDescriptionTool()
results = web_search.run(q="italian restaurants in pensacola, florida")
if "locations" in results:
    location_ids = [loc["id"] for loc in results["locations"]["results"]]
    if location_ids:
        descriptions = poi_details.run(ids=location_ids)
        print(descriptions)
```
## Common Constructor Parameters
Every Brave Search tool accepts the following parameters at initialization:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Brave API key. Falls back to the `BRAVE_API_KEY` environment variable. |
| `headers` | `dict \| None` | `None` | Additional HTTP headers to send with every request (e.g., `api-version`, geolocation headers). |
| `requests_per_second` | `float` | `1.0` | Maximum request rate. The tool will sleep between calls to stay within this limit. |
| `save_file` | `bool` | `False` | When `True`, each response is written to a timestamped `.txt` file. |
| `raw` | `bool` | `False` | When `True`, the full API JSON response is returned without any refinement. |
| `timeout` | `int` | `30` | HTTP request timeout in seconds. |
| `country` | `str \| None` | `None` | Legacy shorthand for geo-targeting (e.g., `"US"`). Prefer using the `country` query parameter directly. |
| `n_results` | `int` | `10` | Legacy shorthand for result count. Prefer using the `count` query parameter directly. |
<Warning>
The `country` and `n_results` constructor parameters exist for backwards compatibility. They are applied as defaults when the corresponding query parameters (`country`, `count`) are not provided at call time. For new code, we recommend passing `country` and `count` directly as query parameters instead.
</Warning>
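As a rough illustration of the per-instance throttling behavior behind `requests_per_second` (assumed implementation; the actual base class may differ):

```python
# Sleep between calls so the request rate stays under the configured limit.
import time

class RateLimiter:
    def __init__(self, requests_per_second=1.0):
        self.min_interval = 1.0 / requests_per_second
        self._last = 0.0

    def wait(self):
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```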
## Query Parameters
Each tool validates its query parameters against a Pydantic schema before sending the request.
The parameters vary slightly per endpoint — here is a summary of the most commonly used ones:
### BraveWebSearchTool
| Parameter | Description |
| --- | --- |
| `q` | **(required)** Search query string (max 400 chars). |
| `country` | Two-letter country code for geo-targeting (e.g., `"US"`). |
| `search_lang` | Two-letter language code for results (e.g., `"en"`). |
| `count` | Max number of results to return (1–20). |
| `offset` | Skip the first N pages of results (0–9). |
| `safesearch` | Content filter: `"off"`, `"moderate"`, or `"strict"`. |
| `freshness` | Recency filter: `"pd"` (past day), `"pw"` (past week), `"pm"` (past month), `"py"` (past year), or a date range like `"2025-01-01to2025-06-01"`. |
| `extra_snippets` | Include up to 5 additional text snippets per result. |
| `goggles` | Brave Goggles URL(s) and/or source for custom re-ranking. |
For the complete parameter and header reference, see the [Brave Web Search API documentation](https://api-dashboard.search.brave.com/api-reference/web/search/get).
### BraveNewsSearchTool
| Parameter | Description |
| --- | --- |
| `q` | **(required)** Search query string (max 400 chars). |
| `country` | Two-letter country code for geo-targeting. |
| `search_lang` | Two-letter language code for results. |
| `count` | Max number of results to return (1–50). |
| `offset` | Skip the first N pages of results (0–9). |
| `safesearch` | Content filter: `"off"`, `"moderate"`, or `"strict"`. |
| `freshness` | Recency filter (same options as Web Search). |
| `goggles` | Brave Goggles URL(s) and/or source for custom re-ranking. |
For the complete parameter and header reference, see the [Brave News Search API documentation](https://api-dashboard.search.brave.com/api-reference/news/news_search/get).
### BraveImageSearchTool
| Parameter | Description |
| --- | --- |
| `q` | **(required)** Search query string (max 400 chars). |
| `country` | Two-letter country code for geo-targeting. |
| `search_lang` | Two-letter language code for results. |
| `count` | Max number of results to return (1–200). |
| `safesearch` | Content filter: `"off"` or `"strict"`. |
| `spellcheck` | Attempt to correct spelling errors in the query. |
For the complete parameter and header reference, see the [Brave Image Search API documentation](https://api-dashboard.search.brave.com/api-reference/images/image_search).
### BraveVideoSearchTool
| Parameter | Description |
| --- | --- |
| `q` | **(required)** Search query string (max 400 chars). |
| `country` | Two-letter country code for geo-targeting. |
| `search_lang` | Two-letter language code for results. |
| `count` | Max number of results to return (1–50). |
| `offset` | Skip the first N pages of results (0–9). |
| `safesearch` | Content filter: `"off"`, `"moderate"`, or `"strict"`. |
| `freshness` | Recency filter (same options as Web Search). |
For the complete parameter and header reference, see the [Brave Video Search API documentation](https://api-dashboard.search.brave.com/api-reference/videos/video_search/get).
### BraveLocalPOIsTool
| Parameter | Description |
| --- | --- |
| `ids` | **(required)** A list of unique identifiers for the desired locations. |
| `search_lang` | Two-letter language code for results. |
For the complete parameter and header reference, see [Brave Local POIs API documentation](https://api-dashboard.search.brave.com/api-reference/web/local_pois).
### BraveLocalPOIsDescriptionTool
| Parameter | Description |
| --- | --- |
| `ids` | **(required)** A list of unique identifiers for the desired locations. |
For the complete parameter and header reference, see [Brave POI Descriptions API documentation](https://api-dashboard.search.brave.com/api-reference/web/poi_descriptions).
## Custom Headers
All tools support custom HTTP request headers. The Web Search tool, for example, accepts geolocation headers for location-aware results:
```python Code
from crewai_tools import BraveWebSearchTool
tool = BraveWebSearchTool(
    headers={
        "x-loc-lat": "37.7749",
        "x-loc-long": "-122.4194",
        "x-loc-city": "San Francisco",
        "x-loc-state": "CA",
        "x-loc-country": "US",
    }
)
results = tool.run(q="best coffee shops nearby")
```
You can also update headers after initialization using the `set_headers()` method:
```python Code
tool.set_headers({"api-version": "2025-01-01"})
```
## Raw Mode
By default, each tool refines the API response into a concise list of results. If you need the full, unprocessed API response, enable raw mode:
```python Code
from crewai_tools import BraveWebSearchTool
tool = BraveWebSearchTool(raw=True)
full_response = tool.run(q="Brave Search API")
```
## Agent Integration Example
Here's how to integrate the `BraveSearchTool` with a CrewAI agent:
Here's how to equip a CrewAI agent with multiple Brave Search tools:
```python Code
from crewai import Agent
from crewai.project import agent
from crewai_tools import BraveSearchTool
from crewai_tools import BraveWebSearchTool, BraveNewsSearchTool
# Initialize the tool
brave_search_tool = BraveSearchTool()
web_search = BraveWebSearchTool()
news_search = BraveNewsSearchTool()
# Define an agent with the BraveSearchTool
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config["researcher"],
        allow_delegation=False,
        tools=[brave_search_tool]
        tools=[web_search, news_search],
    )
```
## Advanced Example
Combining multiple parameters for a targeted search:
```python Code
from crewai_tools import BraveWebSearchTool
tool = BraveWebSearchTool(
    requests_per_second=0.5, # conservative rate limit
    save_file=True,
)
results = tool.run(
    q="artificial intelligence news",
    country="US",
    search_lang="en",
    count=5,
    freshness="pm", # past month only
    extra_snippets=True,
)
print(results)
```
## Migrating from `BraveSearchTool` (Legacy)
If you are currently using `BraveSearchTool`, switching to the new tools is straightforward:
```python Code
# Before (legacy)
from crewai_tools import BraveSearchTool
tool = BraveSearchTool(country="US", n_results=5, save_file=True)
results = tool.run(search_query="AI agents")
# After (recommended)
from crewai_tools import BraveWebSearchTool
tool = BraveWebSearchTool(save_file=True)
results = tool.run(q="AI agents", country="US", count=5)
```
Key differences:
- **Import**: Use `BraveWebSearchTool` (or the news/image/video variant) instead of `BraveSearchTool`.
- **Query parameter**: Use `q` instead of `search_query`. (Both `search_query` and `query` are still accepted for convenience, but `q` is the preferred parameter.)
- **Result count**: Pass `count` as a query parameter instead of `n_results` at init time.
- **Country**: Pass `country` as a query parameter instead of at init time.
- **API key**: Can now be passed directly via `api_key=` in addition to the `BRAVE_API_KEY` environment variable.
- **Rate limiting**: Configurable via `requests_per_second` with automatic retry on `429` responses.
## Conclusion
The Brave Search tool suite gives your CrewAI agents flexible, endpoint-specific access to the Brave Search API, letting them run real-time, relevant searches directly from Python and process the results programmatically. Whether you need web pages, breaking news, images, or videos, there is a dedicated tool with validated parameters and built-in resilience. Pick the tool that fits your use case, and refer to the [Brave Search API documentation](https://brave.com/search/api/) for full details on available parameters and response formats.


@@ -4,6 +4,71 @@ description: "CrewAI product updates, improvements, and bug fixes"
icon: "clock"
mode: "wide"
---
<Update label="Mar 11, 2026">
## v1.10.2a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.2a1)
## What's Changed
### Features
- Add tool search support for Anthropic, saving tokens and dynamically injecting the appropriate tools during execution.
- Introduce more Brave Search tools.
- Create an action for nightly releases.
### Bug Fixes
- Fix LockException during concurrent multi-process execution.
- Resolve issues with grouping parallel tool results into a single user message.
- Address MCP tool resolution issues and eliminate all shared mutable connections.
- Update LLM parameter handling in the human_feedback function.
- Add missing list/dict methods to LockedListProxy and LockedDictProxy.
- Propagate contextvars context to parallel tool-call threads.
- Update the gitpython dependency to >=3.1.41 to address a CVE path traversal vulnerability.
### Refactoring
- Refactor memory classes to be serializable.
### Documentation
- Update changelog and version for v1.10.1.
## Contributors
@akaKuruma, @github-actions[bot], @giulio-leone, @greysonlalonde, @joaomdmoura, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha
</Update>
<Update label="Mar 4, 2026">
## v1.10.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1)
## What's Changed
### Features
- Upgrade Gemini GenAI
### Bug Fixes
- Adjust executor listener value to avoid recursion
- Group parallel function response parts into a single Content object in Gemini
- Display thinking output from thinking models in Gemini
- Load MCP and platform tools when agent tools are None
- Support Jupyter environments with running event loops in A2A
- Use an anonymous ID for ephemeral traces
- Conditionally pass the plus header
- Skip signal handler registration on non-main threads for telemetry
- Inject tool errors as observations and resolve name collisions
- Upgrade pypdf from 4.x to 6.7.4 to resolve Dependabot alerts
- Resolve critical and high Dependabot security alerts
### Documentation
- Sync Composio tool documentation across locales
## Contributors
@giulio-leone, @greysonlalonde, @haxzie, @joaomdmoura, @lorenzejay, @mattatcha, @mplachta, @nicoferdi96
</Update>
<Update label="Feb 27, 2026">
## v1.10.1a1


@@ -4,6 +4,71 @@ description: "CrewAI product updates, improvements, and bug fixes"
icon: "clock"
mode: "wide"
---
<Update label="Mar 11, 2026">
## v1.10.2a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.2a1)
## What's Changed
### Features
- Add tool search support for Anthropic, saving tokens and dynamically injecting the appropriate tools during execution.
- Introduce more Brave Search tools.
- Create an action for nightly releases.
### Bug Fixes
- Fix LockException during concurrent multi-process execution.
- Resolve issues with grouping parallel tool results into a single user message.
- Address MCP tool resolution issues and eliminate all shared mutable connections.
- Update LLM parameter handling in the human_feedback function.
- Add missing list/dict methods to LockedListProxy and LockedDictProxy.
- Propagate contextvars context to parallel tool-call threads.
- Update the gitpython dependency to >=3.1.41 to address a CVE path traversal vulnerability.
### Refactoring
- Refactor memory classes to be serializable.
### Documentation
- Update changelog and version for v1.10.1.
## Contributors
@akaKuruma, @github-actions[bot], @giulio-leone, @greysonlalonde, @joaomdmoura, @jonathansampson, @lorenzejay, @lucasgomide, @mattatcha
</Update>
<Update label="Mar 4, 2026">
## v1.10.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.10.1)
## What's Changed
### Features
- Upgrade Gemini GenAI
### Bug Fixes
- Adjust executor listener value to avoid recursion
- Group parallel function response parts into a single Content object in Gemini
- Display thinking output from thinking models in Gemini
- Load MCP and platform tools when agent tools are None
- Support Jupyter environments with running event loops in A2A
- Use an anonymous ID for ephemeral traces
- Conditionally pass the plus header
- Skip signal handler registration on non-main threads for telemetry
- Inject tool errors as observations and resolve name collisions
- Upgrade pypdf from 4.x to 6.7.4 to resolve Dependabot alerts
- Resolve critical and high Dependabot security alerts
### Documentation
- Sync Composio tool documentation across locales
## Contributors
@giulio-leone, @greysonlalonde, @haxzie, @joaomdmoura, @lorenzejay, @mattatcha, @mplachta, @nicoferdi96
</Update>
<Update label="Feb 27, 2026">
## v1.10.1a1


@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.10.1"
__version__ = "1.10.2a1"


@@ -11,7 +11,7 @@ dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.10.1",
"crewai==1.10.2a1",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",
@@ -108,7 +108,7 @@ stagehand = [
"stagehand>=0.4.1",
]
github = [
"gitpython==3.1.38",
"gitpython>=3.1.41,<4",
"PyGithub==1.59.1",
]
rag = [


@@ -10,7 +10,18 @@ from crewai_tools.aws.s3.writer_tool import S3WriterTool
from crewai_tools.tools.ai_mind_tool.ai_mind_tool import AIMindTool
from crewai_tools.tools.apify_actors_tool.apify_actors_tool import ApifyActorsTool
from crewai_tools.tools.arxiv_paper_tool.arxiv_paper_tool import ArxivPaperTool
from crewai_tools.tools.brave_search_tool.brave_image_tool import BraveImageSearchTool
from crewai_tools.tools.brave_search_tool.brave_llm_context_tool import (
BraveLLMContextTool,
)
from crewai_tools.tools.brave_search_tool.brave_local_pois_tool import (
BraveLocalPOIsDescriptionTool,
BraveLocalPOIsTool,
)
from crewai_tools.tools.brave_search_tool.brave_news_tool import BraveNewsSearchTool
from crewai_tools.tools.brave_search_tool.brave_search_tool import BraveSearchTool
from crewai_tools.tools.brave_search_tool.brave_video_tool import BraveVideoSearchTool
from crewai_tools.tools.brave_search_tool.brave_web_tool import BraveWebSearchTool
from crewai_tools.tools.brightdata_tool.brightdata_dataset import (
BrightDataDatasetTool,
)
@@ -200,7 +211,14 @@ __all__ = [
"ArxivPaperTool",
"BedrockInvokeAgentTool",
"BedrockKBRetrieverTool",
"BraveImageSearchTool",
"BraveLLMContextTool",
"BraveLocalPOIsDescriptionTool",
"BraveLocalPOIsTool",
"BraveNewsSearchTool",
"BraveSearchTool",
"BraveVideoSearchTool",
"BraveWebSearchTool",
"BrightDataDatasetTool",
"BrightDataSearchTool",
"BrightDataWebUnlockerTool",
@@ -291,4 +309,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.10.1"
__version__ = "1.10.2a1"


@@ -1,7 +1,9 @@
from collections.abc import Callable
import os
from pathlib import Path
from typing import Any
from crewai.utilities.lock_store import lock as store_lock
from lancedb import ( # type: ignore[import-untyped]
DBConnection as LanceDBConnection,
connect as lancedb_connect,
@@ -33,21 +35,24 @@ class LanceDBAdapter(Adapter):
_db: LanceDBConnection = PrivateAttr()
_table: LanceDBTable = PrivateAttr()
_lock_name: str = PrivateAttr(default="")
def model_post_init(self, __context: Any) -> None:
self._db = lancedb_connect(self.uri)
self._table = self._db.open_table(self.table_name)
self._lock_name = f"lancedb:{os.path.realpath(str(self.uri))}"
super().model_post_init(__context)
def query(self, question: str) -> str: # type: ignore[override]
query = self.embedding_function([question])[0]
results = (
self._table.search(query, vector_column_name=self.vector_column_name)
.limit(self.top_k)
.select([self.text_column_name])
.to_list()
)
with store_lock(self._lock_name):
results = (
self._table.search(query, vector_column_name=self.vector_column_name)
.limit(self.top_k)
.select([self.text_column_name])
.to_list()
)
values = [result[self.text_column_name] for result in results]
return "\n".join(values)
@@ -56,4 +61,5 @@ class LanceDBAdapter(Adapter):
*args: Any,
**kwargs: Any,
) -> None:
self._table.add(*args, **kwargs)
with store_lock(self._lock_name):
self._table.add(*args, **kwargs)


@@ -1,6 +1,9 @@
from __future__ import annotations
import asyncio
import contextvars
import logging
import threading
from typing import TYPE_CHECKING
@@ -18,6 +21,9 @@ class BrowserSessionManager:
This class maintains separate browser sessions for different threads,
enabling concurrent usage of browsers in multi-threaded environments.
Browsers are created lazily only when needed by tools.
Uses per-key events to serialize creation for the same thread_id without
blocking unrelated callers or wasting resources on duplicate sessions.
"""
def __init__(self, region: str = "us-west-2"):
@@ -27,8 +33,10 @@ class BrowserSessionManager:
region: AWS region for browser client
"""
self.region = region
self._lock = threading.Lock()
self._async_sessions: dict[str, tuple[BrowserClient, AsyncBrowser]] = {}
self._sync_sessions: dict[str, tuple[BrowserClient, SyncBrowser]] = {}
self._creating: dict[str, threading.Event] = {}
async def get_async_browser(self, thread_id: str) -> AsyncBrowser:
"""Get or create an async browser for the specified thread.
@@ -39,10 +47,29 @@ class BrowserSessionManager:
Returns:
An async browser instance specific to the thread
"""
if thread_id in self._async_sessions:
return self._async_sessions[thread_id][1]
loop = asyncio.get_event_loop()
while True:
with self._lock:
if thread_id in self._async_sessions:
return self._async_sessions[thread_id][1]
if thread_id not in self._creating:
self._creating[thread_id] = threading.Event()
break
event = self._creating[thread_id]
ctx = contextvars.copy_context()
await loop.run_in_executor(None, ctx.run, event.wait)
return await self._create_async_browser_session(thread_id)
try:
browser_client, browser = await self._create_async_browser_session(
thread_id
)
with self._lock:
self._async_sessions[thread_id] = (browser_client, browser)
return browser
finally:
with self._lock:
evt = self._creating.pop(thread_id)
evt.set()
def get_sync_browser(self, thread_id: str) -> SyncBrowser:
"""Get or create a sync browser for the specified thread.
@@ -53,19 +80,33 @@ class BrowserSessionManager:
Returns:
A sync browser instance specific to the thread
"""
if thread_id in self._sync_sessions:
return self._sync_sessions[thread_id][1]
while True:
with self._lock:
if thread_id in self._sync_sessions:
return self._sync_sessions[thread_id][1]
if thread_id not in self._creating:
self._creating[thread_id] = threading.Event()
break
event = self._creating[thread_id]
event.wait()
return self._create_sync_browser_session(thread_id)
try:
return self._create_sync_browser_session(thread_id)
finally:
with self._lock:
evt = self._creating.pop(thread_id)
evt.set()
async def _create_async_browser_session(self, thread_id: str) -> AsyncBrowser:
async def _create_async_browser_session(
self, thread_id: str
) -> tuple[BrowserClient, AsyncBrowser]:
"""Create a new async browser session for the specified thread.
Args:
thread_id: Unique identifier for the thread
Returns:
The newly created async browser instance
Tuple of (BrowserClient, AsyncBrowser).
Raises:
Exception: If browser session creation fails
@@ -75,10 +116,8 @@ class BrowserSessionManager:
browser_client = BrowserClient(region=self.region)
try:
# Start browser session
browser_client.start()
# Get WebSocket connection info
ws_url, headers = browser_client.generate_ws_headers()
logger.info(
@@ -87,7 +126,6 @@ class BrowserSessionManager:
from playwright.async_api import async_playwright
# Connect to browser using Playwright
playwright = await async_playwright().start()
browser = await playwright.chromium.connect_over_cdp(
endpoint_url=ws_url, headers=headers, timeout=30000
@@ -96,17 +134,13 @@ class BrowserSessionManager:
f"Successfully connected to async browser for thread {thread_id}"
)
# Store session resources
self._async_sessions[thread_id] = (browser_client, browser)
return browser
return browser_client, browser
except Exception as e:
logger.error(
f"Failed to create async browser session for thread {thread_id}: {e}"
)
# Clean up resources if session creation fails
if browser_client:
try:
browser_client.stop()
@@ -132,10 +166,8 @@ class BrowserSessionManager:
browser_client = BrowserClient(region=self.region)
try:
# Start browser session
browser_client.start()
# Get WebSocket connection info
ws_url, headers = browser_client.generate_ws_headers()
logger.info(
@@ -144,7 +176,6 @@ class BrowserSessionManager:
from playwright.sync_api import sync_playwright
# Connect to browser using Playwright
playwright = sync_playwright().start()
browser = playwright.chromium.connect_over_cdp(
endpoint_url=ws_url, headers=headers, timeout=30000
@@ -153,8 +184,8 @@ class BrowserSessionManager:
f"Successfully connected to sync browser for thread {thread_id}"
)
# Store session resources
self._sync_sessions[thread_id] = (browser_client, browser)
with self._lock:
self._sync_sessions[thread_id] = (browser_client, browser)
return browser
@@ -163,7 +194,6 @@ class BrowserSessionManager:
f"Failed to create sync browser session for thread {thread_id}: {e}"
)
# Clean up resources if session creation fails
if browser_client:
try:
browser_client.stop()
@@ -178,13 +208,13 @@ class BrowserSessionManager:
Args:
thread_id: Unique identifier for the thread
"""
if thread_id not in self._async_sessions:
logger.warning(f"No async browser session found for thread {thread_id}")
return
with self._lock:
if thread_id not in self._async_sessions:
logger.warning(f"No async browser session found for thread {thread_id}")
return
browser_client, browser = self._async_sessions[thread_id]
browser_client, browser = self._async_sessions.pop(thread_id)
# Close browser
if browser:
try:
await browser.close()
@@ -193,7 +223,6 @@ class BrowserSessionManager:
f"Error closing async browser for thread {thread_id}: {e}"
)
# Stop browser client
if browser_client:
try:
browser_client.stop()
@@ -202,8 +231,6 @@ class BrowserSessionManager:
f"Error stopping browser client for thread {thread_id}: {e}"
)
# Remove session from dictionary
del self._async_sessions[thread_id]
logger.info(f"Async browser session cleaned up for thread {thread_id}")
def close_sync_browser(self, thread_id: str) -> None:
@@ -212,13 +239,13 @@ class BrowserSessionManager:
Args:
thread_id: Unique identifier for the thread
"""
if thread_id not in self._sync_sessions:
logger.warning(f"No sync browser session found for thread {thread_id}")
return
with self._lock:
if thread_id not in self._sync_sessions:
logger.warning(f"No sync browser session found for thread {thread_id}")
return
browser_client, browser = self._sync_sessions[thread_id]
browser_client, browser = self._sync_sessions.pop(thread_id)
# Close browser
if browser:
try:
browser.close()
@@ -227,7 +254,6 @@ class BrowserSessionManager:
f"Error closing sync browser for thread {thread_id}: {e}"
)
# Stop browser client
if browser_client:
try:
browser_client.stop()
@@ -236,19 +262,17 @@ class BrowserSessionManager:
f"Error stopping browser client for thread {thread_id}: {e}"
)
# Remove session from dictionary
del self._sync_sessions[thread_id]
logger.info(f"Sync browser session cleaned up for thread {thread_id}")
async def close_all_browsers(self) -> None:
"""Close all browser sessions."""
# Close all async browsers
async_thread_ids = list(self._async_sessions.keys())
with self._lock:
async_thread_ids = list(self._async_sessions.keys())
sync_thread_ids = list(self._sync_sessions.keys())
for thread_id in async_thread_ids:
await self.close_async_browser(thread_id)
# Close all sync browsers
sync_thread_ids = list(self._sync_sessions.keys())
for thread_id in sync_thread_ids:
self.close_sync_browser(thread_id)


@@ -1,9 +1,11 @@
import logging
import os
from pathlib import Path
from typing import Any
from uuid import uuid4
import chromadb
from crewai.utilities.lock_store import lock as store_lock
from pydantic import BaseModel, Field, PrivateAttr
from crewai_tools.rag.base_loader import BaseLoader
@@ -38,22 +40,32 @@ class RAG(Adapter):
_client: Any = PrivateAttr()
_collection: Any = PrivateAttr()
_embedding_service: EmbeddingService = PrivateAttr()
_lock_name: str = PrivateAttr(default="")
def model_post_init(self, __context: Any) -> None:
try:
if self.persist_directory:
self._client = chromadb.PersistentClient(path=self.persist_directory)
else:
self._client = chromadb.Client()
self._collection = self._client.get_or_create_collection(
name=self.collection_name,
metadata={
"hnsw:space": "cosine",
"description": "CrewAI Knowledge Base",
},
self._lock_name = (
f"chromadb:{os.path.realpath(self.persist_directory)}"
if self.persist_directory
else "chromadb:ephemeral"
)
with store_lock(self._lock_name):
if self.persist_directory:
self._client = chromadb.PersistentClient(
path=self.persist_directory
)
else:
self._client = chromadb.Client()
self._collection = self._client.get_or_create_collection(
name=self.collection_name,
metadata={
"hnsw:space": "cosine",
"description": "CrewAI Knowledge Base",
},
)
self._embedding_service = EmbeddingService(
provider=self.embedding_provider,
model=self.embedding_model,
@@ -87,29 +99,8 @@ class RAG(Adapter):
loader_result = loader.load(source_content)
doc_id = loader_result.doc_id
existing_doc = self._collection.get(
where={"source": source_content.source_ref}, limit=1
)
existing_doc_id = (
existing_doc and existing_doc["metadatas"][0]["doc_id"]
if existing_doc["metadatas"]
else None
)
if existing_doc_id == doc_id:
logger.warning(
f"Document with source {loader_result.source} already exists"
)
return
# Document with the same source ref exists but its content has changed; delete the old reference
if existing_doc_id and existing_doc_id != loader_result.doc_id:
logger.warning(f"Deleting old document with doc_id {existing_doc_id}")
self._collection.delete(where={"doc_id": existing_doc_id})
documents = []
chunks = chunker.chunk(loader_result.content)
documents = []
for i, chunk in enumerate(chunks):
doc_metadata = (metadata or {}).copy()
doc_metadata["chunk_index"] = i
@@ -136,7 +127,6 @@ class RAG(Adapter):
ids = [doc.id for doc in documents]
metadatas = []
for doc in documents:
doc_metadata = doc.metadata.copy()
doc_metadata.update(
@@ -148,27 +138,48 @@ class RAG(Adapter):
)
metadatas.append(doc_metadata)
try:
self._collection.add(
ids=ids,
embeddings=embeddings,
documents=contents,
metadatas=metadatas,
with store_lock(self._lock_name):
existing_doc = self._collection.get(
where={"source": source_content.source_ref}, limit=1
)
logger.info(f"Added {len(documents)} documents to knowledge base")
except Exception as e:
logger.error(f"Failed to add documents to ChromaDB: {e}")
existing_doc_id = (
existing_doc and existing_doc["metadatas"][0]["doc_id"]
if existing_doc["metadatas"]
else None
)
if existing_doc_id == doc_id:
logger.warning(
f"Document with source {loader_result.source} already exists"
)
return
if existing_doc_id and existing_doc_id != loader_result.doc_id:
logger.warning(f"Deleting old document with doc_id {existing_doc_id}")
self._collection.delete(where={"doc_id": existing_doc_id})
try:
self._collection.add(
ids=ids,
embeddings=embeddings,
documents=contents,
metadatas=metadatas,
)
logger.info(f"Added {len(documents)} documents to knowledge base")
except Exception as e:
logger.error(f"Failed to add documents to ChromaDB: {e}")
def query(self, question: str, where: dict[str, Any] | None = None) -> str: # type: ignore
try:
question_embedding = self._embedding_service.embed_text(question)
results = self._collection.query(
query_embeddings=[question_embedding],
n_results=self.top_k,
where=where,
include=["documents", "metadatas", "distances"],
)
with store_lock(self._lock_name):
results = self._collection.query(
query_embeddings=[question_embedding],
n_results=self.top_k,
where=where,
include=["documents", "metadatas", "distances"],
)
if (
not results
@@ -201,7 +212,8 @@ class RAG(Adapter):
def delete_collection(self) -> None:
try:
self._client.delete_collection(self.collection_name)
with store_lock(self._lock_name):
self._client.delete_collection(self.collection_name)
logger.info(f"Deleted collection: {self.collection_name}")
except Exception as e:
logger.error(f"Failed to delete collection: {e}")


@@ -1,7 +1,18 @@
from crewai_tools.tools.ai_mind_tool.ai_mind_tool import AIMindTool
from crewai_tools.tools.apify_actors_tool.apify_actors_tool import ApifyActorsTool
from crewai_tools.tools.arxiv_paper_tool.arxiv_paper_tool import ArxivPaperTool
from crewai_tools.tools.brave_search_tool.brave_image_tool import BraveImageSearchTool
from crewai_tools.tools.brave_search_tool.brave_llm_context_tool import (
BraveLLMContextTool,
)
from crewai_tools.tools.brave_search_tool.brave_local_pois_tool import (
BraveLocalPOIsDescriptionTool,
BraveLocalPOIsTool,
)
from crewai_tools.tools.brave_search_tool.brave_news_tool import BraveNewsSearchTool
from crewai_tools.tools.brave_search_tool.brave_search_tool import BraveSearchTool
from crewai_tools.tools.brave_search_tool.brave_video_tool import BraveVideoSearchTool
from crewai_tools.tools.brave_search_tool.brave_web_tool import BraveWebSearchTool
from crewai_tools.tools.brightdata_tool import (
BrightDataDatasetTool,
BrightDataSearchTool,
@@ -185,7 +196,14 @@ __all__ = [
"AIMindTool",
"ApifyActorsTool",
"ArxivPaperTool",
"BraveImageSearchTool",
"BraveLLMContextTool",
"BraveLocalPOIsDescriptionTool",
"BraveLocalPOIsTool",
"BraveNewsSearchTool",
"BraveSearchTool",
"BraveVideoSearchTool",
"BraveWebSearchTool",
"BrightDataDatasetTool",
"BrightDataSearchTool",
"BrightDataWebUnlockerTool",


@@ -0,0 +1,322 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from datetime import datetime
import json
import logging
import os
import threading
import time
from typing import Any, ClassVar
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, Field
import requests
logger = logging.getLogger(__name__)
# Brave API error codes that indicate non-retryable quota/usage exhaustion.
_QUOTA_CODES = frozenset({"QUOTA_LIMITED", "USAGE_LIMIT_EXCEEDED"})
def _save_results_to_file(content: str) -> None:
"""Saves the search results to a file."""
filename = f"search_results_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.txt"
with open(filename, "w") as file:
file.write(content)
def _parse_error_body(resp: requests.Response) -> dict[str, Any] | None:
"""Extract the structured "error" object from a Brave API error response."""
try:
body = resp.json()
error = body.get("error")
return error if isinstance(error, dict) else None
except (ValueError, KeyError):
return None
def _raise_for_error(resp: requests.Response) -> None:
"""Brave Search API error responses contain helpful JSON payloads"""
status = resp.status_code
try:
body = json.dumps(resp.json())
except (ValueError, KeyError):
body = resp.text[:500]
raise RuntimeError(f"Brave Search API error (HTTP {status}): {body}")
def _is_retryable(resp: requests.Response) -> bool:
"""Return True for transient failures that are worth retrying.
* 429 + RATE_LIMITED — the per-second sliding window is full.
* 5xx — transient server-side errors.
Quota exhaustion (QUOTA_LIMITED, USAGE_LIMIT_EXCEEDED) is
explicitly excluded: retrying will never succeed until the billing
period resets.
"""
if resp.status_code == 429:
error = _parse_error_body(resp) or {}
return error.get("code") not in _QUOTA_CODES
return 500 <= resp.status_code < 600
def _retry_delay(resp: requests.Response, attempt: int) -> float:
"""Compute wait time before the next retry attempt.
Prefers the server-supplied Retry-After header when available;
falls back to exponential backoff (1s, 2s, 4s, ...).
"""
retry_after = resp.headers.get("Retry-After")
if retry_after is not None:
try:
return max(0.0, float(retry_after))
except (ValueError, TypeError):
pass
return float(2**attempt)
class BraveSearchToolBase(BaseTool, ABC):
"""
Base class for Brave Search API interactions.
Individual tool subclasses must provide the following:
- search_url
- header_schema (pydantic model)
- args_schema (pydantic model)
- _refine_payload() -> dict[str, Any]
"""
search_url: str
raw: bool = False
args_schema: type[BaseModel]
header_schema: type[BaseModel]
# Tool options (legacy parameters)
country: str | None = None
save_file: bool = False
n_results: int = 10
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
name="BRAVE_API_KEY",
description="API key for Brave Search",
required=True,
),
]
)
def __init__(
self,
*,
api_key: str | None = None,
headers: dict[str, Any] | None = None,
requests_per_second: float = 1.0,
save_file: bool = False,
raw: bool = False,
timeout: int = 30,
**kwargs: Any,
):
super().__init__(**kwargs)
self._api_key = api_key or os.environ.get("BRAVE_API_KEY")
if not self._api_key:
raise ValueError("BRAVE_API_KEY environment variable is required")
self.raw = bool(raw)
self._timeout = int(timeout)
self.save_file = bool(save_file)
self._requests_per_second = float(requests_per_second)
self._headers = self._build_and_validate_headers(headers or {})
# Per-instance rate limiting: each instance has its own clock and lock.
# Total process rate is the sum of limits of instances you create.
self._last_request_time: float = 0
self._rate_limit_lock = threading.Lock()
@property
def api_key(self) -> str:
return self._api_key
@property
def headers(self) -> dict[str, Any]:
return self._headers
def set_headers(self, headers: dict[str, Any]) -> BraveSearchToolBase:
merged = {**self._headers, **{k.lower(): v for k, v in headers.items()}}
self._headers = self._build_and_validate_headers(merged)
return self
def _build_and_validate_headers(self, headers: dict[str, Any]) -> dict[str, Any]:
normalized = {k.lower(): v for k, v in headers.items()}
normalized.setdefault("x-subscription-token", self._api_key)
normalized.setdefault("accept", "application/json")
try:
self.header_schema(**normalized)
except Exception as e:
raise ValueError(f"Invalid headers: {e}") from e
return normalized
def _rate_limit(self) -> None:
"""Enforce minimum interval between requests for this instance. Thread-safe."""
if self._requests_per_second <= 0:
return
min_interval = 1.0 / self._requests_per_second
with self._rate_limit_lock:
now = time.time()
next_allowed = self._last_request_time + min_interval
if now < next_allowed:
time.sleep(next_allowed - now)
now = time.time()
self._last_request_time = now
def _make_request(
self, params: dict[str, Any], *, _max_retries: int = 3
) -> dict[str, Any]:
"""Execute an HTTP GET against the Brave Search API with retry logic."""
last_resp: requests.Response | None = None
# Retry the request up to _max_retries times
for attempt in range(_max_retries):
self._rate_limit()
# Make the request
try:
resp = requests.get(
self.search_url,
headers=self._headers,
params=params,
timeout=self._timeout,
)
except requests.ConnectionError as exc:
raise RuntimeError(
f"Brave Search API connection failed: {exc}"
) from exc
except requests.Timeout as exc:
raise RuntimeError(
f"Brave Search API request timed out after {self._timeout}s: {exc}"
) from exc
# Log the rate limit headers and request details
logger.debug(
"Brave Search API request: %s %s -> %d",
"GET",
resp.url,
resp.status_code,
)
# Response was OK, return the JSON body
if resp.ok:
try:
return resp.json()
except ValueError as exc:
raise RuntimeError(
f"Brave Search API returned invalid JSON (HTTP {resp.status_code}): {exc}"
) from exc
# Response was not OK, but is retryable
# (e.g., 429 Too Many Requests, 500 Internal Server Error)
if _is_retryable(resp) and attempt < _max_retries - 1:
delay = _retry_delay(resp, attempt)
logger.warning(
"Brave Search API returned %d. Retrying in %.1fs (attempt %d/%d)",
resp.status_code,
delay,
attempt + 1,
_max_retries,
)
time.sleep(delay)
last_resp = resp
continue
# Response was not OK, nor was it retryable
# (e.g., 422 Unprocessable Entity, 400 Bad Request (OPTION_NOT_IN_PLAN))
_raise_for_error(resp)
# All retries exhausted
_raise_for_error(last_resp or resp) # type: ignore[possibly-undefined]
return {} # unreachable (here to satisfy the type checker and linter)
def _run(self, q: str | None = None, **params: Any) -> Any:
# Allow positional usage: tool.run("latest Brave browser features")
if q is not None:
params["q"] = q
params = self._common_payload_refinement(params)
# Validate only schema fields
schema_keys = self.args_schema.model_fields
payload_in = {k: v for k, v in params.items() if k in schema_keys}
try:
validated = self.args_schema(**payload_in)
except Exception as e:
raise ValueError(f"Invalid parameters: {e}") from e
# The subclass may have additional refinements to apply to the payload, such as goggles or other parameters
payload = self._refine_request_payload(validated.model_dump(exclude_none=True))
response = self._make_request(payload)
if not self.raw:
response = self._refine_response(response)
if self.save_file:
_save_results_to_file(json.dumps(response, indent=2))
return response
@abstractmethod
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
"""Subclass must implement: transform validated params dict into API request params."""
raise NotImplementedError
@abstractmethod
def _refine_response(self, response: dict[str, Any]) -> Any:
"""Subclass must implement: transform response dict into a more useful format."""
raise NotImplementedError
_EMPTY_VALUES: ClassVar[tuple[None, str, str, list[Any]]] = (None, "", "null", [])
def _common_payload_refinement(self, params: dict[str, Any]) -> dict[str, Any]:
"""Common payload refinement for all tools."""
# crewAI's schema pipeline (ensure_all_properties_required in
# pydantic_schema_utils.py) marks every property as required so
# that OpenAI strict-mode structured outputs work correctly.
# The side-effect is that the LLM fills in *every* parameter —
# even truly optional ones — using placeholder values such as
# None, "", "null", or []. Only optional fields are affected,
# so we limit the check to those.
fields = self.args_schema.model_fields
params = {
k: v
for k, v in params.items()
# Permit custom and required fields, and fields with non-empty values
if k not in fields or fields[k].is_required() or v not in self._EMPTY_VALUES
}
# Make sure params has "q" for query instead of "query" or "search_query"
query = params.get("query") or params.get("search_query")
if query is not None and "q" not in params:
params["q"] = query
params.pop("query", None)
params.pop("search_query", None)
# If "count" was not explicitly provided, use n_results
# (only when the schema actually supports a "count" field)
if "count" in self.args_schema.model_fields:
if "count" not in params and self.n_results is not None:
params["count"] = self.n_results
# If "country" was not explicitly provided, but self.country is set, use it
# (only when the schema actually supports a "country" field)
if "country" in self.args_schema.model_fields:
if "country" not in params and self.country is not None:
params["country"] = self.country
return params


@@ -0,0 +1,42 @@
from typing import Any
from pydantic import BaseModel
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.schemas import (
ImageSearchHeaders,
ImageSearchParams,
)
class BraveImageSearchTool(BraveSearchToolBase):
"""A tool that performs image searches using the Brave Search API."""
name: str = "Brave Image Search"
args_schema: type[BaseModel] = ImageSearchParams
header_schema: type[BaseModel] = ImageSearchHeaders
description: str = (
"A tool that performs image searches using the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/images/search"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: dict[str, Any]) -> list[dict[str, Any]]:
# Make the response more concise, and easier to consume
results = response.get("results", [])
return [
{
"title": result.get("title"),
"url": result.get("properties", {}).get("url"),
"dimensions": f"{w}x{h}"
if (w := result.get("properties", {}).get("width"))
and (h := result.get("properties", {}).get("height"))
else None,
}
for result in results
]

View File

@@ -0,0 +1,32 @@
from typing import Any
from pydantic import BaseModel
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.response_types import LLMContext
from crewai_tools.tools.brave_search_tool.schemas import (
LLMContextHeaders,
LLMContextParams,
)
class BraveLLMContextTool(BraveSearchToolBase):
"""A tool that retrieves context for LLM usage from the Brave Search API."""
name: str = "Brave LLM Context"
args_schema: type[BaseModel] = LLMContextParams
header_schema: type[BaseModel] = LLMContextHeaders
description: str = (
"A tool that retrieves context for LLM usage from the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/llm/context"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: LLMContext.Response) -> LLMContext.Response:
"""The LLM Context response schema is fairly simple. Return as is."""
return response

View File

@@ -0,0 +1,109 @@
from typing import Any
from pydantic import BaseModel
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.response_types import LocalPOIs
from crewai_tools.tools.brave_search_tool.schemas import (
LocalPOIsDescriptionHeaders,
LocalPOIsDescriptionParams,
LocalPOIsHeaders,
LocalPOIsParams,
)
DayOpeningHours = LocalPOIs.DayOpeningHours
OpeningHours = LocalPOIs.OpeningHours
LocationResult = LocalPOIs.LocationResult
LocalPOIsResponse = LocalPOIs.Response
def _flatten_slots(slots: list[DayOpeningHours]) -> list[dict[str, str]]:
"""Convert a list of DayOpeningHours dicts into simplified entries."""
return [
{
"day": slot["full_name"].lower(),
"opens": slot["opens"],
"closes": slot["closes"],
}
for slot in slots
]
def _simplify_opening_hours(result: LocationResult) -> list[dict[str, str]] | None:
"""Collapse opening_hours into a flat list of {day, opens, closes} dicts."""
hours = result.get("opening_hours")
if not hours:
return None
entries: list[dict[str, str]] = []
current = hours.get("current_day")
if current:
entries.extend(_flatten_slots(current))
days = hours.get("days")
if days:
for day_slots in days:
entries.extend(_flatten_slots(day_slots))
return entries or None
class BraveLocalPOIsTool(BraveSearchToolBase):
"""A tool that retrieves local POIs using the Brave Search API."""
name: str = "Brave Local POIs"
args_schema: type[BaseModel] = LocalPOIsParams
header_schema: type[BaseModel] = LocalPOIsHeaders
description: str = (
"A tool that retrieves local POIs using the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/local/pois"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: LocalPOIsResponse) -> list[dict[str, Any]]:
results = response.get("results", [])
return [
{
"title": result.get("title"),
"url": result.get("url"),
"description": result.get("description"),
"address": result.get("postal_address", {}).get("displayAddress"),
"contact": result.get("contact", {}).get("telephone")
or result.get("contact", {}).get("email")
or None,
"opening_hours": _simplify_opening_hours(result),
}
for result in results
]
class BraveLocalPOIsDescriptionTool(BraveSearchToolBase):
"""A tool that retrieves AI-generated descriptions for local POIs using the Brave Search API."""
name: str = "Brave Local POI Descriptions"
args_schema: type[BaseModel] = LocalPOIsDescriptionParams
header_schema: type[BaseModel] = LocalPOIsDescriptionHeaders
description: str = (
"A tool that retrieves AI-generated descriptions for local POIs using the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/local/descriptions"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: LocalPOIsResponse) -> list[dict[str, Any]]:
# Make the response more concise and easier to consume
results = response.get("results", [])
return [
{
"id": result.get("id"),
"description": result.get("description"),
}
for result in results
]

View File

@@ -0,0 +1,39 @@
from typing import Any
from pydantic import BaseModel
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.schemas import (
NewsSearchHeaders,
NewsSearchParams,
)
class BraveNewsSearchTool(BraveSearchToolBase):
"""A tool that performs news searches using the Brave Search API."""
name: str = "Brave News Search"
args_schema: type[BaseModel] = NewsSearchParams
header_schema: type[BaseModel] = NewsSearchHeaders
description: str = (
"A tool that performs news searches using the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/news/search"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: dict[str, Any]) -> list[dict[str, Any]]:
# Make the response more concise and easier to consume
results = response.get("results", [])
return [
{
"url": result.get("url"),
"title": result.get("title"),
"description": result.get("description"),
}
for result in results
]

View File

@@ -1,4 +1,3 @@
from datetime import datetime
import json
import os
import time
@@ -10,17 +9,13 @@ from pydantic import BaseModel, Field
from pydantic.types import StringConstraints
import requests
from crewai_tools.tools.brave_search_tool.base import _save_results_to_file
from crewai_tools.tools.brave_search_tool.schemas import WebSearchParams
load_dotenv()
def _save_results_to_file(content: str) -> None:
"""Saves the search results to a file."""
filename = f"search_results_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.txt"
with open(filename, "w") as file:
file.write(content)
FreshnessPreset = Literal["pd", "pw", "pm", "py"]
FreshnessRange = Annotated[
str, StringConstraints(pattern=r"^\d{4}-\d{2}-\d{2}to\d{4}-\d{2}-\d{2}$")
@@ -29,51 +24,6 @@ Freshness = FreshnessPreset | FreshnessRange
SafeSearch = Literal["off", "moderate", "strict"]
class BraveSearchToolSchema(BaseModel):
"""Input for BraveSearchTool"""
query: str = Field(..., description="Search query to perform")
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
)
search_language: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return. Actual number may be less.",
)
offset: int | None = Field(
default=None, description="Skip the first N result sets/pages. Max is 9."
)
safesearch: SafeSearch | None = Field(
default=None,
description="Filter out explicit content. Options: off/moderate/strict",
)
spellcheck: bool | None = Field(
default=None,
description="Attempt to correct spelling errors in the search query.",
)
freshness: Freshness | None = Field(
default=None,
description="Enforce freshness of results. Options: pd/pw/pm/py, or YYYY-MM-DDtoYYYY-MM-DD",
)
text_decorations: bool | None = Field(
default=None,
description="Include markup to highlight search terms in the results.",
)
extra_snippets: bool | None = Field(
default=None,
description="Include up to 5 text snippets for each page if possible.",
)
operators: bool | None = Field(
default=None,
description="Whether to apply search operators (e.g., site:example.com).",
)
# TODO: Extend support to additional endpoints (e.g., /images, /news, etc.)
class BraveSearchTool(BaseTool):
"""A tool that performs web searches using the Brave Search API."""
@@ -83,7 +33,7 @@ class BraveSearchTool(BaseTool):
"A tool that performs web searches using the Brave Search API. "
"Results are returned as structured JSON data."
)
args_schema: type[BaseModel] = BraveSearchToolSchema
args_schema: type[BaseModel] = WebSearchParams
search_url: str = "https://api.search.brave.com/res/v1/web/search"
n_results: int = 10
save_file: bool = False
@@ -120,8 +70,8 @@ class BraveSearchTool(BaseTool):
# Construct and send the request
try:
# Maintain both "search_query" and "query" for backwards compatibility
query = kwargs.get("search_query") or kwargs.get("query")
# Fallback to "query" or "search_query" for backwards compatibility
query = kwargs.get("q") or kwargs.get("query") or kwargs.get("search_query")
if not query:
raise ValueError("Query is required")
@@ -130,8 +80,11 @@ class BraveSearchTool(BaseTool):
if country := kwargs.get("country"):
payload["country"] = country
if search_language := kwargs.get("search_language"):
payload["search_language"] = search_language
# Fallback to "search_language" for backwards compatibility
if search_lang := kwargs.get("search_lang") or kwargs.get(
"search_language"
):
payload["search_lang"] = search_lang
# Fallback to deprecated n_results parameter if no count is provided
count = kwargs.get("count")

View File

@@ -0,0 +1,39 @@
from typing import Any
from pydantic import BaseModel
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.schemas import (
VideoSearchHeaders,
VideoSearchParams,
)
class BraveVideoSearchTool(BraveSearchToolBase):
"""A tool that performs video searches using the Brave Search API."""
name: str = "Brave Video Search"
args_schema: type[BaseModel] = VideoSearchParams
header_schema: type[BaseModel] = VideoSearchHeaders
description: str = (
"A tool that performs video searches using the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/videos/search"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: dict[str, Any]) -> list[dict[str, Any]]:
# Make the response more concise and easier to consume
results = response.get("results", [])
return [
{
"url": result.get("url"),
"title": result.get("title"),
"description": result.get("description"),
}
for result in results
]

View File

@@ -0,0 +1,45 @@
from typing import Any
from pydantic import BaseModel
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.schemas import (
WebSearchHeaders,
WebSearchParams,
)
class BraveWebSearchTool(BraveSearchToolBase):
"""A tool that performs web searches using the Brave Search API."""
name: str = "Brave Web Search"
args_schema: type[BaseModel] = WebSearchParams
header_schema: type[BaseModel] = WebSearchHeaders
description: str = (
"A tool that performs web searches using the Brave Search API. "
"Results are returned as structured JSON data."
)
search_url: str = "https://api.search.brave.com/res/v1/web/search"
def _refine_request_payload(self, params: dict[str, Any]) -> dict[str, Any]:
return params
def _refine_response(self, response: dict[str, Any]) -> list[dict[str, Any]]:
results = response.get("web", {}).get("results", [])
refined = []
for result in results:
snippets = result.get("extra_snippets") or []
if not snippets:
desc = result.get("description")
if desc:
snippets = [desc]
refined.append(
{
"url": result.get("url"),
"title": result.get("title"),
"snippets": snippets,
}
)
return refined

View File

@@ -0,0 +1,67 @@
from __future__ import annotations
from typing import Literal, TypedDict
class LocalPOIs:
class PostalAddress(TypedDict, total=False):
type: Literal["PostalAddress"]
country: str
postalCode: str
streetAddress: str
addressRegion: str
addressLocality: str
displayAddress: str
class DayOpeningHours(TypedDict):
abbr_name: str
full_name: str
opens: str
closes: str
class OpeningHours(TypedDict, total=False):
current_day: list[LocalPOIs.DayOpeningHours]
days: list[list[LocalPOIs.DayOpeningHours]]
class LocationResult(TypedDict, total=False):
provider_url: str
title: str
url: str
id: str | None
opening_hours: LocalPOIs.OpeningHours | None
postal_address: LocalPOIs.PostalAddress | None
class Response(TypedDict, total=False):
type: Literal["local_pois"]
results: list[LocalPOIs.LocationResult]
class LLMContext:
class LLMContextItem(TypedDict, total=False):
snippets: list[str]
title: str
url: str
class LLMContextMapItem(TypedDict, total=False):
name: str
snippets: list[str]
title: str
url: str
class LLMContextPOIItem(TypedDict, total=False):
name: str
snippets: list[str]
title: str
url: str
class Grounding(TypedDict, total=False):
generic: list[LLMContext.LLMContextItem]
poi: LLMContext.LLMContextPOIItem
map: list[LLMContext.LLMContextMapItem]
class Sources(TypedDict, total=False):
pass
class Response(TypedDict, total=False):
grounding: LLMContext.Grounding
sources: LLMContext.Sources
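TypedDicts with `total=False`, as used above, only describe the shape for type checkers; at runtime they are plain dicts, so optional keys are read with `.get()`. A small sketch of the same pattern (class names mirror the ones above but are reduced):

```python
from __future__ import annotations

from typing import Literal, TypedDict


class PostalAddress(TypedDict, total=False):
    type: Literal["PostalAddress"]
    displayAddress: str


class LocationResult(TypedDict, total=False):
    title: str
    postal_address: PostalAddress | None


# At runtime a TypedDict instance is an ordinary dict; total=False keys may be absent.
result: LocationResult = {"title": "Cafe"}
address = (result.get("postal_address") or {}).get("displayAddress")
```

The `(… or {}).get(…)` idiom handles both a missing key and an explicit `None` value in one expression.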

View File

@@ -0,0 +1,525 @@
from typing import Annotated, Literal
from pydantic import BaseModel, Field
from pydantic.types import StringConstraints
# Common types
Units = Literal["metric", "imperial"]
SafeSearch = Literal["off", "moderate", "strict"]
Freshness = (
Literal["pd", "pw", "pm", "py"]
| Annotated[
str, StringConstraints(pattern=r"^\d{4}-\d{2}-\d{2}to\d{4}-\d{2}-\d{2}$")
]
)
ResultFilter = list[
Literal[
"discussions",
"faq",
"infobox",
"news",
"query",
"summarizer",
"videos",
"web",
"locations",
]
]
class LLMContextParams(BaseModel):
"""Parameters for Brave LLM Context endpoint."""
q: str = Field(
description="Search query to perform",
min_length=1,
max_length=400,
)
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
pattern=r"^[A-Z]{2}$",
)
search_lang: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
pattern=r"^[a-z]{2}$",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return. Actual number may be less.",
ge=1,
le=50,
)
maximum_number_of_urls: int | None = Field(
default=None,
description="The maximum number of URLs to include in the context.",
ge=1,
le=50,
)
maximum_number_of_tokens: int | None = Field(
default=None,
description="The approximate maximum number of tokens to include in the context.",
ge=1,
le=32768,
)
maximum_number_of_snippets: int | None = Field(
default=None,
description="The maximum number of different snippets to include in the context.",
ge=1,
le=100,
)
context_threshold_mode: (
Literal["disabled", "strict", "lenient", "balanced"] | None
) = Field(
default=None,
description="The mode to use for the context thresholding.",
)
maximum_number_of_tokens_per_url: int | None = Field(
default=None,
description="The maximum number of tokens to include for each URL in the context.",
ge=1,
le=8192,
)
maximum_number_of_snippets_per_url: int | None = Field(
default=None,
description="The maximum number of snippets to include per URL.",
ge=1,
le=100,
)
goggles: str | list[str] | None = Field(
default=None,
description="Goggles act as a custom re-ranking mechanism. Goggle source or URLs.",
)
enable_local: bool | None = Field(
default=None,
description="Whether to enable local recall. If unset, this is auto-detected: local recall is used when any of the localization headers are provided.",
)
class WebSearchParams(BaseModel):
"""Parameters for Brave Web Search endpoint."""
q: str = Field(
description="Search query to perform",
min_length=1,
max_length=400,
)
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
pattern=r"^[A-Z]{2}$",
)
search_lang: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
pattern=r"^[a-z]{2}$",
)
ui_lang: str | None = Field(
default=None,
description="Language code for the user interface (e.g., 'en-US', 'es-AR').",
pattern=r"^[a-z]{2}-[A-Z]{2}$",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return. Actual number may be less.",
ge=1,
le=20,
)
offset: int | None = Field(
default=None,
description="Skip the first N result sets/pages. Max is 9.",
ge=0,
le=9,
)
safesearch: Literal["off", "moderate", "strict"] | None = Field(
default=None,
description="Filter out explicit content. Options: off/moderate/strict",
)
spellcheck: bool | None = Field(
default=None,
description="Attempt to correct spelling errors in the search query.",
)
freshness: Freshness | None = Field(
default=None,
description="Enforce freshness of results. Options: pd/pw/pm/py, or YYYY-MM-DDtoYYYY-MM-DD",
)
text_decorations: bool | None = Field(
default=None,
description="Include markup to highlight search terms in the results.",
)
extra_snippets: bool | None = Field(
default=None,
description="Include up to 5 text snippets for each page if possible.",
)
result_filter: ResultFilter | None = Field(
default=None,
description="Filter the results by type. Options: discussions/faq/infobox/news/query/summarizer/videos/web/locations. Note: The `count` parameter is applied only to the `web` results.",
)
units: Units | None = Field(
default=None,
description="The units to use for the results. Options: metric/imperial",
)
goggles: str | list[str] | None = Field(
default=None,
description="Goggles act as a custom re-ranking mechanism. Goggle source or URLs.",
)
summary: bool | None = Field(
default=None,
description="Whether to generate a summarizer ID for the results.",
)
enable_rich_callback: bool | None = Field(
default=None,
description="Whether to enable rich callbacks for the results. Requires Pro level subscription.",
)
include_fetch_metadata: bool | None = Field(
default=None,
description="Whether to include fetch metadata (e.g., last fetch time) in the results.",
)
operators: bool | None = Field(
default=None,
description="Whether to apply search operators (e.g., site:example.com).",
)
class LocalPOIsParams(BaseModel):
"""Parameters for Brave Local POIs endpoint."""
ids: list[str] = Field(
description="List of POI IDs to retrieve. Maximum of 20. IDs are valid for 8 hours.",
min_length=1,
max_length=20,
)
search_lang: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
pattern=r"^[a-z]{2}$",
)
ui_lang: str | None = Field(
default=None,
description="Language code for the user interface (e.g., 'en-US', 'es-AR').",
pattern=r"^[a-z]{2}-[A-Z]{2}$",
)
units: Units | None = Field(
default=None,
description="The units to use for the results. Options: metric/imperial",
)
class LocalPOIsDescriptionParams(BaseModel):
"""Parameters for Brave Local POI Descriptions endpoint."""
ids: list[str] = Field(
description="List of POI IDs to retrieve. Maximum of 20. IDs are valid for 8 hours.",
min_length=1,
max_length=20,
)
class ImageSearchParams(BaseModel):
"""Parameters for Brave Image Search endpoint."""
q: str = Field(
description="Search query to perform",
min_length=1,
max_length=400,
)
search_lang: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
pattern=r"^[a-z]{2}$",
)
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
pattern=r"^[A-Z]{2}$",
)
safesearch: Literal["off", "strict"] | None = Field(
default=None,
description="Filter out explicit content. Default is strict.",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return.",
ge=1,
le=200,
)
spellcheck: bool | None = Field(
default=None,
description="Attempt to correct spelling errors in the search query.",
)
class VideoSearchParams(BaseModel):
"""Parameters for Brave Video Search endpoint."""
q: str = Field(
description="Search query to perform",
min_length=1,
max_length=400,
)
search_lang: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
pattern=r"^[a-z]{2}$",
)
ui_lang: str | None = Field(
default=None,
description="Language code for the user interface (e.g., 'en-US', 'es-AR').",
pattern=r"^[a-z]{2}-[A-Z]{2}$",
)
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
pattern=r"^[A-Z]{2}$",
)
safesearch: SafeSearch | None = Field(
default=None,
description="Filter out explicit content. Options: off/moderate/strict",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return.",
ge=1,
le=50,
)
offset: int | None = Field(
default=None,
description="Skip the first N result sets/pages. Max is 9.",
ge=0,
le=9,
)
spellcheck: bool | None = Field(
default=None,
description="Attempt to correct spelling errors in the search query.",
)
freshness: Freshness | None = Field(
default=None,
description="Enforce freshness of results. Options: pd/pw/pm/py, or YYYY-MM-DDtoYYYY-MM-DD",
)
include_fetch_metadata: bool | None = Field(
default=None,
description="Whether to include fetch metadata (e.g., last fetch time) in the results.",
)
operators: bool | None = Field(
default=None,
description="Whether to apply search operators (e.g., site:example.com).",
)
class NewsSearchParams(BaseModel):
"""Parameters for Brave News Search endpoint."""
q: str = Field(
description="Search query to perform",
min_length=1,
max_length=400,
)
search_lang: str | None = Field(
default=None,
description="Language code for the search results (e.g., 'en', 'es').",
pattern=r"^[a-z]{2}$",
)
ui_lang: str | None = Field(
default=None,
description="Language code for the user interface (e.g., 'en-US', 'es-AR').",
pattern=r"^[a-z]{2}-[A-Z]{2}$",
)
country: str | None = Field(
default=None,
description="Country code for geo-targeting (e.g., 'US', 'BR').",
pattern=r"^[A-Z]{2}$",
)
safesearch: Literal["off", "moderate", "strict"] | None = Field(
default=None,
description="Filter out explicit content. Options: off/moderate/strict",
)
count: int | None = Field(
default=None,
description="The maximum number of results to return.",
ge=1,
le=50,
)
offset: int | None = Field(
default=None,
description="Skip the first N result sets/pages. Max is 9.",
ge=0,
le=9,
)
spellcheck: bool | None = Field(
default=None,
description="Attempt to correct spelling errors in the search query.",
)
freshness: Freshness | None = Field(
default=None,
description="Enforce freshness of results. Options: pd/pw/pm/py, or YYYY-MM-DDtoYYYY-MM-DD",
)
extra_snippets: bool | None = Field(
default=None,
description="Include up to 5 text snippets for each page if possible.",
)
goggles: str | list[str] | None = Field(
default=None,
description="Goggles act as a custom re-ranking mechanism. Goggle source or URLs.",
)
include_fetch_metadata: bool | None = Field(
default=None,
description="Whether to include fetch metadata in the results.",
)
operators: bool | None = Field(
default=None,
description="Whether to apply search operators (e.g., site:example.com).",
)
class BaseSearchHeaders(BaseModel):
"""Common headers for Brave Search endpoints."""
x_subscription_token: str = Field(
alias="x-subscription-token",
description="API key for Brave Search",
)
api_version: str | None = Field(
alias="api-version",
default=None,
description="API version to use. Default is latest available.",
pattern=r"^\d{4}-\d{2}-\d{2}$", # YYYY-MM-DD
)
accept: Literal["application/json"] | Literal["*/*"] | None = Field(
default=None,
description="Accept header for the request.",
)
cache_control: Literal["no-cache"] | None = Field(
alias="cache-control",
default=None,
description="Cache control header for the request.",
)
user_agent: str | None = Field(
alias="user-agent",
default=None,
description="User agent for the request.",
)
class LLMContextHeaders(BaseSearchHeaders):
"""Headers for Brave LLM Context endpoint."""
x_loc_lat: float | None = Field(
alias="x-loc-lat",
default=None,
description="Latitude of the user's location.",
ge=-90.0,
le=90.0,
)
x_loc_long: float | None = Field(
alias="x-loc-long",
default=None,
description="Longitude of the user's location.",
ge=-180.0,
le=180.0,
)
x_loc_city: str | None = Field(
alias="x-loc-city",
default=None,
description="City of the user's location.",
)
x_loc_state: str | None = Field(
alias="x-loc-state",
default=None,
description="State of the user's location.",
)
x_loc_state_name: str | None = Field(
alias="x-loc-state-name",
default=None,
description="Name of the state of the user's location.",
)
x_loc_country: str | None = Field(
alias="x-loc-country",
default=None,
description="The ISO 3166-1 alpha-2 country code of the user's location.",
)
class LocalPOIsHeaders(BaseSearchHeaders):
"""Headers for Brave Local POIs endpoint."""
x_loc_lat: float | None = Field(
alias="x-loc-lat",
default=None,
description="Latitude of the user's location.",
ge=-90.0,
le=90.0,
)
x_loc_long: float | None = Field(
alias="x-loc-long",
default=None,
description="Longitude of the user's location.",
ge=-180.0,
le=180.0,
)
class LocalPOIsDescriptionHeaders(BaseSearchHeaders):
"""Headers for Brave Local POI Descriptions endpoint."""
class VideoSearchHeaders(BaseSearchHeaders):
"""Headers for Brave Video Search endpoint."""
class ImageSearchHeaders(BaseSearchHeaders):
"""Headers for Brave Image Search endpoint."""
class NewsSearchHeaders(BaseSearchHeaders):
"""Headers for Brave News Search endpoint."""
class WebSearchHeaders(BaseSearchHeaders):
"""Headers for Brave Web Search endpoint."""
x_loc_lat: float | None = Field(
alias="x-loc-lat",
default=None,
description="Latitude of the user's location.",
ge=-90.0,
le=90.0,
)
x_loc_long: float | None = Field(
alias="x-loc-long",
default=None,
description="Longitude of the user's location.",
ge=-180.0,
le=180.0,
)
x_loc_timezone: str | None = Field(
alias="x-loc-timezone",
default=None,
description="Timezone of the user's location.",
)
x_loc_city: str | None = Field(
alias="x-loc-city",
default=None,
description="City of the user's location.",
)
x_loc_state: str | None = Field(
alias="x-loc-state",
default=None,
description="State of the user's location.",
)
x_loc_state_name: str | None = Field(
alias="x-loc-state-name",
default=None,
description="Name of the state of the user's location.",
)
x_loc_country: str | None = Field(
alias="x-loc-country",
default=None,
description="The ISO 3166-1 alpha-2 country code of the user's location.",
)
x_loc_postal_code: str | None = Field(
alias="x-loc-postal-code",
default=None,
description="The postal code of the user's location.",
)

View File

@@ -30,9 +30,8 @@ class FileWriterTool(BaseTool):
def _run(self, **kwargs: Any) -> str:
try:
# Create the directory if it doesn't exist
if kwargs.get("directory") and not os.path.exists(kwargs["directory"]):
os.makedirs(kwargs["directory"])
if kwargs.get("directory"):
os.makedirs(kwargs["directory"], exist_ok=True)
# Construct the full path
filepath = os.path.join(kwargs.get("directory") or "", kwargs["filename"])
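The `exist_ok=True` change above closes a check-then-create race: two workers can both observe the directory missing, and the slower `os.makedirs` call then raises `FileExistsError`. With `exist_ok=True` the call is idempotent:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "out", "nested")
    # Idempotent: never raises FileExistsError when the directory exists,
    # so concurrent callers cannot race between the existence check and the create.
    os.makedirs(target, exist_ok=True)
    os.makedirs(target, exist_ok=True)
    assert os.path.isdir(target)
```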

View File

@@ -99,8 +99,8 @@ class FileCompressorTool(BaseTool):
def _prepare_output(output_path: str, overwrite: bool) -> bool:
"""Ensures output path is ready for writing."""
output_dir = os.path.dirname(output_path)
if output_dir and not os.path.exists(output_dir):
os.makedirs(output_dir)
if output_dir:
os.makedirs(output_dir, exist_ok=True)
if os.path.exists(output_path) and not overwrite:
return False
return True

View File

@@ -18,7 +18,6 @@ class MergeAgentHandlerToolError(Exception):
"""Base exception for Merge Agent Handler tool errors."""
class MergeAgentHandlerTool(BaseTool):
"""
Wrapper for Merge Agent Handler tools.
@@ -174,7 +173,7 @@ class MergeAgentHandlerTool(BaseTool):
>>> tool = MergeAgentHandlerTool.from_tool_name(
... tool_name="linear__create_issue",
... tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
... registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa"
... registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
... )
"""
# Create an empty args schema model (proper BaseModel subclass)
@@ -210,7 +209,10 @@ class MergeAgentHandlerTool(BaseTool):
if "parameters" in tool_schema:
try:
params = tool_schema["parameters"]
if params.get("type") == "object" and "properties" in params:
if (
params.get("type") == "object"
and "properties" in params
):
# Build field definitions for Pydantic
fields = {}
properties = params["properties"]
@@ -298,7 +300,7 @@ class MergeAgentHandlerTool(BaseTool):
>>> tools = MergeAgentHandlerTool.from_tool_pack(
... tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
... registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
... tool_names=["linear__create_issue", "linear__get_issues"]
... tool_names=["linear__create_issue", "linear__get_issues"],
... )
"""
# Create a temporary instance to fetch the tool list

View File

@@ -110,11 +110,13 @@ class QdrantVectorSearchTool(BaseTool):
self.custom_embedding_fn(query)
if self.custom_embedding_fn
else (
lambda: __import__("openai")
.Client(api_key=os.getenv("OPENAI_API_KEY"))
.embeddings.create(input=[query], model="text-embedding-3-large")
.data[0]
.embedding
lambda: (
__import__("openai")
.Client(api_key=os.getenv("OPENAI_API_KEY"))
.embeddings.create(input=[query], model="text-embedding-3-large")
.data[0]
.embedding
)
)()
)
results = self.client.query_points(

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
import asyncio
from concurrent.futures import ThreadPoolExecutor
import logging
import threading
from typing import TYPE_CHECKING, Any
from crewai.tools.base_tool import BaseTool
@@ -33,6 +34,7 @@ logger = logging.getLogger(__name__)
# Cache for query results
_query_cache: dict[str, list[dict[str, Any]]] = {}
_cache_lock = threading.Lock()
class SnowflakeConfig(BaseModel):
@@ -102,7 +104,7 @@ class SnowflakeSearchTool(BaseTool):
)
_connection_pool: list[SnowflakeConnection] | None = None
_pool_lock: asyncio.Lock | None = None
_pool_lock: threading.Lock | None = None
_thread_pool: ThreadPoolExecutor | None = None
_model_rebuilt: bool = False
package_dependencies: list[str] = Field(
@@ -122,7 +124,7 @@ class SnowflakeSearchTool(BaseTool):
try:
if SNOWFLAKE_AVAILABLE:
self._connection_pool = []
self._pool_lock = asyncio.Lock()
self._pool_lock = threading.Lock()
self._thread_pool = ThreadPoolExecutor(max_workers=self.pool_size)
else:
raise ImportError
@@ -147,7 +149,7 @@ class SnowflakeSearchTool(BaseTool):
)
self._connection_pool = []
self._pool_lock = asyncio.Lock()
self._pool_lock = threading.Lock()
self._thread_pool = ThreadPoolExecutor(max_workers=self.pool_size)
except subprocess.CalledProcessError as e:
raise ImportError("Failed to install Snowflake dependencies") from e
@@ -163,13 +165,12 @@ class SnowflakeSearchTool(BaseTool):
raise RuntimeError("Pool lock not initialized")
if self._connection_pool is None:
raise RuntimeError("Connection pool not initialized")
async with self._pool_lock:
if not self._connection_pool:
conn = await asyncio.get_event_loop().run_in_executor(
self._thread_pool, self._create_connection
)
self._connection_pool.append(conn)
return self._connection_pool.pop()
with self._pool_lock:
if self._connection_pool:
return self._connection_pool.pop()
return await asyncio.get_event_loop().run_in_executor(
self._thread_pool, self._create_connection
)
def _create_connection(self) -> SnowflakeConnection:
"""Create a new Snowflake connection."""
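Why the swap from `asyncio.Lock` to `threading.Lock` matters here: an `asyncio.Lock` created in `__init__` binds to whichever event loop first awaits it, while the pool list is also touched from executor threads. A `threading.Lock` guards the list regardless of loop, provided the critical section never awaits while holding it. A reduced sketch of the revised acquire path (class and method names are illustrative):

```python
import asyncio
import threading


class Pool:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._connections: list[str] = []

    def _create_connection(self) -> str:
        return "conn"  # stand-in for a real Snowflake connection

    async def get_connection(self) -> str:
        # Hold the thread lock only for the quick list check/pop...
        with self._lock:
            if self._connections:
                return self._connections.pop()
        # ...and create new connections outside it, off the event loop.
        return await asyncio.get_event_loop().run_in_executor(
            None, self._create_connection
        )

    def release(self, conn: str) -> None:
        with self._lock:
            self._connections.append(conn)


conn = asyncio.run(Pool().get_connection())
```

Creating the connection outside the lock also means a slow connect no longer serializes every other caller behind it.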
@@ -204,9 +205,10 @@ class SnowflakeSearchTool(BaseTool):
"""Execute a query with retries and return results."""
if self.enable_caching:
cache_key = self._get_cache_key(query, timeout)
if cache_key in _query_cache:
logger.info("Returning cached result")
return _query_cache[cache_key]
with _cache_lock:
if cache_key in _query_cache:
logger.info("Returning cached result")
return _query_cache[cache_key]
for attempt in range(self.max_retries):
try:
@@ -225,7 +227,8 @@ class SnowflakeSearchTool(BaseTool):
]
if self.enable_caching:
_query_cache[self._get_cache_key(query, timeout)] = results
with _cache_lock:
_query_cache[self._get_cache_key(query, timeout)] = results
return results
finally:
@@ -234,7 +237,7 @@ class SnowflakeSearchTool(BaseTool):
self._pool_lock is not None
and self._connection_pool is not None
):
async with self._pool_lock:
with self._pool_lock:
self._connection_pool.append(conn)
except (DatabaseError, OperationalError) as e: # noqa: PERF203
if attempt == self.max_retries - 1:

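The Snowflake hunks above shrink the critical section: connections are popped and pushed under a plain `threading.Lock`, and the slow connect runs in an executor only after the lock is released (the old code held an `asyncio.Lock` across the connect). A minimal sketch of that check-out pattern, with a hypothetical `ConnectionPool` standing in for the tool:

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor


class ConnectionPool:
    """Pop/push under a short-lived threading.Lock; create connections outside it."""

    def __init__(self, factory):
        self._factory = factory              # blocking connection constructor
        self._pool = []
        self._lock = threading.Lock()        # held only for list operations
        self._executor = ThreadPoolExecutor(max_workers=2)

    async def acquire(self):
        with self._lock:                     # fast path: reuse a pooled connection
            if self._pool:
                return self._pool.pop()
        # Slow path: the lock is already released, so a slow connect cannot
        # block other acquirers or deadlock against the event loop.
        return await asyncio.get_running_loop().run_in_executor(
            self._executor, self._factory
        )

    def release(self, conn):
        with self._lock:
            self._pool.append(conn)


async def demo():
    pool = ConnectionPool(factory=object)    # stand-in for a real DB connect
    first = await pool.acquire()             # pool empty: created in executor
    pool.release(first)
    second = await pool.acquire()            # reused from the pool
    return first is second


print(asyncio.run(demo()))  # True
```

The same reasoning motivates the cache hunk: `_cache_lock` guards only the dictionary read and write, never the query itself.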
View File

@@ -1,4 +1,5 @@
import asyncio
import contextvars
import json
import os
import re
@@ -137,7 +138,9 @@ class StagehandTool(BaseTool):
- 'observe': For finding elements in a specific area
"""
args_schema: type[BaseModel] = StagehandToolSchema
package_dependencies: list[str] = Field(default_factory=lambda: ["stagehand<=0.5.9"])
package_dependencies: list[str] = Field(
default_factory=lambda: ["stagehand<=0.5.9"]
)
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
EnvVar(
@@ -620,9 +623,12 @@ class StagehandTool(BaseTool):
# We're in an existing event loop, use it
import concurrent.futures
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, self._async_run(instruction, url, command_type)
ctx.run,
asyncio.run,
self._async_run(instruction, url, command_type),
)
result = future.result()
else:
@@ -706,11 +712,12 @@ class StagehandTool(BaseTool):
if loop.is_running():
import concurrent.futures
ctx = contextvars.copy_context()
with (
concurrent.futures.ThreadPoolExecutor() as executor
):
future = executor.submit(
asyncio.run, self._async_close()
ctx.run, asyncio.run, self._async_close()
)
future.result()
else:

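The Stagehand hunks capture the caller's context with `contextvars.copy_context()` and submit `ctx.run(asyncio.run, coro)` to the executor, so request-scoped `ContextVar`s survive the hop into the worker thread. A minimal reproduction of the pattern (the `request_id` variable is illustrative, not from the tool):

```python
import asyncio
import concurrent.futures
import contextvars

request_id = contextvars.ContextVar("request_id", default=None)


async def handler():
    # Runs in a fresh event loop inside the worker thread.
    return request_id.get()


def run_in_thread_with_context():
    request_id.set("req-42")
    ctx = contextvars.copy_context()
    with concurrent.futures.ThreadPoolExecutor() as executor:
        # Without ctx.run the worker thread starts from an empty context
        # and handler() would see the default (None) instead of "req-42".
        future = executor.submit(ctx.run, asyncio.run, handler())
        return future.result()


print(run_in_thread_with_context())  # req-42
```

`asyncio.run` creates its task while executing inside `ctx.run`, so the task inherits the copied context and the value set by the caller is visible in the coroutine.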
View File

@@ -1,80 +1,777 @@
import json
from unittest.mock import patch
import os
from unittest.mock import MagicMock, patch
import pytest
import requests as requests_lib
from crewai_tools.tools.brave_search_tool.brave_search_tool import BraveSearchTool
from crewai_tools.tools.brave_search_tool.base import BraveSearchToolBase
from crewai_tools.tools.brave_search_tool.brave_web_tool import BraveWebSearchTool
from crewai_tools.tools.brave_search_tool.brave_image_tool import BraveImageSearchTool
from crewai_tools.tools.brave_search_tool.brave_news_tool import BraveNewsSearchTool
from crewai_tools.tools.brave_search_tool.brave_video_tool import BraveVideoSearchTool
from crewai_tools.tools.brave_search_tool.brave_llm_context_tool import (
BraveLLMContextTool,
)
from crewai_tools.tools.brave_search_tool.brave_local_pois_tool import (
BraveLocalPOIsTool,
BraveLocalPOIsDescriptionTool,
)
from crewai_tools.tools.brave_search_tool.schemas import (
WebSearchParams,
WebSearchHeaders,
ImageSearchParams,
ImageSearchHeaders,
NewsSearchParams,
NewsSearchHeaders,
VideoSearchParams,
VideoSearchHeaders,
LLMContextParams,
LLMContextHeaders,
LocalPOIsParams,
LocalPOIsHeaders,
LocalPOIsDescriptionParams,
LocalPOIsDescriptionHeaders,
)
def _mock_response(
status_code: int = 200,
json_data: dict | None = None,
headers: dict | None = None,
text: str = "",
) -> MagicMock:
"""Build a ``requests.Response``-like mock with the attributes used by ``_make_request``."""
resp = MagicMock(spec=requests_lib.Response)
resp.status_code = status_code
resp.ok = 200 <= status_code < 400
resp.url = "https://api.search.brave.com/res/v1/web/search?q=test"
resp.text = text or (str(json_data) if json_data else "")
resp.headers = headers or {}
resp.json.return_value = json_data if json_data is not None else {}
return resp
# Fixtures
@pytest.fixture(autouse=True)
def _brave_env_and_rate_limit():
"""Set BRAVE_API_KEY for every test. Rate limiting is per-instance (each tool starts with a fresh clock)."""
with patch.dict(os.environ, {"BRAVE_API_KEY": "test-api-key"}):
yield
@pytest.fixture
def brave_tool():
return BraveSearchTool(n_results=2)
def web_tool():
return BraveWebSearchTool()
def test_brave_tool_initialization():
tool = BraveSearchTool()
assert tool.n_results == 10
@pytest.fixture
def image_tool():
return BraveImageSearchTool()
@pytest.fixture
def news_tool():
return BraveNewsSearchTool()
@pytest.fixture
def video_tool():
return BraveVideoSearchTool()
# Initialization
ALL_TOOL_CLASSES = [
BraveWebSearchTool,
BraveImageSearchTool,
BraveNewsSearchTool,
BraveVideoSearchTool,
BraveLLMContextTool,
BraveLocalPOIsTool,
BraveLocalPOIsDescriptionTool,
]
@pytest.mark.parametrize("tool_cls", ALL_TOOL_CLASSES)
def test_instantiation_with_env_var(tool_cls):
"""Each tool can be created when BRAVE_API_KEY is in the environment."""
tool = tool_cls()
assert tool.api_key == "test-api-key"
@pytest.mark.parametrize("tool_cls", ALL_TOOL_CLASSES)
def test_instantiation_with_explicit_key(tool_cls):
"""An explicit api_key takes precedence over the environment."""
tool = tool_cls(api_key="explicit-key")
assert tool.api_key == "explicit-key"
def test_missing_api_key_raises():
with patch.dict(os.environ, {}, clear=True):
with pytest.raises(ValueError, match="BRAVE_API_KEY"):
BraveWebSearchTool()
def test_default_attributes():
tool = BraveWebSearchTool()
assert tool.save_file is False
assert tool.n_results == 10
assert tool._timeout == 30
assert tool._requests_per_second == 1.0
assert tool.raw is False
@patch("requests.get")
def test_brave_tool_search(mock_get, brave_tool):
mock_response = {
def test_custom_constructor_args():
tool = BraveWebSearchTool(
save_file=True,
timeout=60,
n_results=5,
requests_per_second=0.5,
raw=True,
)
assert tool.save_file is True
assert tool._timeout == 60
assert tool.n_results == 5
assert tool._requests_per_second == 0.5
assert tool.raw is True
# Headers
def test_default_headers():
tool = BraveWebSearchTool()
assert tool.headers["x-subscription-token"] == "test-api-key"
assert tool.headers["accept"] == "application/json"
def test_set_headers_merges_and_normalizes():
tool = BraveWebSearchTool()
tool.set_headers({"Cache-Control": "no-cache"})
assert tool.headers["cache-control"] == "no-cache"
assert tool.headers["x-subscription-token"] == "test-api-key"
def test_set_headers_returns_self_for_chaining():
tool = BraveWebSearchTool()
assert tool.set_headers({"Cache-Control": "no-cache"}) is tool
def test_invalid_header_value_raises():
tool = BraveImageSearchTool()
with pytest.raises(ValueError, match="Invalid headers"):
tool.set_headers({"Accept": "text/xml"})
# Endpoint & Schema Wiring
@pytest.mark.parametrize(
"tool_cls, expected_url, expected_params, expected_headers",
[
(
BraveWebSearchTool,
"https://api.search.brave.com/res/v1/web/search",
WebSearchParams,
WebSearchHeaders,
),
(
BraveImageSearchTool,
"https://api.search.brave.com/res/v1/images/search",
ImageSearchParams,
ImageSearchHeaders,
),
(
BraveNewsSearchTool,
"https://api.search.brave.com/res/v1/news/search",
NewsSearchParams,
NewsSearchHeaders,
),
(
BraveVideoSearchTool,
"https://api.search.brave.com/res/v1/videos/search",
VideoSearchParams,
VideoSearchHeaders,
),
(
BraveLLMContextTool,
"https://api.search.brave.com/res/v1/llm/context",
LLMContextParams,
LLMContextHeaders,
),
(
BraveLocalPOIsTool,
"https://api.search.brave.com/res/v1/local/pois",
LocalPOIsParams,
LocalPOIsHeaders,
),
(
BraveLocalPOIsDescriptionTool,
"https://api.search.brave.com/res/v1/local/descriptions",
LocalPOIsDescriptionParams,
LocalPOIsDescriptionHeaders,
),
],
)
def test_tool_wiring(tool_cls, expected_url, expected_params, expected_headers):
tool = tool_cls()
assert tool.search_url == expected_url
assert tool.args_schema is expected_params
assert tool.header_schema is expected_headers
# Payload Refinement (e.g., `query` -> `q`, `count` fallback, param pass-through)
def test_web_refine_request_payload_passes_all_params(web_tool):
params = web_tool._common_payload_refinement(
{
"query": "test",
"country": "US",
"search_lang": "en",
"count": 5,
"offset": 2,
"safesearch": "moderate",
"freshness": "pw",
}
)
refined_params = web_tool._refine_request_payload(params)
assert refined_params["q"] == "test"
assert "query" not in refined_params
assert refined_params["count"] == 5
assert refined_params["country"] == "US"
assert refined_params["search_lang"] == "en"
assert refined_params["offset"] == 2
assert refined_params["safesearch"] == "moderate"
assert refined_params["freshness"] == "pw"
def test_image_refine_request_payload_passes_all_params(image_tool):
params = image_tool._common_payload_refinement(
{
"query": "cat photos",
"country": "US",
"search_lang": "en",
"safesearch": "strict",
"count": 50,
"spellcheck": True,
}
)
refined_params = image_tool._refine_request_payload(params)
assert refined_params["q"] == "cat photos"
assert "query" not in refined_params
assert refined_params["country"] == "US"
assert refined_params["safesearch"] == "strict"
assert refined_params["count"] == 50
assert refined_params["spellcheck"] is True
def test_news_refine_request_payload_passes_all_params(news_tool):
params = news_tool._common_payload_refinement(
{
"query": "breaking news",
"country": "US",
"count": 10,
"offset": 1,
"freshness": "pd",
"extra_snippets": True,
}
)
refined_params = news_tool._refine_request_payload(params)
assert refined_params["q"] == "breaking news"
assert "query" not in refined_params
assert refined_params["country"] == "US"
assert refined_params["offset"] == 1
assert refined_params["freshness"] == "pd"
assert refined_params["extra_snippets"] is True
def test_video_refine_request_payload_passes_all_params(video_tool):
params = video_tool._common_payload_refinement(
{
"query": "tutorial",
"country": "US",
"count": 25,
"offset": 0,
"safesearch": "strict",
"freshness": "pm",
}
)
refined_params = video_tool._refine_request_payload(params)
assert refined_params["q"] == "tutorial"
assert "query" not in refined_params
assert refined_params["country"] == "US"
assert refined_params["offset"] == 0
assert refined_params["freshness"] == "pm"
def test_legacy_constructor_params_flow_into_query_params():
"""The legacy n_results and country constructor params are applied as defaults
when count/country are not explicitly provided at call time."""
tool = BraveWebSearchTool(n_results=3, country="BR")
params = tool._common_payload_refinement({"query": "test"})
assert params["count"] == 3
assert params["country"] == "BR"
def test_legacy_constructor_params_do_not_override_explicit_query_params():
"""Explicit query-time count/country take precedence over constructor defaults."""
tool = BraveWebSearchTool(n_results=3, country="BR")
params = tool._common_payload_refinement(
{"query": "test", "count": 10, "country": "US"}
)
assert params["count"] == 10
assert params["country"] == "US"
def test_refine_request_payload_passes_multiple_goggles_as_multiple_params(web_tool):
result = web_tool._refine_request_payload(
{
"query": "test",
"goggles": ["goggle1", "goggle2"],
}
)
assert result["goggles"] == ["goggle1", "goggle2"]
# Null-like / empty value stripping
#
# crewAI's ensure_all_properties_required (pydantic_schema_utils.py) marks
# every schema property as required for OpenAI strict-mode compatibility.
# Because optional Brave API parameters look required to the LLM, it fills
# them with placeholder junk — None, "", "null", or []. The test below
# verifies that _common_payload_refinement strips these from optional fields.
def test_common_refinement_strips_null_like_values(web_tool):
"""_common_payload_refinement drops optional keys with None / '' / 'null' / []."""
params = web_tool._common_payload_refinement(
{
"query": "test",
"country": "US",
"search_lang": "",
"freshness": "null",
"count": 5,
"goggles": [],
}
)
assert params["q"] == "test"
assert params["country"] == "US"
assert params["count"] == 5
assert "search_lang" not in params
assert "freshness" not in params
assert "goggles" not in params
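The comment above explains why the placeholders appear; the stripping step the test exercises can be sketched as follows (assumed behavior for illustration, not the tool's actual `_common_payload_refinement`):

```python
_NULL_LIKE = (None, "", "null", [])


def strip_null_like(params: dict, required: tuple = ("query",)) -> dict:
    """Drop optional keys whose value is a None / '' / 'null' / [] placeholder."""
    return {
        key: value
        for key, value in params.items()
        if key in required or value not in _NULL_LIKE
    }


cleaned = strip_null_like(
    {"query": "test", "search_lang": "", "freshness": "null", "count": 5, "goggles": []}
)
print(cleaned)  # {'query': 'test', 'count': 5}
```

Note that falsy-but-meaningful values such as `0` are not in `_NULL_LIKE` and pass through, which matters for parameters like `offset=0` in the tests above.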
# End-to-End _run() with Mocked HTTP Response
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_web_search_end_to_end(mock_get, web_tool):
web_tool.raw = True
data = {"web": {"results": [{"title": "R", "url": "http://r.co"}]}}
mock_get.return_value = _mock_response(json_data=data)
result = web_tool._run(query="test")
mock_get.assert_called_once()
call_args = mock_get.call_args.kwargs
assert call_args["params"]["q"] == "test"
assert call_args["headers"]["x-subscription-token"] == "test-api-key"
assert result == data
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_image_search_end_to_end(mock_get, image_tool):
image_tool.raw = True
data = {"results": [{"url": "http://img.co/a.jpg"}]}
mock_get.return_value = _mock_response(json_data=data)
assert image_tool._run(query="cats") == data
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_news_search_end_to_end(mock_get, news_tool):
news_tool.raw = True
data = {"results": [{"title": "News", "url": "http://n.co"}]}
mock_get.return_value = _mock_response(json_data=data)
assert news_tool._run(query="headlines") == data
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_video_search_end_to_end(mock_get, video_tool):
video_tool.raw = True
data = {"results": [{"title": "Vid", "url": "http://v.co"}]}
mock_get.return_value = _mock_response(json_data=data)
assert video_tool._run(query="python tutorial") == data
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_raw_false_calls_refine_response(mock_get, web_tool):
"""With raw=False (the default), _refine_response transforms the API response."""
api_response = {
"web": {
"results": [
{
"title": "Test Title",
"url": "http://test.com",
"description": "Test Description",
"title": "CrewAI",
"url": "https://crewai.com",
"description": "AI agent framework",
}
]
}
}
mock_get.return_value.json.return_value = mock_response
mock_get.return_value = _mock_response(json_data=api_response)
result = brave_tool.run(query="test")
data = json.loads(result)
assert isinstance(data, list)
assert len(data) >= 1
assert data[0]["title"] == "Test Title"
assert data[0]["url"] == "http://test.com"
assert web_tool.raw is False
result = web_tool._run(query="crewai")
# The web tool's _refine_response extracts and reshapes results.
# The key assertion: we should NOT get back the raw API envelope.
assert result != api_response
@patch("requests.get")
def test_brave_tool(mock_get):
mock_response = {
"web": {
"results": [
{
"title": "Brave Browser",
"url": "https://brave.com",
"description": "Brave Browser description",
}
]
}
}
mock_get.return_value.json.return_value = mock_response
tool = BraveSearchTool(n_results=2)
result = tool.run(query="Brave Browser")
assert result is not None
# Parse JSON so we can examine the structure
data = json.loads(result)
assert isinstance(data, list)
assert len(data) >= 1
# First item should have expected fields: title, url, and description
first = data[0]
assert "title" in first
assert first["title"] == "Brave Browser"
assert "url" in first
assert first["url"] == "https://brave.com"
assert "description" in first
assert first["description"] == "Brave Browser description"
# Backward Compatibility & Legacy Parameter Support
if __name__ == "__main__":
test_brave_tool()
test_brave_tool_initialization()
# test_brave_tool_search(brave_tool)
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_positional_query_argument(mock_get, web_tool):
"""tool.run('my query') works as a positional argument."""
mock_get.return_value = _mock_response(json_data={})
web_tool._run("positional test")
assert mock_get.call_args.kwargs["params"]["q"] == "positional test"
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_search_query_backward_compat(mock_get, web_tool):
"""The legacy 'search_query' param is mapped to 'query'."""
mock_get.return_value = _mock_response(json_data={})
web_tool._run(search_query="legacy test")
assert mock_get.call_args.kwargs["params"]["q"] == "legacy test"
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base._save_results_to_file")
def test_save_file_called_when_enabled(mock_save, mock_get):
mock_get.return_value = _mock_response(json_data={"results": []})
tool = BraveWebSearchTool(save_file=True)
tool._run(query="test")
mock_save.assert_called_once()
# Error Handling
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_connection_error_raises_runtime_error(mock_get, web_tool):
mock_get.side_effect = requests_lib.exceptions.ConnectionError("refused")
with pytest.raises(RuntimeError, match="Brave Search API connection failed"):
web_tool._run(query="test")
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_timeout_raises_runtime_error(mock_get, web_tool):
mock_get.side_effect = requests_lib.exceptions.Timeout("timed out")
with pytest.raises(RuntimeError, match="timed out"):
web_tool._run(query="test")
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_invalid_params_raises_value_error(mock_get, web_tool):
"""count=999 exceeds WebSearchParams.count le=20."""
with pytest.raises(ValueError, match="Invalid parameters"):
web_tool._run(query="test", count=999)
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_4xx_error_raises_with_api_detail(mock_get, web_tool):
"""A 422 with a structured error body includes code and detail in the message."""
mock_get.return_value = _mock_response(
status_code=422,
json_data={
"error": {
"id": "abc-123",
"status": 422,
"code": "OPTION_NOT_IN_PLAN",
"detail": "extra_snippets requires a Pro plan",
}
},
)
with pytest.raises(RuntimeError, match="OPTION_NOT_IN_PLAN") as exc_info:
web_tool._run(query="test")
assert "extra_snippets requires a Pro plan" in str(exc_info.value)
assert "HTTP 422" in str(exc_info.value)
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_auth_error_raises_immediately(mock_get, web_tool):
"""A 401 with SUBSCRIPTION_TOKEN_INVALID is not retried."""
mock_get.return_value = _mock_response(
status_code=401,
json_data={
"error": {
"id": "xyz",
"status": 401,
"code": "SUBSCRIPTION_TOKEN_INVALID",
"detail": "The subscription token is invalid",
}
},
)
with pytest.raises(RuntimeError, match="SUBSCRIPTION_TOKEN_INVALID"):
web_tool._run(query="test")
# Should NOT have retried — only one call.
assert mock_get.call_count == 1
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_quota_limited_429_raises_immediately(mock_get, web_tool):
"""A 429 with QUOTA_LIMITED is NOT retried — quota exhaustion is terminal."""
mock_get.return_value = _mock_response(
status_code=429,
json_data={
"error": {
"id": "ql-1",
"status": 429,
"code": "QUOTA_LIMITED",
"detail": "Monthly quota exceeded",
}
},
)
with pytest.raises(RuntimeError, match="QUOTA_LIMITED") as exc_info:
web_tool._run(query="test")
assert "Monthly quota exceeded" in str(exc_info.value)
# Terminal — only one HTTP call, no retries.
assert mock_get.call_count == 1
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_usage_limit_exceeded_429_raises_immediately(mock_get, web_tool):
"""USAGE_LIMIT_EXCEEDED is also non-retryable, just like QUOTA_LIMITED."""
mock_get.return_value = _mock_response(
status_code=429,
json_data={
"error": {
"id": "ule-1",
"status": 429,
"code": "USAGE_LIMIT_EXCEEDED",
}
},
text="usage limit exceeded",
)
with pytest.raises(RuntimeError, match="USAGE_LIMIT_EXCEEDED"):
web_tool._run(query="test")
assert mock_get.call_count == 1
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_error_body_is_fully_included_in_message(mock_get, web_tool):
"""The full JSON error body is included in the RuntimeError message."""
mock_get.return_value = _mock_response(
status_code=429,
json_data={
"error": {
"id": "x",
"status": 429,
"code": "QUOTA_LIMITED",
"detail": "Exceeded",
"meta": {"plan": "free", "limit": 1000},
}
},
)
with pytest.raises(RuntimeError) as exc_info:
web_tool._run(query="test")
msg = str(exc_info.value)
assert "HTTP 429" in msg
assert "QUOTA_LIMITED" in msg
assert "free" in msg
assert "1000" in msg
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_error_without_json_body_falls_back_to_text(mock_get, web_tool):
"""When the error response isn't valid JSON, resp.text is used as the detail."""
resp = _mock_response(status_code=500, text="Internal Server Error")
resp.json.side_effect = ValueError("No JSON")
mock_get.return_value = resp
with pytest.raises(RuntimeError, match="Internal Server Error"):
web_tool._run(query="test")
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
def test_invalid_json_on_success_raises_runtime_error(mock_get, web_tool):
"""A 200 OK with a non-JSON body raises RuntimeError."""
resp = _mock_response(status_code=200)
resp.json.side_effect = ValueError("Expecting value")
mock_get.return_value = resp
with pytest.raises(RuntimeError, match="invalid JSON"):
web_tool._run(query="test")
# Rate Limiting
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_rate_limit_sleeps_when_too_fast(mock_time, mock_get, web_tool):
"""Back-to-back calls within the interval trigger a sleep."""
mock_get.return_value = _mock_response(json_data={})
# Simulate: last request was at t=100, "now" is t=100.2 (only 0.2s elapsed).
# With default 1 req/s the min interval is 1.0s, so it should sleep ~0.8s.
mock_time.time.return_value = 100.2
web_tool._last_request_time = 100.0
web_tool._run(query="test")
mock_time.sleep.assert_called_once()
sleep_duration = mock_time.sleep.call_args[0][0]
assert 0.7 < sleep_duration < 0.9 # approximately 0.8s
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_rate_limit_skips_sleep_when_enough_time_passed(mock_time, mock_get, web_tool):
"""No sleep when the elapsed time already exceeds the interval."""
mock_get.return_value = _mock_response(json_data={})
# Last request was at t=100, "now" is t=102 (2s elapsed > 1s interval).
mock_time.time.return_value = 102.0
web_tool._last_request_time = 100.0
web_tool._run(query="test")
mock_time.sleep.assert_not_called()
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_rate_limit_disabled_when_zero(mock_time, mock_get, web_tool):
"""requests_per_second=0 disables rate limiting entirely."""
mock_get.return_value = _mock_response(json_data={})
web_tool._last_request_time = 100.0
mock_time.time.return_value = 100.0 # same instant
web_tool._run(query="test")
mock_time.sleep.assert_not_called()
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_rate_limit_per_instance_independent(mock_time, mock_get, web_tool, image_tool):
"""Each instance has its own rate-limit clock; a request on one does not delay the other."""
mock_get.return_value = _mock_response(json_data={})
# Web tool fires at t=100 (its clock goes 0 -> 100).
mock_time.time.return_value = 100.0
web_tool._run(query="test")
# Image tool fires at t=100.3. Its clock is still 0 (separate instance), so
# next_allowed = 1.0 and 100.3 > 1.0: no sleep. The total process-wide rate
# can therefore reach the sum of the per-instance limits.
mock_time.time.return_value = 100.3
image_tool._run(query="cats")
mock_time.sleep.assert_not_called()
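The rate-limit tests above pin down the behavior: each instance tracks its own last-request time, sleeps out the remainder of the `1 / requests_per_second` interval, and is disabled entirely at 0. A per-instance limiter consistent with those tests (a sketch, not the tool's code):

```python
import time


class PerInstanceRateLimiter:
    """Each instance keeps its own clock, mirroring the per-tool behavior above."""

    def __init__(self, requests_per_second: float = 1.0):
        self._rps = requests_per_second
        self._last_request_time = 0.0

    def wait(self) -> float:
        """Sleep out the remainder of the minimum interval; return the sleep time."""
        if self._rps <= 0:                   # 0 disables rate limiting entirely
            return 0.0
        min_interval = 1.0 / self._rps
        elapsed = time.time() - self._last_request_time
        slept = 0.0
        if elapsed < min_interval:
            slept = min_interval - elapsed
            time.sleep(slept)
        self._last_request_time = time.time()
        return slept
```

With `requests_per_second=1.0` and 0.2 s elapsed, `wait()` sleeps roughly 0.8 s, matching `test_rate_limit_sleeps_when_too_fast`; two instances never delay each other because the clock is instance state.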
# Retry Behavior
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_429_rate_limited_retries_then_succeeds(mock_time, mock_get, web_tool):
"""A transient RATE_LIMITED 429 is retried; success on the second attempt."""
mock_time.time.return_value = 200.0
resp_429 = _mock_response(
status_code=429,
json_data={"error": {"id": "r", "status": 429, "code": "RATE_LIMITED"}},
headers={"Retry-After": "2"},
)
resp_200 = _mock_response(status_code=200, json_data={"web": {"results": []}})
mock_get.side_effect = [resp_429, resp_200]
web_tool.raw = True
result = web_tool._run(query="test")
assert result == {"web": {"results": []}}
assert mock_get.call_count == 2
# Slept for the Retry-After value.
retry_sleeps = [c for c in mock_time.sleep.call_args_list if c[0][0] == 2.0]
assert len(retry_sleeps) == 1
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_5xx_is_retried(mock_time, mock_get, web_tool):
"""A 502 server error is retried; success on the second attempt."""
mock_time.time.return_value = 200.0
resp_502 = _mock_response(status_code=502, text="Bad Gateway")
resp_502.json.side_effect = ValueError("no json")
resp_200 = _mock_response(status_code=200, json_data={"web": {"results": []}})
mock_get.side_effect = [resp_502, resp_200]
web_tool.raw = True
result = web_tool._run(query="test")
assert result == {"web": {"results": []}}
assert mock_get.call_count == 2
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_429_rate_limited_exhausts_retries(mock_time, mock_get, web_tool):
"""Persistent RATE_LIMITED 429s exhaust retries and raise RuntimeError."""
mock_time.time.return_value = 200.0
resp_429 = _mock_response(
status_code=429,
json_data={"error": {"id": "r", "status": 429, "code": "RATE_LIMITED"}},
)
mock_get.return_value = resp_429
with pytest.raises(RuntimeError, match="RATE_LIMITED"):
web_tool._run(query="test")
# 3 attempts (default _max_retries).
assert mock_get.call_count == 3
@patch("crewai_tools.tools.brave_search_tool.base.requests.get")
@patch("crewai_tools.tools.brave_search_tool.base.time")
def test_retry_uses_exponential_backoff_when_no_retry_after(
mock_time, mock_get, web_tool
):
"""Without Retry-After, backoff is 2^attempt (1s, 2s, ...)."""
mock_time.time.return_value = 200.0
resp_503 = _mock_response(status_code=503, text="Service Unavailable")
resp_503.json.side_effect = ValueError("no json")
resp_200 = _mock_response(status_code=200, json_data={"ok": True})
mock_get.side_effect = [resp_503, resp_503, resp_200]
web_tool.raw = True
web_tool._run(query="test")
# Two retries: attempt 0 → sleep(1.0), attempt 1 → sleep(2.0).
retry_sleeps = [c[0][0] for c in mock_time.sleep.call_args_list]
assert 1.0 in retry_sleeps
assert 2.0 in retry_sleeps

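Taken together, the retry tests encode a policy: honor `Retry-After` on a transient 429, fall back to `2**attempt` seconds otherwise, stop after three attempts, and never retry terminal codes such as `QUOTA_LIMITED`. A sketch of that policy (the `send` callable and its tuple shape are hypothetical; the status/code handling mirrors the tests):

```python
import time

# Error codes the tests treat as terminal (never retried).
TERMINAL_CODES = {"QUOTA_LIMITED", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_TOKEN_INVALID"}


def request_with_retries(send, max_retries: int = 3, sleep=time.sleep):
    """send() -> (status, error_code, retry_after_or_None, body)."""
    last = None
    for attempt in range(max_retries):
        status, code, retry_after, body = send()
        if status < 400:
            return body
        if code in TERMINAL_CODES or (400 <= status < 500 and status != 429):
            raise RuntimeError(f"HTTP {status}: {code}")      # terminal: no retry
        last = (status, code)
        if attempt < max_retries - 1:
            # Prefer the server's Retry-After; otherwise 2**attempt seconds.
            sleep(retry_after if retry_after is not None else float(2**attempt))
    raise RuntimeError(f"HTTP {last[0]}: {last[1]}")          # retries exhausted


responses = iter([(429, "RATE_LIMITED", 2.0, None), (200, None, None, {"ok": True})])
sleeps = []
result = request_with_retries(lambda: next(responses), sleep=sleeps.append)
print(result, sleeps)  # {'ok': True} [2.0]
```

This reproduces `test_429_rate_limited_retries_then_succeeds` (one 2.0 s sleep from `Retry-After`, success on the second attempt) and the single-call behavior asserted for `QUOTA_LIMITED`.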
File diff suppressed because it is too large

View File

@@ -53,7 +53,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.10.1",
"crewai-tools==1.10.2a1",
]
embeddings = [
"tiktoken~=0.8.0"

View File

@@ -1,3 +1,4 @@
import contextvars
import threading
from typing import Any
import urllib.request
@@ -40,7 +41,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.10.1"
__version__ = "1.10.2a1"
_telemetry_submitted = False
@@ -66,7 +67,8 @@ def _track_install() -> None:
def _track_install_async() -> None:
"""Track installation in background thread to avoid blocking imports."""
if not Telemetry._is_telemetry_disabled():
thread = threading.Thread(target=_track_install, daemon=True)
ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(_track_install,), daemon=True)
thread.start()

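The one-line change above exists because `threading.Thread` does not inherit the parent's `contextvars.Context`: the target runs in a fresh, empty context and sees only defaults. Wrapping the target in a copied context restores the caller's values. A minimal reproduction (the `trace_id` variable is illustrative):

```python
import contextvars
import threading

trace_id = contextvars.ContextVar("trace_id", default="unset")
seen = {}


def work():
    seen["value"] = trace_id.get()


trace_id.set("abc-123")

# Broken: a new thread starts with an empty context and sees the default.
thread = threading.Thread(target=work, daemon=True)
thread.start()
thread.join()
broken = seen["value"]                      # "unset"

# Fixed: run the target inside a copy of the caller's context.
ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(work,), daemon=True)
thread.start()
thread.join()
fixed = seen["value"]                       # "abc-123"

print(broken, fixed)  # unset abc-123
```

This is the failure mode described in the `_track_install_async` fix: OpenTelemetry spans, Langfuse trace IDs, and other request-scoped vars are silently dropped without `ctx.run`.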
View File

@@ -5,6 +5,7 @@ from __future__ import annotations
import asyncio
from collections.abc import MutableMapping
import concurrent.futures
import contextvars
from functools import lru_cache
import ssl
import time
@@ -147,8 +148,9 @@ def fetch_agent_card(
has_running_loop = False
if has_running_loop:
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, coro).result()
return pool.submit(ctx.run, asyncio.run, coro).result()
return asyncio.run(coro)
@@ -215,8 +217,9 @@ def _fetch_agent_card_cached(
has_running_loop = False
if has_running_loop:
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, coro).result()
return pool.submit(ctx.run, asyncio.run, coro).result()
return asyncio.run(coro)

View File

@@ -7,6 +7,7 @@ import base64
from collections.abc import AsyncIterator, Callable, MutableMapping
import concurrent.futures
from contextlib import asynccontextmanager
import contextvars
import logging
from typing import TYPE_CHECKING, Any, Final, Literal
import uuid
@@ -229,8 +230,9 @@ def execute_a2a_delegation(
has_running_loop = False
if has_running_loop:
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, coro).result()
return pool.submit(ctx.run, asyncio.run, coro).result()
return asyncio.run(coro)

View File

@@ -8,6 +8,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine, Mapping
from concurrent.futures import ThreadPoolExecutor, as_completed
import contextvars
from functools import wraps
import json
from types import MethodType
@@ -278,7 +279,9 @@ def _fetch_agent_cards_concurrently(
max_workers = min(len(a2a_agents), 10)
with ThreadPoolExecutor(max_workers=max_workers) as executor:
futures = {
executor.submit(_fetch_card_from_config, config): config
executor.submit(
contextvars.copy_context().run, _fetch_card_from_config, config
): config
for config in a2a_agents
}
for future in as_completed(futures):

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine, Sequence
import contextvars
import shutil
import subprocess
import time
@@ -513,9 +514,13 @@ class Agent(BaseAgent):
"""
import concurrent.futures
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
self._execute_without_timeout, task_prompt=task_prompt, task=task
ctx.run,
self._execute_without_timeout,
task_prompt=task_prompt,
task=task,
)
try:

View File

@@ -38,7 +38,7 @@ from crewai.utilities.string_utils import interpolate_only
_SLUG_RE: Final[re.Pattern[str]] = re.compile(
r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#\w+)?$"
r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#[\w-]+)?$"
)

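The widened `_SLUG_RE` only changes the fragment after `#`: `\w+` rejected hyphens, `[\w-]+` accepts them. A quick check with a hypothetical slug:

```python
import re

# Old vs. updated pattern: only the fragment after '#' changed.
OLD_SLUG_RE = re.compile(r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#\w+)?$")
NEW_SLUG_RE = re.compile(r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#[\w-]+)?$")

slug = "crewai-amp:my-crew#my-agent-v2"   # hypothetical slug with a hyphenated fragment
print(bool(OLD_SLUG_RE.match(slug)), bool(NEW_SLUG_RE.match(slug)))  # False True
```

The base name already allowed hyphens via `[a-zA-Z0-9_-]*`, so only fragments like `#my-agent-v2` gain coverage.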
View File

@@ -30,12 +30,9 @@ class CrewAgentExecutorMixin:
memory = getattr(self.agent, "memory", None) or (
getattr(self.crew, "_memory", None) if self.crew else None
)
if memory is None or not self.task or getattr(memory, "_read_only", False):
if memory is None or not self.task or memory.read_only:
return
if (
f"Action: {sanitize_tool_name('Delegate work to coworker')}"
in output.text
):
if f"Action: {sanitize_tool_name('Delegate work to coworker')}" in output.text:
return
try:
raw = (
@@ -48,6 +45,4 @@ class CrewAgentExecutorMixin:
if extracted:
memory.remember_many(extracted, agent_role=self.agent.role)
except Exception as e:
self.agent._logger.log(
"error", f"Failed to save to memory: {e}"
)
self.agent._logger.log("error", f"Failed to save to memory: {e}")

View File

@@ -9,6 +9,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor, as_completed
import contextvars
import inspect
import logging
from typing import TYPE_CHECKING, Any, Literal, cast
@@ -755,6 +756,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
with ThreadPoolExecutor(max_workers=max_workers) as pool:
futures = {
pool.submit(
contextvars.copy_context().run,
self._execute_single_native_tool_call,
call_id=call_id,
func_name=func_name,
@@ -893,7 +895,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
ToolUsageStartedEvent,
)
args_dict, parse_error = parse_tool_call_args(func_args, func_name, call_id, original_tool)
args_dict, parse_error = parse_tool_call_args(
func_args, func_name, call_id, original_tool
)
if parse_error is not None:
return parse_error

View File

@@ -182,15 +182,24 @@ def log_tasks_outputs() -> None:
@crewai.command()
@click.option("-m", "--memory", is_flag=True, help="Reset MEMORY")
@click.option(
"-l", "--long", is_flag=True, hidden=True,
"-l",
"--long",
is_flag=True,
hidden=True,
help="[Deprecated: use --memory] Reset memory",
)
@click.option(
"-s", "--short", is_flag=True, hidden=True,
"-s",
"--short",
is_flag=True,
hidden=True,
help="[Deprecated: use --memory] Reset memory",
)
@click.option(
"-e", "--entities", is_flag=True, hidden=True,
"-e",
"--entities",
is_flag=True,
hidden=True,
help="[Deprecated: use --memory] Reset memory",
)
@click.option("-kn", "--knowledge", is_flag=True, help="Reset KNOWLEDGE storage")
@@ -218,7 +227,13 @@ def reset_memories(
# Treat legacy flags as --memory with a deprecation warning
if long or short or entities:
legacy_used = [
f for f, v in [("--long", long), ("--short", short), ("--entities", entities)] if v
f
for f, v in [
("--long", long),
("--short", short),
("--entities", entities),
]
if v
]
click.echo(
f"Warning: {', '.join(legacy_used)} {'is' if len(legacy_used) == 1 else 'are'} "
@@ -238,9 +253,7 @@ def reset_memories(
"Please specify at least one memory type to reset using the appropriate flags."
)
return
reset_memories_command(
memory, knowledge, agent_knowledge, kickoff_outputs, all
)
reset_memories_command(memory, knowledge, agent_knowledge, kickoff_outputs, all)
except Exception as e:
click.echo(f"An error occurred while resetting memories: {e}", err=True)
@@ -669,18 +682,11 @@ def traces_enable():
from rich.console import Console
from rich.panel import Panel
from crewai.events.listeners.tracing.utils import (
_load_user_data,
_save_user_data,
)
from crewai.events.listeners.tracing.utils import update_user_data
console = Console()
# Update user data to enable traces
user_data = _load_user_data()
user_data["trace_consent"] = True
user_data["first_execution_done"] = True
_save_user_data(user_data)
update_user_data({"trace_consent": True, "first_execution_done": True})
panel = Panel(
"✅ Trace collection has been enabled!\n\n"
@@ -699,18 +705,11 @@ def traces_disable():
from rich.console import Console
from rich.panel import Panel
from crewai.events.listeners.tracing.utils import (
_load_user_data,
_save_user_data,
)
from crewai.events.listeners.tracing.utils import update_user_data
console = Console()
# Update user data to disable traces
user_data = _load_user_data()
user_data["trace_consent"] = False
user_data["first_execution_done"] = True
_save_user_data(user_data)
update_user_data({"trace_consent": False, "first_execution_done": True})
panel = Panel(
"❌ Trace collection has been disabled!\n\n"

View File

@@ -1,3 +1,4 @@
import contextvars
import json
from pathlib import Path
import platform
@@ -80,7 +81,10 @@ def run_chat() -> None:
# Start loading indicator
loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
ctx = contextvars.copy_context()
loading_thread = threading.Thread(
target=ctx.run, args=(show_loading, loading_complete)
)
loading_thread.start()
try:

View File
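The `run_chat` fix applies the same idea to a bare `threading.Thread`, which likewise does not inherit the parent's contextvars. A small sketch of the before/after behavior (variable names are illustrative):

```python
import contextvars
import threading

user: contextvars.ContextVar[str] = contextvars.ContextVar(
    "user", default="anonymous"
)

seen: list[str] = []

def show_loading() -> None:
    # Reads the ContextVar from inside the worker thread.
    seen.append(user.get())

user.set("alice")

# Plain Thread: starts with an empty context, so only the default is seen.
t1 = threading.Thread(target=show_loading)
t1.start()
t1.join()

# ctx.run wrapper, as in run_chat: the parent's context travels along.
ctx = contextvars.copy_context()
t2 = threading.Thread(target=ctx.run, args=(show_loading,))
t2.start()
t2.join()

print(seen)
```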

@@ -125,13 +125,19 @@ class MemoryTUI(App[None]):
from crewai.memory.storage.lancedb_storage import LanceDBStorage
from crewai.memory.unified_memory import Memory
storage = LanceDBStorage(path=storage_path) if storage_path else LanceDBStorage()
storage = (
LanceDBStorage(path=storage_path) if storage_path else LanceDBStorage()
)
embedder = None
if embedder_config is not None:
from crewai.rag.embeddings.factory import build_embedder
embedder = build_embedder(embedder_config)
self._memory = Memory(storage=storage, embedder=embedder) if embedder else Memory(storage=storage)
self._memory = (
Memory(storage=storage, embedder=embedder)
if embedder
else Memory(storage=storage)
)
except Exception as e:
self._init_error = str(e)
@@ -200,11 +206,7 @@ class MemoryTUI(App[None]):
if len(record.content) > 80
else record.content
)
label = (
f"{date_str} "
f"[bold]{record.importance:.1f}[/] "
f"{preview}"
)
label = f"{date_str} [bold]{record.importance:.1f}[/] {preview}"
option_list.add_option(label)
def _populate_recall_list(self) -> None:
@@ -220,9 +222,7 @@ class MemoryTUI(App[None]):
else m.record.content
)
label = (
f"[bold]\\[{m.score:.2f}][/] "
f"{preview} "
f"[dim]scope={m.record.scope}[/]"
f"[bold]\\[{m.score:.2f}][/] {preview} [dim]scope={m.record.scope}[/]"
)
option_list.add_option(label)
@@ -251,8 +251,7 @@ class MemoryTUI(App[None]):
lines.append(f"[dim]Scope:[/] [bold]{record.scope}[/]")
lines.append(f"[dim]Importance:[/] [bold]{record.importance:.2f}[/]")
lines.append(
f"[dim]Created:[/] "
f"{record.created_at.strftime('%Y-%m-%d %H:%M:%S')}"
f"[dim]Created:[/] {record.created_at.strftime('%Y-%m-%d %H:%M:%S')}"
)
lines.append(
f"[dim]Last accessed:[/] "
@@ -362,17 +361,11 @@ class MemoryTUI(App[None]):
panel = self.query_one("#info-panel", Static)
panel.loading = True
try:
scope = (
self._selected_scope
if self._selected_scope != "/"
else None
)
scope = self._selected_scope if self._selected_scope != "/" else None
loop = asyncio.get_event_loop()
matches = await loop.run_in_executor(
None,
lambda: self._memory.recall(
query, scope=scope, limit=10, depth="deep"
),
lambda: self._memory.recall(query, scope=scope, limit=10, depth="deep"),
)
self._recall_matches = matches or []
self._view_mode = "recall"

View File

@@ -95,9 +95,7 @@ def reset_memories_command(
continue
if memory:
_reset_flow_memory(flow)
click.echo(
f"[Flow ({flow_name})] Memory has been reset."
)
click.echo(f"[Flow ({flow_name})] Memory has been reset.")
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while resetting the memories: {e}", err=True)

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.10.1"
"crewai[tools]==1.10.2a1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.10.1"
"crewai[tools]==1.10.2a1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.10.1"
"crewai[tools]==1.10.2a1"
]
[tool.crewai]

View File

@@ -442,9 +442,7 @@ def get_flows(flow_path: str = "main.py") -> list[Flow]:
for search_path in search_paths:
for root, dirs, files in os.walk(search_path):
dirs[:] = [
d
for d in dirs
if d not in _SKIP_DIRS and not d.startswith(".")
d for d in dirs if d not in _SKIP_DIRS and not d.startswith(".")
]
if flow_path in files and "cli/templates" not in root:
file_os_path = os.path.join(root, flow_path)
@@ -464,9 +462,7 @@ def get_flows(flow_path: str = "main.py") -> list[Flow]:
for attr_name in dir(module):
module_attr = getattr(module, attr_name)
try:
if flow_instance := get_flow_instance(
module_attr
):
if flow_instance := get_flow_instance(module_attr):
flow_instances.append(flow_instance)
except Exception: # noqa: S112
continue

View File

@@ -1410,9 +1410,7 @@ class Crew(FlowTrackable, BaseModel):
return self._merge_tools(tools, cast(list[BaseTool], code_tools))
return tools
def _add_memory_tools(
self, tools: list[BaseTool], memory: Any
) -> list[BaseTool]:
def _add_memory_tools(self, tools: list[BaseTool], memory: Any) -> list[BaseTool]:
"""Add recall and remember tools when memory is available.
Args:

View File

@@ -1,4 +1,5 @@
from collections.abc import Callable
import contextvars
from contextvars import ContextVar, Token
from datetime import datetime
import getpass
@@ -18,6 +19,7 @@ from rich.console import Console
from rich.panel import Panel
from rich.text import Text
from crewai.utilities.lock_store import lock as store_lock
from crewai.utilities.paths import db_storage_path
from crewai.utilities.serialization import to_serializable
@@ -137,12 +139,25 @@ def _load_user_data() -> dict[str, Any]:
return {}
def _save_user_data(data: dict[str, Any]) -> None:
def _user_data_lock_name() -> str:
"""Return a stable lock name for the user data file."""
return f"file:{os.path.realpath(_user_data_file())}"
def update_user_data(updates: dict[str, Any]) -> None:
"""Atomically read-modify-write the user data file.
Args:
updates: Key-value pairs to merge into the existing user data.
"""
try:
p = _user_data_file()
p.write_text(json.dumps(data, indent=2))
with store_lock(_user_data_lock_name()):
data = _load_user_data()
data.update(updates)
p = _user_data_file()
p.write_text(json.dumps(data, indent=2))
except (OSError, PermissionError) as e:
logger.warning(f"Failed to save user data: {e}")
logger.warning(f"Failed to update user data: {e}")
def has_user_declined_tracing() -> bool:
@@ -357,24 +372,30 @@ def _get_generic_system_id() -> str | None:
return None
def get_user_id() -> str:
"""Stable, anonymized user identifier with caching."""
data = _load_user_data()
if "user_id" in data:
return cast(str, data["user_id"])
def _generate_user_id() -> str:
"""Compute an anonymized user identifier from username and machine ID."""
try:
username = getpass.getuser()
except Exception:
username = "unknown"
seed = f"{username}|{_get_machine_id()}"
uid = hashlib.sha256(seed.encode()).hexdigest()
return hashlib.sha256(seed.encode()).hexdigest()
data["user_id"] = uid
_save_user_data(data)
return uid
def get_user_id() -> str:
"""Stable, anonymized user identifier with caching."""
with store_lock(_user_data_lock_name()):
data = _load_user_data()
if "user_id" in data:
return cast(str, data["user_id"])
uid = _generate_user_id()
data["user_id"] = uid
p = _user_data_file()
p.write_text(json.dumps(data, indent=2))
return uid
def is_first_execution() -> bool:
@@ -389,20 +410,23 @@ def mark_first_execution_done(user_consented: bool = False) -> None:
Args:
user_consented: Whether the user consented to trace collection.
"""
data = _load_user_data()
if data.get("first_execution_done", False):
return
with store_lock(_user_data_lock_name()):
data = _load_user_data()
if data.get("first_execution_done", False):
return
data.update(
{
"first_execution_done": True,
"first_execution_at": datetime.now().timestamp(),
"user_id": get_user_id(),
"machine_id": _get_machine_id(),
"trace_consent": user_consented,
}
)
_save_user_data(data)
uid = data.get("user_id") or _generate_user_id()
data.update(
{
"first_execution_done": True,
"first_execution_at": datetime.now().timestamp(),
"user_id": uid,
"machine_id": _get_machine_id(),
"trace_consent": user_consented,
}
)
p = _user_data_file()
p.write_text(json.dumps(data, indent=2))
def safe_serialize_to_dict(obj: Any, exclude: set[str] | None = None) -> dict[str, Any]:
@@ -509,7 +533,8 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
# Handle all input-related errors silently
result[0] = False
input_thread = threading.Thread(target=get_input, daemon=True)
ctx = contextvars.copy_context()
input_thread = threading.Thread(target=ctx.run, args=(get_input,), daemon=True)
input_thread.start()
input_thread.join(timeout=timeout_seconds)

View File
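The `update_user_data` refactor above replaces separate load/save calls with a single read-modify-write under one lock, so two processes flipping different flags cannot clobber each other. A sketch of the pattern, using a `threading.Lock` as a stand-in for crewai's cross-process `store_lock` (a real implementation would use an OS-level file lock):

```python
import json
import tempfile
import threading
from pathlib import Path

# Stand-in for the cross-process store_lock; illustrative only.
_lock = threading.Lock()

def update_user_data(path: Path, updates: dict) -> None:
    # Read-modify-write under one lock so concurrent writers cannot
    # interleave and silently drop each other's keys.
    with _lock:
        try:
            data = json.loads(path.read_text())
        except (FileNotFoundError, json.JSONDecodeError):
            data = {}
        data.update(updates)
        path.write_text(json.dumps(data, indent=2))

p = Path(tempfile.mkdtemp()) / "user_data.json"
update_user_data(p, {"trace_consent": True})
update_user_data(p, {"first_execution_done": True})
merged = json.loads(p.read_text())
print(merged)
```

Without the lock around the whole read-update-write sequence, the second writer could read stale data and overwrite the first writer's key.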

@@ -43,6 +43,7 @@ def should_suppress_console_output() -> bool:
class ConsoleFormatter:
tool_usage_counts: ClassVar[dict[str, int]] = {}
_tool_counts_lock: ClassVar[threading.Lock] = threading.Lock()
current_a2a_turn_count: int = 0
_pending_a2a_message: str | None = None
@@ -445,9 +446,11 @@ To enable tracing, do any one of these:
if not self.verbose:
return
# Update tool usage count
self.tool_usage_counts[tool_name] = self.tool_usage_counts.get(tool_name, 0) + 1
iteration = self.tool_usage_counts[tool_name]
with self._tool_counts_lock:
self.tool_usage_counts[tool_name] = (
self.tool_usage_counts.get(tool_name, 0) + 1
)
iteration = self.tool_usage_counts[tool_name]
content = Text()
content.append("Tool: ", style="white")
@@ -474,7 +477,8 @@ To enable tracing, do any one of these:
if not self.verbose:
return
iteration = self.tool_usage_counts.get(tool_name, 1)
with self._tool_counts_lock:
iteration = self.tool_usage_counts.get(tool_name, 1)
content = Text()
content.append("Tool Completed\n", style="green bold")
@@ -500,7 +504,8 @@ To enable tracing, do any one of these:
if not self.verbose:
return
iteration = self.tool_usage_counts.get(tool_name, 1)
with self._tool_counts_lock:
iteration = self.tool_usage_counts.get(tool_name, 1)
content = Text()
content.append("Tool Failed\n", style="red bold")

View File
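The `ConsoleFormatter` change guards `tool_usage_counts` with a class-level lock because the read-increment-write sequence is not atomic. A quick demonstration of the guarded-counter pattern (tool name is illustrative):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Increment and read share one lock so concurrent tool events
# cannot lose updates.
counts: dict[str, int] = {}
counts_lock = threading.Lock()

def record_use(tool_name: str) -> int:
    with counts_lock:
        counts[tool_name] = counts.get(tool_name, 0) + 1
        return counts[tool_name]

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(lambda _: record_use("web_search"), range(200)))

print(counts["web_search"])
```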

@@ -3,6 +3,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine
from concurrent.futures import ThreadPoolExecutor, as_completed
import contextvars
from datetime import datetime
import inspect
import json
@@ -728,7 +729,11 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
max_workers = min(8, len(runnable_tool_calls))
with ThreadPoolExecutor(max_workers=max_workers) as pool:
future_to_idx = {
pool.submit(self._execute_single_native_tool_call, tool_call): idx
pool.submit(
contextvars.copy_context().run,
self._execute_single_native_tool_call,
tool_call,
): idx
for idx, tool_call in enumerate(runnable_tool_calls)
}
ordered_results: list[dict[str, Any] | None] = [None] * len(

View File

@@ -34,6 +34,7 @@ class ConsoleProvider:
```python
from crewai.flow.async_feedback import ConsoleProvider
@human_feedback(
message="Review this:",
provider=ConsoleProvider(),
@@ -46,6 +47,7 @@ class ConsoleProvider:
```python
from crewai.flow import Flow, start
class MyFlow(Flow):
@start()
def gather_info(self):

View File

@@ -17,6 +17,7 @@ from collections.abc import (
ValuesView,
)
from concurrent.futures import Future, ThreadPoolExecutor
import contextvars
import copy
import enum
import inspect
@@ -497,6 +498,52 @@ class LockedListProxy(list, Generic[T]): # type: ignore[type-arg]
def __bool__(self) -> bool:
return bool(self._list)
def index(
self, value: T, start: SupportsIndex = 0, stop: SupportsIndex | None = None
) -> int: # type: ignore[override]
if stop is None:
return self._list.index(value, start)
return self._list.index(value, start, stop)
def count(self, value: T) -> int:
return self._list.count(value)
def sort(self, *, key: Any = None, reverse: bool = False) -> None:
with self._lock:
self._list.sort(key=key, reverse=reverse)
def reverse(self) -> None:
with self._lock:
self._list.reverse()
def copy(self) -> list[T]:
return self._list.copy()
def __add__(self, other: list[T]) -> list[T]:
return self._list + other
def __radd__(self, other: list[T]) -> list[T]:
return other + self._list
def __iadd__(self, other: Iterable[T]) -> LockedListProxy[T]:
with self._lock:
self._list += list(other)
return self
def __mul__(self, n: SupportsIndex) -> list[T]:
return self._list * n
def __rmul__(self, n: SupportsIndex) -> list[T]:
return self._list * n
def __imul__(self, n: SupportsIndex) -> LockedListProxy[T]:
with self._lock:
self._list *= n
return self
def __reversed__(self) -> Iterator[T]:
return reversed(self._list)
def __eq__(self, other: object) -> bool:
"""Compare based on the underlying list contents."""
if isinstance(other, LockedListProxy):
@@ -579,6 +626,23 @@ class LockedDictProxy(dict, Generic[T]): # type: ignore[type-arg]
def __bool__(self) -> bool:
return bool(self._dict)
def copy(self) -> dict[str, T]:
return self._dict.copy()
def __or__(self, other: dict[str, T]) -> dict[str, T]:
return self._dict | other
def __ror__(self, other: dict[str, T]) -> dict[str, T]:
return other | self._dict
def __ior__(self, other: dict[str, T]) -> LockedDictProxy[T]:
with self._lock:
self._dict |= other
return self
def __reversed__(self) -> Iterator[str]:
return reversed(self._dict)
def __eq__(self, other: object) -> bool:
"""Compare based on the underlying dict contents."""
if isinstance(other, LockedDictProxy):
@@ -620,6 +684,10 @@ class StateProxy(Generic[T]):
if name in ("_proxy_state", "_proxy_lock"):
object.__setattr__(self, name, value)
else:
if isinstance(value, LockedListProxy):
value = value._list
elif isinstance(value, LockedDictProxy):
value = value._dict
with object.__getattribute__(self, "_proxy_lock"):
setattr(object.__getattribute__(self, "_proxy_state"), name, value)
@@ -1746,8 +1814,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
try:
asyncio.get_running_loop()
ctx = contextvars.copy_context()
with ThreadPoolExecutor(max_workers=1) as pool:
return pool.submit(asyncio.run, _run_flow()).result()
return pool.submit(ctx.run, asyncio.run, _run_flow()).result()
except RuntimeError:
return asyncio.run(_run_flow())
@@ -2171,8 +2240,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
else:
# Run sync methods in thread pool for isolation
# This allows Agent.kickoff() to work synchronously inside Flow methods
import contextvars
ctx = contextvars.copy_context()
result = await asyncio.to_thread(ctx.run, method, *args, **kwargs)
finally:
@@ -2791,8 +2858,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
# Manual executor management to avoid shutdown(wait=True)
# deadlock when the provider call outlives the timeout.
executor = ThreadPoolExecutor(max_workers=1)
ctx = contextvars.copy_context()
future = executor.submit(
provider.request_input, message, self, metadata
ctx.run, provider.request_input, message, self, metadata
)
try:
raw = future.result(timeout=timeout)

View File
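The `Flow.kickoff` hunk above combines two of this branch's themes: when a loop is already running, `asyncio.run` must move to a helper thread, and the caller's context must be copied along so ContextVars survive that hop. A reduced sketch of the control flow (names are illustrative, not the repo's):

```python
import asyncio
import contextvars
from concurrent.futures import ThreadPoolExecutor

tenant: contextvars.ContextVar[str] = contextvars.ContextVar(
    "tenant", default="none"
)

async def _work() -> str:
    return tenant.get()

def kickoff() -> str:
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running: safe to run directly in this thread.
        return asyncio.run(_work())
    # A loop is already running here, so asyncio.run would raise.
    # Run it in a helper thread, copying the caller's context first.
    ctx = contextvars.copy_context()
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(ctx.run, asyncio.run, _work()).result()

async def main() -> str:
    tenant.set("acme")
    # Synchronous kickoff from inside a running loop.
    return kickoff()

result = asyncio.run(main())
print(result)
```

Without the `ctx.run` wrapper, the nested loop in the helper thread would see the ContextVar default rather than the caller's value.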

@@ -188,7 +188,7 @@ def human_feedback(
metadata: dict[str, Any] | None = None,
provider: HumanFeedbackProvider | None = None,
learn: bool = False,
learn_source: str = "hitl"
learn_source: str = "hitl",
) -> Callable[[F], F]:
"""Decorator for Flow methods that require human feedback.
@@ -328,9 +328,7 @@ def human_feedback(
"""Recall past HITL lessons and use LLM to pre-review the output."""
try:
query = f"human feedback lessons for {func.__name__}: {method_output!s}"
matches = flow_instance.memory.recall(
query, source=learn_source
)
matches = flow_instance.memory.recall(query, source=learn_source)
if not matches:
return method_output
@@ -341,7 +339,10 @@ def human_feedback(
lessons=lessons,
)
messages = [
{"role": "system", "content": _get_hitl_prompt("hitl_pre_review_system")},
{
"role": "system",
"content": _get_hitl_prompt("hitl_pre_review_system"),
},
{"role": "user", "content": prompt},
]
if getattr(llm_inst, "supports_function_calling", lambda: False)():
@@ -366,7 +367,10 @@ def human_feedback(
feedback=raw_feedback,
)
messages = [
{"role": "system", "content": _get_hitl_prompt("hitl_distill_system")},
{
"role": "system",
"content": _get_hitl_prompt("hitl_distill_system"),
},
{"role": "user", "content": prompt},
]
@@ -408,7 +412,7 @@ def human_feedback(
emit=list(emit) if emit else None,
default_outcome=default_outcome,
metadata=metadata or {},
llm=llm if isinstance(llm, str) else None,
llm=llm if isinstance(llm, str) else getattr(llm, "model", None),
)
# Determine effective provider:
@@ -487,7 +491,11 @@ def human_feedback(
result = _process_feedback(self, method_output, raw_feedback)
# Distill: extract lessons from output + feedback, store in memory
if learn and getattr(self, "memory", None) is not None and raw_feedback.strip():
if (
learn
and getattr(self, "memory", None) is not None
and raw_feedback.strip()
):
_distill_and_store_lessons(self, method_output, raw_feedback)
return result
@@ -507,7 +515,11 @@ def human_feedback(
result = _process_feedback(self, method_output, raw_feedback)
# Distill: extract lessons from output + feedback, store in memory
if learn and getattr(self, "memory", None) is not None and raw_feedback.strip():
if (
learn
and getattr(self, "memory", None) is not None
and raw_feedback.strip()
):
_distill_and_store_lessons(self, method_output, raw_feedback)
return result
@@ -534,7 +546,7 @@ def human_feedback(
metadata=metadata,
provider=provider,
learn=learn,
learn_source=learn_source
learn_source=learn_source,
)
wrapper.__is_flow_method__ = True

View File

@@ -1,11 +1,10 @@
"""
SQLite-based implementation of flow state persistence.
"""
"""SQLite-based implementation of flow state persistence."""
from __future__ import annotations
from datetime import datetime, timezone
import json
import os
from pathlib import Path
import sqlite3
from typing import TYPE_CHECKING, Any
@@ -13,6 +12,7 @@ from typing import TYPE_CHECKING, Any
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
from crewai.utilities.lock_store import lock as store_lock
from crewai.utilities.paths import db_storage_path
@@ -68,11 +68,16 @@ class SQLiteFlowPersistence(FlowPersistence):
raise ValueError("Database path must be provided")
self.db_path = path # Now mypy knows this is str
self._lock_name = f"sqlite:{os.path.realpath(self.db_path)}"
self.init_db()
def init_db(self) -> None:
"""Create the necessary tables if they don't exist."""
with sqlite3.connect(self.db_path) as conn:
with (
store_lock(self._lock_name),
sqlite3.connect(self.db_path, timeout=30) as conn,
):
conn.execute("PRAGMA journal_mode=WAL")
# Main state table
conn.execute(
"""
@@ -113,6 +118,49 @@ class SQLiteFlowPersistence(FlowPersistence):
"""
)
def _save_state_sql(
self,
conn: sqlite3.Connection,
flow_uuid: str,
method_name: str,
state_dict: dict[str, Any],
) -> None:
"""Execute the save-state INSERT without acquiring the lock.
Args:
conn: An open SQLite connection.
flow_uuid: Unique identifier for the flow instance.
method_name: Name of the method that just completed.
state_dict: State data as a plain dict.
"""
conn.execute(
"""
INSERT INTO flow_states (
flow_uuid,
method_name,
timestamp,
state_json
) VALUES (?, ?, ?, ?)
""",
(
flow_uuid,
method_name,
datetime.now(timezone.utc).isoformat(),
json.dumps(state_dict),
),
)
@staticmethod
def _to_state_dict(state_data: dict[str, Any] | BaseModel) -> dict[str, Any]:
"""Convert state_data to a plain dict."""
if isinstance(state_data, BaseModel):
return state_data.model_dump()
if isinstance(state_data, dict):
return state_data
raise ValueError(
f"state_data must be either a Pydantic BaseModel or dict, got {type(state_data)}"
)
def save_state(
self,
flow_uuid: str,
@@ -126,33 +174,13 @@ class SQLiteFlowPersistence(FlowPersistence):
method_name: Name of the method that just completed
state_data: Current state data (either dict or Pydantic model)
"""
# Convert state_data to dict, handling both Pydantic and dict cases
if isinstance(state_data, BaseModel):
state_dict = state_data.model_dump()
elif isinstance(state_data, dict):
state_dict = state_data
else:
raise ValueError(
f"state_data must be either a Pydantic BaseModel or dict, got {type(state_data)}"
)
state_dict = self._to_state_dict(state_data)
with sqlite3.connect(self.db_path) as conn:
conn.execute(
"""
INSERT INTO flow_states (
flow_uuid,
method_name,
timestamp,
state_json
) VALUES (?, ?, ?, ?)
""",
(
flow_uuid,
method_name,
datetime.now(timezone.utc).isoformat(),
json.dumps(state_dict),
),
)
with (
store_lock(self._lock_name),
sqlite3.connect(self.db_path, timeout=30) as conn,
):
self._save_state_sql(conn, flow_uuid, method_name, state_dict)
def load_state(self, flow_uuid: str) -> dict[str, Any] | None:
"""Load the most recent state for a given flow UUID.
@@ -163,7 +191,7 @@ class SQLiteFlowPersistence(FlowPersistence):
Returns:
The most recent state as a dictionary, or None if no state exists
"""
with sqlite3.connect(self.db_path) as conn:
with sqlite3.connect(self.db_path, timeout=30) as conn:
cursor = conn.execute(
"""
SELECT state_json
@@ -197,24 +225,14 @@ class SQLiteFlowPersistence(FlowPersistence):
context: The pending feedback context with all resume information
state_data: Current state data
"""
# Import here to avoid circular imports
state_dict = self._to_state_dict(state_data)
# Convert state_data to dict
if isinstance(state_data, BaseModel):
state_dict = state_data.model_dump()
elif isinstance(state_data, dict):
state_dict = state_data
else:
raise ValueError(
f"state_data must be either a Pydantic BaseModel or dict, got {type(state_data)}"
)
with (
store_lock(self._lock_name),
sqlite3.connect(self.db_path, timeout=30) as conn,
):
self._save_state_sql(conn, flow_uuid, context.method_name, state_dict)
# Also save to regular state table for consistency
self.save_state(flow_uuid, context.method_name, state_data)
# Save pending feedback context
with sqlite3.connect(self.db_path) as conn:
# Use INSERT OR REPLACE to handle re-triggering feedback on same flow
conn.execute(
"""
INSERT OR REPLACE INTO pending_feedback (
@@ -248,7 +266,7 @@ class SQLiteFlowPersistence(FlowPersistence):
# Import here to avoid circular imports
from crewai.flow.async_feedback.types import PendingFeedbackContext
with sqlite3.connect(self.db_path) as conn:
with sqlite3.connect(self.db_path, timeout=30) as conn:
cursor = conn.execute(
"""
SELECT state_json, context_json
@@ -272,7 +290,10 @@ class SQLiteFlowPersistence(FlowPersistence):
Args:
flow_uuid: Unique identifier for the flow instance
"""
with sqlite3.connect(self.db_path) as conn:
with (
store_lock(self._lock_name),
sqlite3.connect(self.db_path, timeout=30) as conn,
):
conn.execute(
"""
DELETE FROM pending_feedback

View File
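The persistence diff pairs the new cross-process lock with two SQLite-level defenses: WAL journaling and a 30-second busy timeout, so a concurrent writer waits on SQLite's own lock instead of failing immediately with "database is locked". A minimal sketch of that connection pattern (table columns mirror the diff; values are illustrative):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "flows.db")

# timeout=30 makes sqlite3 retry on a locked database for up to 30s.
with sqlite3.connect(db, timeout=30) as conn:
    # WAL allows readers to proceed while a writer holds the log.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS flow_states ("
        "flow_uuid TEXT, method_name TEXT, timestamp TEXT, state_json TEXT)"
    )

with sqlite3.connect(db, timeout=30) as conn:
    conn.execute(
        "INSERT INTO flow_states VALUES (?, ?, ?, ?)",
        ("uuid-1", "step_a", "2026-03-13T00:00:00", '{"k": 1}'),
    )

with sqlite3.connect(db, timeout=30) as conn:
    rows = conn.execute("SELECT method_name FROM flow_states").fetchall()
print(rows)
```

Note that `with sqlite3.connect(...)` commits on success but does not close the connection; the diff keeps reads lock-free and only wraps writes in `store_lock`.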

@@ -600,7 +600,7 @@ class LiteAgent(FlowTrackable, BaseModel):
def _save_to_memory(self, output_text: str) -> None:
"""Extract discrete memories from the run and remember each. No-op if _memory is None or read-only."""
if self._memory is None or getattr(self._memory, "_read_only", False):
if self._memory is None or self._memory.read_only:
return
input_str = self._get_last_user_content() or "User request"
try:

View File

@@ -22,7 +22,12 @@ if TYPE_CHECKING:
try:
from anthropic import Anthropic, AsyncAnthropic, transform_schema
from anthropic.types import Message, TextBlock, ThinkingBlock, ToolUseBlock
from anthropic.types import (
Message,
TextBlock,
ThinkingBlock,
ToolUseBlock,
)
from anthropic.types.beta import BetaMessage, BetaTextBlock, BetaToolUseBlock
import httpx
except ImportError:
@@ -31,6 +36,11 @@ except ImportError:
) from None
TOOL_SEARCH_TOOL_TYPES: Final[tuple[str, ...]] = (
"tool_search_tool_regex_20251119",
"tool_search_tool_bm25_20251119",
)
ANTHROPIC_FILES_API_BETA: Final = "files-api-2025-04-14"
ANTHROPIC_STRUCTURED_OUTPUTS_BETA: Final = "structured-outputs-2025-11-13"
@@ -117,6 +127,22 @@ class AnthropicThinkingConfig(BaseModel):
budget_tokens: int | None = None
class AnthropicToolSearchConfig(BaseModel):
"""Configuration for Anthropic's server-side tool search.
When enabled, tools marked with defer_loading=True are not loaded into
context immediately. Instead, Claude uses the tool search tool to
dynamically discover and load relevant tools on-demand.
Attributes:
type: The tool search variant to use.
- "regex": Claude constructs regex patterns to search tool names/descriptions.
- "bm25": Claude uses natural language queries to search tools.
"""
type: Literal["regex", "bm25"] = "bm25"
class AnthropicCompletion(BaseLLM):
"""Anthropic native completion implementation.
@@ -140,6 +166,7 @@ class AnthropicCompletion(BaseLLM):
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
thinking: AnthropicThinkingConfig | None = None,
response_format: type[BaseModel] | None = None,
tool_search: AnthropicToolSearchConfig | bool | None = None,
**kwargs: Any,
):
"""Initialize Anthropic chat completion client.
@@ -159,6 +186,10 @@ class AnthropicCompletion(BaseLLM):
interceptor: HTTP interceptor for modifying requests/responses at transport level.
response_format: Pydantic model for structured output. When provided, responses
will be validated against this model schema.
tool_search: Enable Anthropic's server-side tool search. When True, uses "bm25"
variant by default. Pass an AnthropicToolSearchConfig to choose "regex" or
"bm25". When enabled, tools are automatically marked with defer_loading=True
and a tool search tool is injected into the tools list.
**kwargs: Additional parameters
"""
super().__init__(
@@ -190,6 +221,13 @@ class AnthropicCompletion(BaseLLM):
self.thinking = thinking
self.previous_thinking_blocks: list[ThinkingBlock] = []
self.response_format = response_format
# Tool search config
if tool_search is True:
self.tool_search = AnthropicToolSearchConfig()
elif isinstance(tool_search, AnthropicToolSearchConfig):
self.tool_search = tool_search
else:
self.tool_search = None
# Model-specific settings
self.is_claude_3 = "claude-3" in model.lower()
self.supports_tools = True
@@ -432,10 +470,23 @@ class AnthropicCompletion(BaseLLM):
# Handle tools for Claude 3+
if tools and self.supports_tools:
converted_tools = self._convert_tools_for_interference(tools)
# When tool_search is enabled and there are 2+ regular tools,
# inject the search tool and mark regular tools with defer_loading.
# With only 1 tool there's nothing to search — skip tool search
# entirely so the normal forced tool_choice optimisation still works.
regular_tools = [
t
for t in converted_tools
if t.get("type", "") not in TOOL_SEARCH_TOOL_TYPES
]
if self.tool_search is not None and len(regular_tools) >= 2:
converted_tools = self._apply_tool_search(converted_tools)
params["tools"] = converted_tools
if available_functions and len(converted_tools) == 1:
tool_name = converted_tools[0].get("name")
if available_functions and len(regular_tools) == 1:
tool_name = regular_tools[0].get("name")
if tool_name and tool_name in available_functions:
params["tool_choice"] = {"type": "tool", "name": tool_name}
@@ -454,6 +505,12 @@ class AnthropicCompletion(BaseLLM):
anthropic_tools = []
for tool in tools:
# Pass through tool search tool definitions unchanged
tool_type = tool.get("type", "")
if tool_type in TOOL_SEARCH_TOOL_TYPES:
anthropic_tools.append(tool)
continue
if "input_schema" in tool and "name" in tool and "description" in tool:
anthropic_tools.append(tool)
continue
@@ -466,15 +523,15 @@ class AnthropicCompletion(BaseLLM):
logging.error(f"Error converting tool to Anthropic format: {e}")
raise e
anthropic_tool = {
anthropic_tool: dict[str, Any] = {
"name": name,
"description": description,
}
if parameters and isinstance(parameters, dict):
anthropic_tool["input_schema"] = parameters # type: ignore[assignment]
anthropic_tool["input_schema"] = parameters
else:
anthropic_tool["input_schema"] = { # type: ignore[assignment]
anthropic_tool["input_schema"] = {
"type": "object",
"properties": {},
"required": [],
@@ -484,6 +541,55 @@ class AnthropicCompletion(BaseLLM):
return anthropic_tools
def _apply_tool_search(self, tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Inject tool search tool and mark regular tools with defer_loading.
When tool_search is enabled, this method:
1. Adds the appropriate tool search tool definition (regex or bm25)
2. Marks all regular tools with defer_loading=True so they are only
loaded when Claude discovers them via search
Args:
tools: Converted tool definitions in Anthropic format.
Returns:
Updated tools list with tool search tool prepended and
regular tools marked as deferred.
"""
if self.tool_search is None:
return tools
# Check if a tool search tool is already present (user passed one manually)
has_search_tool = any(
t.get("type", "") in TOOL_SEARCH_TOOL_TYPES for t in tools
)
result: list[dict[str, Any]] = []
if not has_search_tool:
# Map config type to API type identifier
type_map = {
"regex": "tool_search_tool_regex_20251119",
"bm25": "tool_search_tool_bm25_20251119",
}
tool_type = type_map[self.tool_search.type]
# Tool search tool names follow the convention: tool_search_tool_{variant}
tool_name = f"tool_search_tool_{self.tool_search.type}"
result.append({"type": tool_type, "name": tool_name})
for tool in tools:
# Don't modify tool search tools
if tool.get("type", "") in TOOL_SEARCH_TOOL_TYPES:
result.append(tool)
continue
# Mark regular tools as deferred if not already set
if "defer_loading" not in tool:
tool = {**tool, "defer_loading": True}
result.append(tool)
return result
def _extract_thinking_block(
self, content_block: Any
) -> ThinkingBlock | dict[str, Any] | None:

View File
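The `_apply_tool_search` logic above can be exercised in isolation with plain dicts, no Anthropic SDK required. This sketch mirrors the diff's behavior (tool names are illustrative) rather than being the repo's exact code:

```python
TOOL_SEARCH_TOOL_TYPES = (
    "tool_search_tool_regex_20251119",
    "tool_search_tool_bm25_20251119",
)

def apply_tool_search(tools: list[dict], variant: str = "bm25") -> list[dict]:
    type_map = {
        "regex": "tool_search_tool_regex_20251119",
        "bm25": "tool_search_tool_bm25_20251119",
    }
    result: list[dict] = []
    if not any(t.get("type", "") in TOOL_SEARCH_TOOL_TYPES for t in tools):
        # Prepend the search tool itself when the user didn't pass one.
        result.append(
            {"type": type_map[variant], "name": f"tool_search_tool_{variant}"}
        )
    for tool in tools:
        if tool.get("type", "") in TOOL_SEARCH_TOOL_TYPES:
            result.append(tool)
        elif "defer_loading" not in tool:
            # Regular tools are deferred until Claude discovers them.
            result.append({**tool, "defer_loading": True})
        else:
            result.append(tool)
    return result

out = apply_tool_search([{"name": "search_docs"}, {"name": "run_sql"}])
print(len(out), out[0]["type"], out[1]["defer_loading"])
```

The `>= 2` regular-tool guard in the diff matters here: with a single tool there is nothing to search, and deferring it would defeat the forced `tool_choice` path.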

@@ -1781,6 +1781,7 @@ class BedrockCompletion(BaseLLM):
converse_messages: list[LLMMessage] = []
system_message: str | None = None
pending_tool_results: list[dict[str, Any]] = []
for message in formatted_messages:
role = message.get("role")
@@ -1794,53 +1795,56 @@ class BedrockCompletion(BaseLLM):
system_message += f"\n\n{content}"
else:
system_message = cast(str, content)
elif role == "assistant" and tool_calls:
# Convert OpenAI-style tool_calls to Bedrock toolUse format
bedrock_content = []
for tc in tool_calls:
func = tc.get("function", {})
tool_use_block = {
"toolUse": {
"toolUseId": tc.get("id", f"call_{id(tc)}"),
"name": func.get("name", ""),
"input": func.get("arguments", {})
if isinstance(func.get("arguments"), dict)
else json.loads(func.get("arguments", "{}") or "{}"),
}
}
bedrock_content.append(tool_use_block)
converse_messages.append(
{"role": "assistant", "content": bedrock_content}
)
elif role == "tool":
if not tool_call_id:
raise ValueError("Tool message missing required tool_call_id")
converse_messages.append(
pending_tool_results.append(
{
"role": "user",
"content": [
{
"toolResult": {
"toolUseId": tool_call_id,
"content": [
{"text": str(content) if content else ""}
],
}
}
],
"toolResult": {
"toolUseId": tool_call_id,
"content": [{"text": str(content) if content else ""}],
}
}
)
else:
# Convert to Converse API format with proper content structure
if isinstance(content, list):
# Already formatted as multimodal content blocks
converse_messages.append({"role": role, "content": content})
else:
# String content - wrap in text block
text_content = content if content else ""
if pending_tool_results:
converse_messages.append(
{"role": role, "content": [{"text": text_content}]}
{"role": "user", "content": pending_tool_results}
)
pending_tool_results = []
if role == "assistant" and tool_calls:
# Convert OpenAI-style tool_calls to Bedrock toolUse format
bedrock_content = []
for tc in tool_calls:
func = tc.get("function", {})
tool_use_block = {
"toolUse": {
"toolUseId": tc.get("id", f"call_{id(tc)}"),
"name": func.get("name", ""),
"input": func.get("arguments", {})
if isinstance(func.get("arguments"), dict)
else json.loads(func.get("arguments", "{}") or "{}"),
}
}
bedrock_content.append(tool_use_block)
converse_messages.append(
{"role": "assistant", "content": bedrock_content}
)
else:
# Convert to Converse API format with proper content structure
if isinstance(content, list):
# Already formatted as multimodal content blocks
converse_messages.append({"role": role, "content": content})
else:
# String content - wrap in text block
text_content = content if content else ""
converse_messages.append(
{"role": role, "content": [{"text": text_content}]}
)
if pending_tool_results:
converse_messages.append({"role": "user", "content": pending_tool_results})
# CRITICAL: Handle model-specific conversation requirements
# Cohere and some other models require the conversation to end with a user message
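The fix in this hunk buffers consecutive `tool` messages into `pending_tool_results` and flushes them as a single `user` turn, since Bedrock's Converse API rejects back-to-back user messages. A simplified stand-in for that buffering, using stripped-down message shapes rather than the real `formatted_messages` structure:

```python
from typing import Any


def batch_tool_results(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Collapse runs of consecutive tool-result messages into one user turn."""
    out: list[dict[str, Any]] = []
    pending: list[dict[str, Any]] = []

    def flush() -> None:
        # Emit all buffered tool results as a single user message.
        if pending:
            out.append({"role": "user", "content": list(pending)})
            pending.clear()

    for msg in messages:
        if msg["role"] == "tool":
            pending.append(
                {
                    "toolResult": {
                        "toolUseId": msg["tool_call_id"],
                        "content": [{"text": str(msg.get("content", ""))}],
                    }
                }
            )
        else:
            flush()  # any non-tool message terminates the current batch
            out.append(msg)
    flush()  # trailing tool results at end of conversation
    return out
```

The real hunk does the flush at the top of the non-tool branch before converting the message, which is the same ordering this sketch expresses with `flush()` followed by `out.append(msg)`.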

View File

@@ -11,6 +11,7 @@ into a standalone MCPToolResolver. It handles three flavours of MCP reference:
from __future__ import annotations
import asyncio
import contextvars
import time
from typing import TYPE_CHECKING, Any, Final, cast
from urllib.parse import urlparse
@@ -25,6 +26,7 @@ from crewai.mcp.config import (
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
from crewai.utilities.string_utils import sanitize_tool_name
if TYPE_CHECKING:
@@ -74,10 +76,9 @@ class MCPToolResolver:
elif isinstance(mcp_config, str):
amp_refs.append(self._parse_amp_ref(mcp_config))
else:
tools, client = self._resolve_native(mcp_config)
tools, clients = self._resolve_native(mcp_config)
all_tools.extend(tools)
if client:
self._clients.append(client)
self._clients.extend(clients)
if amp_refs:
tools, clients = self._resolve_amp(amp_refs)
@@ -131,7 +132,7 @@ class MCPToolResolver:
all_tools: list[BaseTool] = []
all_clients: list[Any] = []
resolved_cache: dict[str, tuple[list[BaseTool], Any | None]] = {}
resolved_cache: dict[str, tuple[list[BaseTool], list[Any]]] = {}
for slug in unique_slugs:
config_dict = amp_configs_map.get(slug)
@@ -149,10 +150,9 @@ class MCPToolResolver:
mcp_server_config = self._build_mcp_config_from_dict(config_dict)
try:
tools, client = self._resolve_native(mcp_server_config)
resolved_cache[slug] = (tools, client)
if client:
all_clients.append(client)
tools, clients = self._resolve_native(mcp_server_config)
resolved_cache[slug] = (tools, clients)
all_clients.extend(clients)
except Exception as e:
crewai_event_bus.emit(
self,
@@ -170,8 +170,9 @@ class MCPToolResolver:
slug_tools, _ = cached
if specific_tool:
sanitized = sanitize_tool_name(specific_tool)
all_tools.extend(
t for t in slug_tools if t.name.endswith(f"_{specific_tool}")
t for t in slug_tools if t.name.endswith(f"_{sanitized}")
)
else:
all_tools.extend(slug_tools)
@@ -198,7 +199,6 @@ class MCPToolResolver:
plus_api = PlusAPI(api_key=get_platform_integration_token())
response = plus_api.get_mcp_configs(slugs)
if response.status_code == 200:
configs: dict[str, dict[str, Any]] = response.json().get("configs", {})
return configs
@@ -218,6 +218,7 @@ class MCPToolResolver:
def _resolve_external(self, mcp_ref: str) -> list[BaseTool]:
"""Resolve an HTTPS MCP server URL into tools."""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_tool_wrapper import MCPToolWrapper
if "#" in mcp_ref:
@@ -227,6 +228,9 @@ class MCPToolResolver:
server_params = {"url": server_url}
server_name = self._extract_server_name(server_url)
sanitized_specific_tool = (
sanitize_tool_name(specific_tool) if specific_tool else None
)
try:
tool_schemas = self._get_mcp_tool_schemas(server_params)
@@ -239,7 +243,7 @@ class MCPToolResolver:
tools = []
for tool_name, schema in tool_schemas.items():
if specific_tool and tool_name != specific_tool:
if sanitized_specific_tool and tool_name != sanitized_specific_tool:
continue
try:
@@ -271,14 +275,16 @@ class MCPToolResolver:
)
return []
def _resolve_native(
self, mcp_config: MCPServerConfig
) -> tuple[list[BaseTool], Any | None]:
"""Resolve an ``MCPServerConfig`` into tools, returning the client for cleanup."""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
@staticmethod
def _create_transport(
mcp_config: MCPServerConfig,
) -> tuple[StdioTransport | HTTPTransport | SSETransport, str]:
"""Create a fresh transport instance from an MCP server config.
transport: StdioTransport | HTTPTransport | SSETransport
Returns a ``(transport, server_name)`` tuple. Each call produces an
independent transport so that parallel tool executions never share
state.
"""
if isinstance(mcp_config, MCPServerStdio):
transport = StdioTransport(
command=mcp_config.command,
@@ -292,38 +298,54 @@ class MCPToolResolver:
headers=mcp_config.headers,
streamable=mcp_config.streamable,
)
server_name = self._extract_server_name(mcp_config.url)
server_name = MCPToolResolver._extract_server_name(mcp_config.url)
elif isinstance(mcp_config, MCPServerSSE):
transport = SSETransport(
url=mcp_config.url,
headers=mcp_config.headers,
)
server_name = self._extract_server_name(mcp_config.url)
server_name = MCPToolResolver._extract_server_name(mcp_config.url)
else:
raise ValueError(f"Unsupported MCP server config type: {type(mcp_config)}")
return transport, server_name
client = MCPClient(
transport=transport,
def _resolve_native(
self, mcp_config: MCPServerConfig
) -> tuple[list[BaseTool], list[Any]]:
"""Resolve an ``MCPServerConfig`` into tools.
Returns ``(tools, clients)`` where *clients* is always empty for
native tools (clients are now created on-demand per invocation).
A ``client_factory`` closure is passed to each ``MCPNativeTool`` so
every call -- even concurrent calls to the *same* tool -- gets its
own ``MCPClient`` + transport with no shared mutable state.
"""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
discovery_transport, server_name = self._create_transport(mcp_config)
discovery_client = MCPClient(
transport=discovery_transport,
cache_tools_list=mcp_config.cache_tools_list,
)
async def _setup_client_and_list_tools() -> list[dict[str, Any]]:
try:
if not client.connected:
await client.connect()
if not discovery_client.connected:
await discovery_client.connect()
tools_list = await client.list_tools()
tools_list = await discovery_client.list_tools()
try:
await client.disconnect()
await discovery_client.disconnect()
await asyncio.sleep(0.1)
except Exception as e:
self._logger.log("error", f"Error during disconnect: {e}")
return tools_list
except Exception as e:
if client.connected:
await client.disconnect()
if discovery_client.connected:
await discovery_client.disconnect()
await asyncio.sleep(0.1)
raise RuntimeError(
f"Error during client setup and tool listing: {e}"
@@ -334,9 +356,10 @@ class MCPToolResolver:
asyncio.get_running_loop()
import concurrent.futures
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, _setup_client_and_list_tools()
ctx.run, asyncio.run, _setup_client_and_list_tools()
)
tools_list = future.result()
except RuntimeError:
@@ -376,6 +399,13 @@ class MCPToolResolver:
filtered_tools.append(tool)
tools_list = filtered_tools
def _client_factory() -> MCPClient:
transport, _ = self._create_transport(mcp_config)
return MCPClient(
transport=transport,
cache_tools_list=mcp_config.cache_tools_list,
)
tools = []
for tool_def in tools_list:
tool_name = tool_def.get("name", "")
@@ -396,7 +426,7 @@ class MCPToolResolver:
try:
native_tool = MCPNativeTool(
mcp_client=client,
client_factory=_client_factory,
tool_name=tool_name,
tool_schema=tool_schema,
server_name=server_name,
@@ -407,10 +437,10 @@ class MCPToolResolver:
self._logger.log("error", f"Failed to create native MCP tool: {e}")
continue
return cast(list[BaseTool], tools), client
return cast(list[BaseTool], tools), []
except Exception as e:
if client.connected:
asyncio.run(client.disconnect())
if discovery_client.connected:
asyncio.run(discovery_client.disconnect())
raise RuntimeError(f"Failed to get native MCP tools: {e}") from e
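The `ctx.run(asyncio.run, ...)` change above is the general recipe for keeping ContextVars visible inside a worker thread that spins up its own event loop: new threads start with a fresh, empty `Context`, so values set in the caller are otherwise silently dropped. A self-contained sketch (variable names are illustrative):

```python
import asyncio
import contextvars
from concurrent.futures import ThreadPoolExecutor

request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id")


async def who_am_i() -> str:
    # Runs inside a brand-new event loop in the worker thread; the task
    # captures whatever Context is current when asyncio.run creates it.
    return request_id.get("<unset>")


def run_in_thread() -> str:
    ctx = contextvars.copy_context()  # snapshot the caller's ContextVars
    with ThreadPoolExecutor() as pool:
        # A plain pool.submit(asyncio.run, ...) would execute in the worker
        # thread's empty Context, and who_am_i() would see "<unset>".
        future = pool.submit(ctx.run, asyncio.run, who_am_i())
        return future.result()


request_id.set("req-42")
assert run_in_thread() == "req-42"
```

This is the same mechanism the memory flows below use when submitting work with `pool.submit(contextvars.copy_context().run, fn, ...)`.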

View File

@@ -308,7 +308,9 @@ def analyze_for_save(
return MemoryAnalysis.model_validate(response)
except Exception as e:
_logger.warning(
"Memory save analysis failed, using defaults: %s", e, exc_info=False,
"Memory save analysis failed, using defaults: %s",
e,
exc_info=False,
)
return _SAVE_DEFAULTS
@@ -366,6 +368,8 @@ def analyze_for_consolidation(
return ConsolidationPlan.model_validate(response)
except Exception as e:
_logger.warning(
"Consolidation analysis failed, defaulting to insert: %s", e, exc_info=False,
"Consolidation analysis failed, defaulting to insert: %s",
e,
exc_info=False,
)
return _CONSOLIDATION_DEFAULT

View File

@@ -11,6 +11,7 @@ Orchestrates the encoding side of memory in a single Flow with 5 steps:
from __future__ import annotations
from concurrent.futures import Future, ThreadPoolExecutor
import contextvars
from datetime import datetime
import math
from typing import Any
@@ -164,14 +165,20 @@ class EncodingFlow(Flow[EncodingState]):
def parallel_find_similar(self) -> None:
"""Search storage for similar records, concurrently for all active items."""
items = list(self.state.items)
active = [(i, item) for i, item in enumerate(items) if not item.dropped and item.embedding]
active = [
(i, item)
for i, item in enumerate(items)
if not item.dropped and item.embedding
]
if not active:
return
def _search_one(item: ItemState) -> list[tuple[MemoryRecord, float]]:
def _search_one(
item: ItemState,
) -> list[tuple[MemoryRecord, float]]:
scope_prefix = item.scope if item.scope and item.scope.strip("/") else None
return self._storage.search(
return self._storage.search( # type: ignore[no-any-return]
item.embedding,
scope_prefix=scope_prefix,
categories=None,
@@ -186,7 +193,14 @@ class EncodingFlow(Flow[EncodingState]):
item.top_similarity = float(raw[0][1]) if raw else 0.0
else:
with ThreadPoolExecutor(max_workers=min(len(active), 8)) as pool:
futures = [(i, item, pool.submit(_search_one, item)) for i, item in active]
futures = [
(
i,
item,
pool.submit(contextvars.copy_context().run, _search_one, item),
)
for i, item in active
]
for _, item, future in futures:
raw = future.result()
item.similar_records = [r for r, _ in raw]
@@ -250,24 +264,38 @@ class EncodingFlow(Flow[EncodingState]):
# Group B: consolidation only
self._apply_defaults(item)
consol_futures[i] = pool.submit(
contextvars.copy_context().run,
analyze_for_consolidation,
item.content, list(item.similar_records), self._llm,
item.content,
list(item.similar_records),
self._llm,
)
elif not fields_provided and not has_similar:
# Group C: field resolution only
save_futures[i] = pool.submit(
contextvars.copy_context().run,
analyze_for_save,
item.content, existing_scopes, existing_categories, self._llm,
item.content,
existing_scopes,
existing_categories,
self._llm,
)
else:
# Group D: both in parallel
save_futures[i] = pool.submit(
contextvars.copy_context().run,
analyze_for_save,
item.content, existing_scopes, existing_categories, self._llm,
item.content,
existing_scopes,
existing_categories,
self._llm,
)
consol_futures[i] = pool.submit(
contextvars.copy_context().run,
analyze_for_consolidation,
item.content, list(item.similar_records), self._llm,
item.content,
list(item.similar_records),
self._llm,
)
# Collect field-resolution results
@@ -300,8 +328,8 @@ class EncodingFlow(Flow[EncodingState]):
item.plan = ConsolidationPlan(actions=[], insert_new=True)
# Collect consolidation results
for i, future in consol_futures.items():
items[i].plan = future.result()
for i, consol_future in consol_futures.items():
items[i].plan = consol_future.result()
finally:
pool.shutdown(wait=False)
@@ -339,7 +367,9 @@ class EncodingFlow(Flow[EncodingState]):
# similar_records overlap). Collect one action per record_id, first wins.
# Also build a map from record_id to the original MemoryRecord for updates.
dedup_deletes: set[str] = set() # record_ids to delete
dedup_updates: dict[str, tuple[int, str]] = {} # record_id -> (item_idx, new_content)
dedup_updates: dict[
str, tuple[int, str]
] = {} # record_id -> (item_idx, new_content)
all_similar: dict[str, MemoryRecord] = {} # record_id -> MemoryRecord
for i, item in enumerate(items):
@@ -350,13 +380,24 @@ class EncodingFlow(Flow[EncodingState]):
all_similar[r.id] = r
for action in item.plan.actions:
rid = action.record_id
if action.action == "delete" and rid not in dedup_deletes and rid not in dedup_updates:
if (
action.action == "delete"
and rid not in dedup_deletes
and rid not in dedup_updates
):
dedup_deletes.add(rid)
elif action.action == "update" and action.new_content and rid not in dedup_deletes and rid not in dedup_updates:
elif (
action.action == "update"
and action.new_content
and rid not in dedup_deletes
and rid not in dedup_updates
):
dedup_updates[rid] = (i, action.new_content)
# --- Batch re-embed all update contents in ONE call ---
update_list = list(dedup_updates.items()) # [(record_id, (item_idx, new_content)), ...]
update_list = list(
dedup_updates.items()
) # [(record_id, (item_idx, new_content)), ...]
update_embeddings: list[list[float]] = []
if update_list:
update_contents = [content for _, (_, content) in update_list]
@@ -377,51 +418,52 @@ class EncodingFlow(Flow[EncodingState]):
if item.dropped or item.plan is None:
continue
if item.plan.insert_new:
to_insert.append((i, MemoryRecord(
content=item.content,
scope=item.resolved_scope,
categories=item.resolved_categories,
metadata=item.resolved_metadata,
importance=item.resolved_importance,
embedding=item.embedding if item.embedding else None,
source=item.resolved_source,
private=item.resolved_private,
)))
# All storage mutations under one lock so no other pipeline can
# interleave and cause version conflicts. The lock is reentrant
# (RLock) so the individual storage methods re-acquire it safely.
updated_records: dict[str, MemoryRecord] = {}
with self._storage.write_lock:
if dedup_deletes:
self._storage.delete(record_ids=list(dedup_deletes))
self.state.records_deleted += len(dedup_deletes)
for rid, (_item_idx, new_content) in dedup_updates.items():
existing = all_similar.get(rid)
if existing is not None:
new_emb = update_emb_map.get(rid, [])
updated = MemoryRecord(
id=existing.id,
content=new_content,
scope=existing.scope,
categories=existing.categories,
metadata=existing.metadata,
importance=existing.importance,
created_at=existing.created_at,
last_accessed=now,
embedding=new_emb if new_emb else existing.embedding,
to_insert.append(
(
i,
MemoryRecord(
content=item.content,
scope=item.resolved_scope,
categories=item.resolved_categories,
metadata=item.resolved_metadata,
importance=item.resolved_importance,
embedding=item.embedding if item.embedding else None,
source=item.resolved_source,
private=item.resolved_private,
),
)
self._storage.update(updated)
self.state.records_updated += 1
updated_records[rid] = updated
)
if to_insert:
records = [r for _, r in to_insert]
self._storage.save(records)
self.state.records_inserted += len(records)
for idx, record in to_insert:
items[idx].result_record = record
updated_records: dict[str, MemoryRecord] = {}
if dedup_deletes:
self._storage.delete(record_ids=list(dedup_deletes))
self.state.records_deleted += len(dedup_deletes)
for rid, (_item_idx, new_content) in dedup_updates.items():
existing = all_similar.get(rid)
if existing is not None:
new_emb = update_emb_map.get(rid, [])
updated = MemoryRecord(
id=existing.id,
content=new_content,
scope=existing.scope,
categories=existing.categories,
metadata=existing.metadata,
importance=existing.importance,
created_at=existing.created_at,
last_accessed=now,
embedding=new_emb if new_emb else existing.embedding,
)
self._storage.update(updated)
self.state.records_updated += 1
updated_records[rid] = updated
if to_insert:
records = [r for _, r in to_insert]
self._storage.save(records)
self.state.records_inserted += len(records)
for idx, record in to_insert:
items[idx].result_record = record
# Set result_record for non-insert items (after lock, using updated_records)
for _i, item in enumerate(items):

View File

@@ -3,11 +3,9 @@
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING, Any
from typing import Any, Literal
if TYPE_CHECKING:
from crewai.memory.unified_memory import Memory
from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator
from crewai.memory.types import (
_RECALL_OVERSAMPLE_FACTOR,
@@ -15,22 +13,38 @@ from crewai.memory.types import (
MemoryRecord,
ScopeInfo,
)
from crewai.memory.unified_memory import Memory
class MemoryScope:
class MemoryScope(BaseModel):
"""View of Memory restricted to a root path. All operations are scoped under that path."""
def __init__(self, memory: Memory, root_path: str) -> None:
"""Initialize scope.
model_config = ConfigDict(arbitrary_types_allowed=True)
Args:
memory: The underlying Memory instance.
root_path: Root path for this scope (e.g. /agent/1).
"""
self._memory = memory
self._root = root_path.rstrip("/") or ""
if self._root and not self._root.startswith("/"):
self._root = "/" + self._root
root_path: str = Field(default="/")
_memory: Memory = PrivateAttr()
_root: str = PrivateAttr()
@model_validator(mode="wrap")
@classmethod
def _accept_memory(cls, data: Any, handler: Any) -> MemoryScope:
"""Extract memory dependency and normalize root path before validation."""
if isinstance(data, MemoryScope):
return data
memory = data.pop("memory")
instance: MemoryScope = handler(data)
instance._memory = memory
root = instance.root_path.rstrip("/") or ""
if root and not root.startswith("/"):
root = "/" + root
instance._root = root
return instance
@property
def read_only(self) -> bool:
"""Whether the underlying memory is read-only."""
return self._memory.read_only
def _scope_path(self, scope: str | None) -> str:
if not scope or scope == "/":
@@ -52,7 +66,7 @@ class MemoryScope:
importance: float | None = None,
source: str | None = None,
private: bool = False,
) -> MemoryRecord:
) -> MemoryRecord | None:
"""Remember content; scope is relative to this scope's root."""
path = self._scope_path(scope)
return self._memory.remember(
@@ -71,7 +85,7 @@ class MemoryScope:
scope: str | None = None,
categories: list[str] | None = None,
limit: int = 10,
depth: str = "deep",
depth: Literal["shallow", "deep"] = "deep",
source: str | None = None,
include_private: bool = False,
) -> list[MemoryMatch]:
@@ -138,34 +152,34 @@ class MemoryScope:
"""Return a narrower scope under this scope."""
child = path.strip("/")
if not child:
return MemoryScope(self._memory, self._root or "/")
return MemoryScope(memory=self._memory, root_path=self._root or "/")
base = self._root.rstrip("/") or ""
new_root = f"{base}/{child}" if base else f"/{child}"
return MemoryScope(self._memory, new_root)
return MemoryScope(memory=self._memory, root_path=new_root)
class MemorySlice:
class MemorySlice(BaseModel):
"""View over multiple scopes: recall searches all, remember is a no-op when read_only."""
def __init__(
self,
memory: Memory,
scopes: list[str],
categories: list[str] | None = None,
read_only: bool = True,
) -> None:
"""Initialize slice.
model_config = ConfigDict(arbitrary_types_allowed=True)
Args:
memory: The underlying Memory instance.
scopes: List of scope paths to include.
categories: Optional category filter for recall.
read_only: If True, remember() is a silent no-op.
"""
self._memory = memory
self._scopes = [s.rstrip("/") or "/" for s in scopes]
self._categories = categories
self._read_only = read_only
scopes: list[str] = Field(default_factory=list)
categories: list[str] | None = Field(default=None)
read_only: bool = Field(default=True)
_memory: Memory = PrivateAttr()
@model_validator(mode="wrap")
@classmethod
def _accept_memory(cls, data: Any, handler: Any) -> MemorySlice:
"""Extract memory dependency and normalize scopes before validation."""
if isinstance(data, MemorySlice):
return data
memory = data.pop("memory")
data["scopes"] = [s.rstrip("/") or "/" for s in data.get("scopes", [])]
instance: MemorySlice = handler(data)
instance._memory = memory
return instance
def remember(
self,
@@ -178,7 +192,7 @@ class MemorySlice:
private: bool = False,
) -> MemoryRecord | None:
"""Remember into an explicit scope. No-op when read_only=True."""
if self._read_only:
if self.read_only:
return None
return self._memory.remember(
content,
@@ -196,14 +210,14 @@ class MemorySlice:
scope: str | None = None,
categories: list[str] | None = None,
limit: int = 10,
depth: str = "deep",
depth: Literal["shallow", "deep"] = "deep",
source: str | None = None,
include_private: bool = False,
) -> list[MemoryMatch]:
"""Recall across all slice scopes; results merged and re-ranked."""
cats = categories or self._categories
cats = categories or self.categories
all_matches: list[MemoryMatch] = []
for sc in self._scopes:
for sc in self.scopes:
matches = self._memory.recall(
query,
scope=sc,
@@ -231,7 +245,7 @@ class MemorySlice:
def list_scopes(self, path: str = "/") -> list[str]:
"""List scopes across all slice roots."""
out: list[str] = []
for sc in self._scopes:
for sc in self.scopes:
full = f"{sc.rstrip('/')}{path}" if sc != "/" else path
out.extend(self._memory.list_scopes(full))
return sorted(set(out))
@@ -243,15 +257,23 @@ class MemorySlice:
oldest: datetime | None = None
newest: datetime | None = None
children: list[str] = []
for sc in self._scopes:
for sc in self.scopes:
full = f"{sc.rstrip('/')}{path}" if sc != "/" else path
inf = self._memory.info(full)
total_records += inf.record_count
all_categories.update(inf.categories)
if inf.oldest_record:
oldest = inf.oldest_record if oldest is None else min(oldest, inf.oldest_record)
oldest = (
inf.oldest_record
if oldest is None
else min(oldest, inf.oldest_record)
)
if inf.newest_record:
newest = inf.newest_record if newest is None else max(newest, inf.newest_record)
newest = (
inf.newest_record
if newest is None
else max(newest, inf.newest_record)
)
children.extend(inf.child_scopes)
return ScopeInfo(
path=path,
@@ -265,7 +287,7 @@ class MemorySlice:
def list_categories(self, path: str | None = None) -> dict[str, int]:
"""Categories and counts across slice scopes."""
counts: dict[str, int] = {}
for sc in self._scopes:
for sc in self.scopes:
full = (f"{sc.rstrip('/')}{path}" if sc != "/" else path) if path else sc
for k, v in self._memory.list_categories(full).items():
counts[k] = counts.get(k, 0) + v
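The `model_validator(mode="wrap")` pattern used for both `MemoryScope` and `MemorySlice` (pop a non-serializable dependency out of the input, let pydantic validate the remaining fields, then stash the dependency on a `PrivateAttr`) can be reduced to a minimal example. `ScopedView` and `backend` are hypothetical names standing in for the real classes:

```python
from typing import Any

from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator


class ScopedView(BaseModel):
    """Hypothetical stand-in for MemoryScope: wraps a backend pydantic can't validate."""

    model_config = ConfigDict(arbitrary_types_allowed=True)

    root_path: str = Field(default="/")
    _backend: Any = PrivateAttr()

    @model_validator(mode="wrap")
    @classmethod
    def _accept_backend(cls, data: Any, handler: Any) -> "ScopedView":
        if isinstance(data, ScopedView):
            return data  # already validated, pass through unchanged
        backend = data.pop("backend")  # not a declared field: remove pre-validation
        instance: ScopedView = handler(data)  # validate the remaining fields
        instance._backend = backend  # PrivateAttr set after validation
        return instance
```

The wrap mode is what makes this work: the validator sees the raw input dict before field validation, so the extra `backend` key can be removed instead of triggering an unexpected-field error.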

View File

@@ -11,6 +11,7 @@ Implements adaptive-depth retrieval with:
from __future__ import annotations
from concurrent.futures import ThreadPoolExecutor, as_completed
import contextvars
from datetime import datetime
from typing import Any
from uuid import uuid4
@@ -103,13 +104,12 @@ class RecallFlow(Flow[RecallState]):
)
# Post-filter by time cutoff
if self.state.time_cutoff and raw:
raw = [
(r, s) for r, s in raw if r.created_at >= self.state.time_cutoff
]
raw = [(r, s) for r, s in raw if r.created_at >= self.state.time_cutoff]
# Privacy filter
if not self.state.include_private and raw:
raw = [
(r, s) for r, s in raw
(r, s)
for r, s in raw
if not r.private or r.source == self.state.source
]
return scope, raw
@@ -130,15 +130,20 @@ class RecallFlow(Flow[RecallState]):
top_composite, _ = compute_composite_score(
results[0][0], results[0][1], self._config
)
findings.append({
"scope": scope,
"results": results,
"top_score": top_composite,
})
findings.append(
{
"scope": scope,
"results": results,
"top_score": top_composite,
}
)
else:
with ThreadPoolExecutor(max_workers=min(len(tasks), 4)) as pool:
futures = {
pool.submit(_search_one, emb, sc): (emb, sc)
pool.submit(contextvars.copy_context().run, _search_one, emb, sc): (
emb,
sc,
)
for emb, sc in tasks
}
for future in as_completed(futures):
@@ -147,16 +152,16 @@ class RecallFlow(Flow[RecallState]):
top_composite, _ = compute_composite_score(
results[0][0], results[0][1], self._config
)
findings.append({
"scope": scope,
"results": results,
"top_score": top_composite,
})
findings.append(
{
"scope": scope,
"results": results,
"top_score": top_composite,
}
)
self.state.chunk_findings = findings
self.state.confidence = max(
(f["top_score"] for f in findings), default=0.0
)
self.state.confidence = max((f["top_score"] for f in findings), default=0.0)
return findings
# ------------------------------------------------------------------
@@ -210,12 +215,16 @@ class RecallFlow(Flow[RecallState]):
# Parse time_filter into a datetime cutoff
if analysis.time_filter:
try:
self.state.time_cutoff = datetime.fromisoformat(analysis.time_filter)
self.state.time_cutoff = datetime.fromisoformat(
analysis.time_filter
)
except ValueError:
pass
# Batch-embed all sub-queries in ONE call
queries = analysis.recall_queries if analysis.recall_queries else [self.state.query]
queries = (
analysis.recall_queries if analysis.recall_queries else [self.state.query]
)
queries = queries[:3]
embeddings = embed_texts(self._embedder, queries)
pairs: list[tuple[str, list[float]]] = [
@@ -296,17 +305,21 @@ class RecallFlow(Flow[RecallState]):
response = self._llm.call([{"role": "user", "content": prompt}])
if isinstance(response, str) and "missing" in response.lower():
self.state.evidence_gaps.append(response[:200])
enhanced.append({
"scope": finding["scope"],
"extraction": response,
"results": finding["results"],
})
enhanced.append(
{
"scope": finding["scope"],
"extraction": response,
"results": finding["results"],
}
)
except Exception:
enhanced.append({
"scope": finding["scope"],
"extraction": "",
"results": finding["results"],
})
enhanced.append(
{
"scope": finding["scope"],
"extraction": "",
"results": finding["results"],
}
)
self.state.chunk_findings = enhanced
return enhanced
@@ -318,7 +331,7 @@ class RecallFlow(Flow[RecallState]):
@router(re_search)
def re_decide_depth(self) -> str:
"""Re-evaluate depth after re-search. Same logic as decide_depth."""
return self.decide_depth()
return self.decide_depth() # type: ignore[call-arg]
@listen("synthesize")
def synthesize_results(self) -> list[MemoryMatch]:

View File

@@ -1,5 +1,6 @@
import json
import logging
import os
from pathlib import Path
import sqlite3
from typing import Any
@@ -8,6 +9,7 @@ from crewai.task import Task
from crewai.utilities import Printer
from crewai.utilities.crew_json_encoder import CrewJSONEncoder
from crewai.utilities.errors import DatabaseError, DatabaseOperationError
from crewai.utilities.lock_store import lock as store_lock
from crewai.utilities.paths import db_storage_path
@@ -24,6 +26,7 @@ class KickoffTaskOutputsSQLiteStorage:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "latest_kickoff_task_outputs.db")
self.db_path = db_path
self._lock_name = f"sqlite:{os.path.realpath(self.db_path)}"
self._printer: Printer = Printer()
self._initialize_db()
@@ -38,23 +41,25 @@ class KickoffTaskOutputsSQLiteStorage:
DatabaseOperationError: If database initialization fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
with store_lock(self._lock_name):
with sqlite3.connect(self.db_path, timeout=30) as conn:
conn.execute("PRAGMA journal_mode=WAL")
cursor = conn.cursor()
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS latest_kickoff_task_outputs (
task_id TEXT PRIMARY KEY,
expected_output TEXT,
output JSON,
task_index INTEGER,
inputs JSON,
was_replayed BOOLEAN,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
)
"""
CREATE TABLE IF NOT EXISTS latest_kickoff_task_outputs (
task_id TEXT PRIMARY KEY,
expected_output TEXT,
output JSON,
task_index INTEGER,
inputs JSON,
was_replayed BOOLEAN,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
)
"""
)
conn.commit()
conn.commit()
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.INIT_ERROR, e)
logger.error(error_msg)
@@ -82,25 +87,26 @@ class KickoffTaskOutputsSQLiteStorage:
"""
inputs = inputs or {}
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute(
"""
INSERT OR REPLACE INTO latest_kickoff_task_outputs
(task_id, expected_output, output, task_index, inputs, was_replayed)
VALUES (?, ?, ?, ?, ?, ?)
""",
(
str(task.id),
task.expected_output,
json.dumps(output, cls=CrewJSONEncoder),
task_index,
json.dumps(inputs, cls=CrewJSONEncoder),
was_replayed,
),
)
conn.commit()
with store_lock(self._lock_name):
with sqlite3.connect(self.db_path, timeout=30) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute(
"""
INSERT OR REPLACE INTO latest_kickoff_task_outputs
(task_id, expected_output, output, task_index, inputs, was_replayed)
VALUES (?, ?, ?, ?, ?, ?)
""",
(
str(task.id),
task.expected_output,
json.dumps(output, cls=CrewJSONEncoder),
task_index,
json.dumps(inputs, cls=CrewJSONEncoder),
was_replayed,
),
)
conn.commit()
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.SAVE_ERROR, e)
logger.error(error_msg)
@@ -125,30 +131,31 @@ class KickoffTaskOutputsSQLiteStorage:
DatabaseOperationError: If updating the task output fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
with store_lock(self._lock_name):
with sqlite3.connect(self.db_path, timeout=30) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
fields = []
values = []
for key, value in kwargs.items():
fields.append(f"{key} = ?")
values.append(
json.dumps(value, cls=CrewJSONEncoder)
if isinstance(value, dict)
else value
)
fields = []
values = []
for key, value in kwargs.items():
fields.append(f"{key} = ?")
values.append(
json.dumps(value, cls=CrewJSONEncoder)
if isinstance(value, dict)
else value
)
query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?" # nosec # noqa: S608
values.append(task_index)
query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?" # nosec # noqa: S608
values.append(task_index)
cursor.execute(query, tuple(values))
conn.commit()
cursor.execute(query, tuple(values))
conn.commit()
if cursor.rowcount == 0:
logger.warning(
f"No row found with task_index {task_index}. No update performed."
)
if cursor.rowcount == 0:
logger.warning(
f"No row found with task_index {task_index}. No update performed."
)
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.UPDATE_ERROR, e)
logger.error(error_msg)
@@ -166,7 +173,7 @@ class KickoffTaskOutputsSQLiteStorage:
DatabaseOperationError: If loading task outputs fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
with sqlite3.connect(self.db_path, timeout=30) as conn:
cursor = conn.cursor()
cursor.execute("""
SELECT *
@@ -205,11 +212,12 @@ class KickoffTaskOutputsSQLiteStorage:
DatabaseOperationError: If deleting task outputs fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute("DELETE FROM latest_kickoff_task_outputs")
conn.commit()
with store_lock(self._lock_name):
with sqlite3.connect(self.db_path, timeout=30) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute("DELETE FROM latest_kickoff_task_outputs")
conn.commit()
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.DELETE_ERROR, e)
logger.error(error_msg)

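The pattern in the hunks above — a cross-process lock wrapping `sqlite3.connect(..., timeout=30)` plus an explicit transaction — can be sketched standalone. The `store_lock` stand-in, the `outputs` table, and the column names here are illustrative assumptions, not the project's real `crewai.utilities.lock_store` helper or schema:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def store_lock(name: str):
    """Stand-in for the cross-process lock; a real one would lock a file."""
    yield  # no-op for illustration

def update_rows(db_path: str, task_index: int, **kwargs) -> int:
    """Update columns for one task_index, serialized across processes."""
    with store_lock(f"sqlite:{db_path}"):
        # timeout=30 makes a second writer wait up to 30s on a busy
        # database instead of failing immediately with SQLITE_BUSY.
        with sqlite3.connect(db_path, timeout=30) as conn:
            conn.execute("BEGIN TRANSACTION")
            cursor = conn.cursor()
            fields = ", ".join(f"{key} = ?" for key in kwargs)
            values = [*kwargs.values(), task_index]
            cursor.execute(
                f"UPDATE outputs SET {fields} WHERE task_index = ?",  # noqa: S608
                values,
            )
            conn.commit()
            return cursor.rowcount
```

The lock serializes writers from different processes; the SQLite timeout is a second line of defense for writers that bypass the lock.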

@@ -2,6 +2,7 @@
from __future__ import annotations
import contextvars
from datetime import datetime
import json
import logging
@@ -9,11 +10,12 @@ import os
from pathlib import Path
import threading
import time
from typing import Any, ClassVar
from typing import Any
import lancedb
import lancedb # type: ignore[import-untyped]
from crewai.memory.types import MemoryRecord, ScopeInfo
from crewai.utilities.lock_store import lock as store_lock
_logger = logging.getLogger(__name__)
@@ -39,15 +41,6 @@ _RETRY_BASE_DELAY = 0.2 # seconds; doubles on each retry
class LanceDBStorage:
"""LanceDB-backed storage for the unified memory system."""
# Class-level registry: maps resolved database path -> shared write lock.
# When multiple Memory instances (e.g. agent + crew) independently create
# LanceDBStorage pointing at the same directory, they share one lock so
# their writes don't conflict.
# Uses RLock (reentrant) so callers can hold the lock for a batch of
# operations while the individual methods re-acquire it without deadlocking.
_path_locks: ClassVar[dict[str, threading.RLock]] = {}
_path_locks_guard: ClassVar[threading.Lock] = threading.Lock()
def __init__(
self,
path: str | Path | None = None,
@@ -83,39 +76,19 @@ class LanceDBStorage:
self._table_name = table_name
self._db = lancedb.connect(str(self._path))
# On macOS and Linux the default per-process open-file limit is 256.
# A LanceDB table stores one file per fragment (one fragment per save()
# call by default). With hundreds of fragments, a single full-table
# scan opens all of them simultaneously, exhausting the limit.
# Raise it proactively so scans on large tables never hit OS error 24.
try:
import resource
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < 4096:
resource.setrlimit(resource.RLIMIT_NOFILE, (min(hard, 4096), hard))
except Exception: # noqa: S110
pass # Windows or already at the max hard limit — safe to ignore
self._compact_every = compact_every
self._save_count = 0
# Get or create a shared write lock for this database path.
resolved = str(self._path.resolve())
with LanceDBStorage._path_locks_guard:
if resolved not in LanceDBStorage._path_locks:
LanceDBStorage._path_locks[resolved] = threading.RLock()
self._write_lock = LanceDBStorage._path_locks[resolved]
self._lock_name = f"lancedb:{self._path.resolve()}"
# Try to open an existing table and infer dimension from its schema.
# If no table exists yet, defer creation until the first save so the
# dimension can be auto-detected from the embedder's actual output.
try:
self._table: lancedb.table.Table | None = self._db.open_table(self._table_name)
self._table: Any = self._db.open_table(self._table_name)
self._vector_dim: int = self._infer_dim_from_table(self._table)
# Best-effort: create the scope index if it doesn't exist yet.
self._ensure_scope_index()
# Compact in the background if the table has accumulated many
# fragments from previous runs (each save() creates one).
with store_lock(self._lock_name):
self._ensure_scope_index()
self._compact_if_needed()
except Exception:
self._table = None
@@ -124,43 +97,25 @@ class LanceDBStorage:
# Explicit dim provided: create the table immediately if it doesn't exist.
if self._table is None and vector_dim is not None:
self._vector_dim = vector_dim
self._table = self._create_table(vector_dim)
@property
def write_lock(self) -> threading.RLock:
"""The shared reentrant write lock for this database path.
Callers can acquire this to hold the lock across multiple storage
operations (e.g. delete + update + save as one atomic batch).
Individual methods also acquire it internally, but since it's
reentrant (RLock), the same thread won't deadlock.
"""
return self._write_lock
with store_lock(self._lock_name):
self._table = self._create_table(vector_dim)
@staticmethod
def _infer_dim_from_table(table: lancedb.table.Table) -> int:
def _infer_dim_from_table(table: Any) -> int:
"""Read vector dimension from an existing table's schema."""
schema = table.schema
for field in schema:
if field.name == "vector":
try:
return field.type.list_size
return int(field.type.list_size)
except Exception:
break
return DEFAULT_VECTOR_DIM
def _retry_write(self, op: str, *args: Any, **kwargs: Any) -> Any:
"""Execute a table operation with retry on LanceDB commit conflicts.
def _do_write(self, op: str, *args: Any, **kwargs: Any) -> Any:
"""Execute a single table write with retry on commit conflicts.
Args:
op: Method name on the table object (e.g. "add", "delete").
*args, **kwargs: Passed to the table method.
LanceDB uses optimistic concurrency: if two transactions overlap,
the second to commit fails with an ``OSError`` containing
"Commit conflict". This helper retries with exponential backoff,
refreshing the table reference before each retry so the retried
call uses the latest committed version (not a stale reference).
Caller must already hold ``store_lock(self._lock_name)``.
"""
delay = _RETRY_BASE_DELAY
for attempt in range(_MAX_RETRIES + 1):
@@ -171,20 +126,24 @@ class LanceDBStorage:
raise
_logger.debug(
"LanceDB commit conflict on %s (attempt %d/%d), retrying in %.1fs",
op, attempt + 1, _MAX_RETRIES, delay,
op,
attempt + 1,
_MAX_RETRIES,
delay,
)
# Refresh table to pick up the latest version before retrying.
# The next getattr(self._table, op) will use the fresh table.
try:
self._table = self._db.open_table(self._table_name)
except Exception: # noqa: S110
pass # table refresh is best-effort
pass
time.sleep(delay)
delay *= 2
return None # unreachable, but satisfies type checker
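The `_do_write` retry loop above handles LanceDB's optimistic concurrency: a losing commit raises an `OSError` containing "Commit conflict", and the write is retried with exponential backoff. A minimal standalone sketch of that retry policy (constants and the error-matching string mirror the diff, but the helper name is hypothetical):

```python
import time

_MAX_RETRIES = 3
_RETRY_BASE_DELAY = 0.01  # seconds; doubles on each retry

def retry_on_conflict(fn, *args, **kwargs):
    """Call fn, retrying on 'Commit conflict' OSError with backoff."""
    delay = _RETRY_BASE_DELAY
    for attempt in range(_MAX_RETRIES + 1):
        try:
            return fn(*args, **kwargs)
        except OSError as e:
            # Re-raise anything that isn't a commit conflict, and give
            # up once the retry budget is exhausted.
            if "Commit conflict" not in str(e) or attempt == _MAX_RETRIES:
                raise
            time.sleep(delay)
            delay *= 2
    return None  # unreachable, but satisfies type checkers
```

In the real storage class the retry additionally reopens the table before each attempt so the retried write commits against the latest version rather than a stale reference.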
def _create_table(self, vector_dim: int) -> lancedb.table.Table:
"""Create a new table with the given vector dimension."""
def _create_table(self, vector_dim: int) -> Any:
"""Create a new table with the given vector dimension.
Caller must already hold ``store_lock(self._lock_name)``.
"""
placeholder = [
{
"id": "__schema_placeholder__",
@@ -200,8 +159,12 @@ class LanceDBStorage:
"vector": [0.0] * vector_dim,
}
]
table = self._db.create_table(self._table_name, placeholder)
table.delete("id = '__schema_placeholder__'")
try:
table = self._db.create_table(self._table_name, placeholder)
except ValueError:
table = self._db.open_table(self._table_name)
else:
table.delete("id = '__schema_placeholder__'")
return table
def _ensure_scope_index(self) -> None:
@@ -238,8 +201,10 @@ class LanceDBStorage:
def _compact_async(self) -> None:
"""Fire-and-forget: compact the table in a daemon background thread."""
ctx = contextvars.copy_context()
threading.Thread(
target=self._compact_safe,
target=ctx.run,
args=(self._compact_safe,),
daemon=True,
name="lancedb-compact",
).start()
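The `ctx.run` indirection above exists because `threading.Thread` does not inherit the parent's `contextvars.Context`, so request-scoped state (trace IDs, OTel spans) is silently dropped in the child thread. A self-contained sketch of the fix, with a hypothetical `request_id` variable standing in for that state:

```python
import contextvars
import threading

request_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "request_id", default="unset"
)

def run_with_context(target, *args) -> threading.Thread:
    """Spawn a thread that sees the caller's ContextVar values."""
    ctx = contextvars.copy_context()
    # Without ctx.run, the thread starts from a fresh context and
    # request_id.get() inside `target` would return "unset".
    t = threading.Thread(target=ctx.run, args=(target, *args), daemon=True)
    t.start()
    return t
```

`copy_context()` snapshots the caller's context at spawn time; later `set()` calls in the parent are not visible to the child.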
@@ -248,13 +213,13 @@ class LanceDBStorage:
"""Run ``table.optimize()`` in a background thread, absorbing errors."""
try:
if self._table is not None:
self._table.optimize()
# Refresh the scope index so new fragments are covered.
self._ensure_scope_index()
with store_lock(self._lock_name):
self._table.optimize()
self._ensure_scope_index()
except Exception:
_logger.debug("LanceDB background compaction failed", exc_info=True)
def _ensure_table(self, vector_dim: int | None = None) -> lancedb.table.Table:
def _ensure_table(self, vector_dim: int | None = None) -> Any:
"""Return the table, creating it lazily if needed.
Args:
@@ -280,7 +245,9 @@ class LanceDBStorage:
"last_accessed": record.last_accessed.isoformat(),
"source": record.source or "",
"private": record.private,
"vector": record.embedding if record.embedding else [0.0] * self._vector_dim,
"vector": record.embedding
if record.embedding
else [0.0] * self._vector_dim,
}
def _row_to_record(self, row: dict[str, Any]) -> MemoryRecord:
@@ -296,7 +263,9 @@ class LanceDBStorage:
id=str(row["id"]),
content=str(row["content"]),
scope=str(row["scope"]),
categories=json.loads(row["categories_str"]) if row.get("categories_str") else [],
categories=json.loads(row["categories_str"])
if row.get("categories_str")
else [],
metadata=json.loads(row["metadata_str"]) if row.get("metadata_str") else {},
importance=float(row.get("importance", 0.5)),
created_at=_parse_dt(row.get("created_at")),
@@ -316,16 +285,15 @@ class LanceDBStorage:
dim = len(r.embedding)
break
is_new_table = self._table is None
with self._write_lock:
with store_lock(self._lock_name):
self._ensure_table(vector_dim=dim)
rows = [self._record_to_row(r) for r in records]
for r in rows:
if r["vector"] is None or len(r["vector"]) != self._vector_dim:
r["vector"] = [0.0] * self._vector_dim
self._retry_write("add", rows)
# Create the scope index on the first save so it covers the initial dataset.
if is_new_table:
self._ensure_scope_index()
rows = [self._record_to_row(rec) for rec in records]
for row in rows:
if row["vector"] is None or len(row["vector"]) != self._vector_dim:
row["vector"] = [0.0] * self._vector_dim
self._do_write("add", rows)
if is_new_table:
self._ensure_scope_index()
# Auto-compact every N saves so fragment files don't pile up.
self._save_count += 1
if self._compact_every > 0 and self._save_count % self._compact_every == 0:
@@ -333,14 +301,14 @@ class LanceDBStorage:
def update(self, record: MemoryRecord) -> None:
"""Update a record by ID. Preserves created_at, updates last_accessed."""
with self._write_lock:
with store_lock(self._lock_name):
self._ensure_table()
safe_id = str(record.id).replace("'", "''")
self._retry_write("delete", f"id = '{safe_id}'")
self._do_write("delete", f"id = '{safe_id}'")
row = self._record_to_row(record)
if row["vector"] is None or len(row["vector"]) != self._vector_dim:
row["vector"] = [0.0] * self._vector_dim
self._retry_write("add", [row])
self._do_write("add", [row])
def touch_records(self, record_ids: list[str]) -> None:
"""Update last_accessed to now for the given record IDs.
@@ -354,11 +322,11 @@ class LanceDBStorage:
"""
if not record_ids or self._table is None:
return
with self._write_lock:
with store_lock(self._lock_name):
now = datetime.utcnow().isoformat()
safe_ids = [str(rid).replace("'", "''") for rid in record_ids]
ids_expr = ", ".join(f"'{rid}'" for rid in safe_ids)
self._retry_write(
self._do_write(
"update",
where=f"id IN ({ids_expr})",
values={"last_accessed": now},
@@ -368,11 +336,12 @@ class LanceDBStorage:
"""Return a single record by ID, or None if not found."""
if self._table is None:
return None
safe_id = str(record_id).replace("'", "''")
rows = self._table.search().where(f"id = '{safe_id}'").limit(1).to_list()
if not rows:
return None
return self._row_to_record(rows[0])
with store_lock(self._lock_name):
safe_id = str(record_id).replace("'", "''")
rows = self._table.search().where(f"id = '{safe_id}'").limit(1).to_list()
if not rows:
return None
return self._row_to_record(rows[0])
def search(
self,
@@ -385,18 +354,23 @@ class LanceDBStorage:
) -> list[tuple[MemoryRecord, float]]:
if self._table is None:
return []
query = self._table.search(query_embedding)
if scope_prefix is not None and scope_prefix.strip("/"):
prefix = scope_prefix.rstrip("/")
like_val = prefix + "%"
query = query.where(f"scope LIKE '{like_val}'")
results = query.limit(limit * 3 if (categories or metadata_filter) else limit).to_list()
with store_lock(self._lock_name):
query = self._table.search(query_embedding)
if scope_prefix is not None and scope_prefix.strip("/"):
prefix = scope_prefix.rstrip("/")
like_val = prefix + "%"
query = query.where(f"scope LIKE '{like_val}'")
results = query.limit(
limit * 3 if (categories or metadata_filter) else limit
).to_list()
out: list[tuple[MemoryRecord, float]] = []
for row in results:
record = self._row_to_record(row)
if categories and not any(c in record.categories for c in categories):
continue
if metadata_filter and not all(record.metadata.get(k) == v for k, v in metadata_filter.items()):
if metadata_filter and not all(
record.metadata.get(k) == v for k, v in metadata_filter.items()
):
continue
distance = row.get("_distance", 0.0)
score = 1.0 / (1.0 + float(distance)) if distance is not None else 1.0
@@ -416,30 +390,34 @@ class LanceDBStorage:
) -> int:
if self._table is None:
return 0
with self._write_lock:
with store_lock(self._lock_name):
if record_ids and not (categories or metadata_filter):
before = self._table.count_rows()
before = int(self._table.count_rows())
ids_expr = ", ".join(f"'{rid}'" for rid in record_ids)
self._retry_write("delete", f"id IN ({ids_expr})")
return before - self._table.count_rows()
self._do_write("delete", f"id IN ({ids_expr})")
return before - int(self._table.count_rows())
if categories or metadata_filter:
rows = self._scan_rows(scope_prefix)
to_delete: list[str] = []
for row in rows:
record = self._row_to_record(row)
if categories and not any(c in record.categories for c in categories):
if categories and not any(
c in record.categories for c in categories
):
continue
if metadata_filter and not all(record.metadata.get(k) == v for k, v in metadata_filter.items()):
if metadata_filter and not all(
record.metadata.get(k) == v for k, v in metadata_filter.items()
):
continue
if older_than and record.created_at >= older_than:
continue
to_delete.append(record.id)
if not to_delete:
return 0
before = self._table.count_rows()
before = int(self._table.count_rows())
ids_expr = ", ".join(f"'{rid}'" for rid in to_delete)
self._retry_write("delete", f"id IN ({ids_expr})")
return before - self._table.count_rows()
self._do_write("delete", f"id IN ({ids_expr})")
return before - int(self._table.count_rows())
conditions = []
if scope_prefix is not None and scope_prefix.strip("/"):
prefix = scope_prefix.rstrip("/")
@@ -449,13 +427,13 @@ class LanceDBStorage:
if older_than is not None:
conditions.append(f"created_at < '{older_than.isoformat()}'")
if not conditions:
before = self._table.count_rows()
self._retry_write("delete", "id != ''")
return before - self._table.count_rows()
before = int(self._table.count_rows())
self._do_write("delete", "id != ''")
return before - int(self._table.count_rows())
where_expr = " AND ".join(conditions)
before = self._table.count_rows()
self._retry_write("delete", where_expr)
return before - self._table.count_rows()
before = int(self._table.count_rows())
self._do_write("delete", where_expr)
return before - int(self._table.count_rows())
def _scan_rows(
self,
@@ -468,6 +446,8 @@ class LanceDBStorage:
Uses a full table scan (no vector query) so the limit is applied after
the scope filter, not to ANN candidates before filtering.
Caller must hold ``store_lock(self._lock_name)``.
Args:
scope_prefix: Optional scope path prefix to filter by.
limit: Maximum number of rows to return (applied after filtering).
@@ -482,7 +462,8 @@ class LanceDBStorage:
q = q.where(f"scope LIKE '{scope_prefix.rstrip('/')}%'")
if columns is not None:
q = q.select(columns)
return q.limit(limit).to_list()
result: list[dict[str, Any]] = q.limit(limit).to_list()
return result
def list_records(
self, scope_prefix: str | None = None, limit: int = 200, offset: int = 0
@@ -497,7 +478,8 @@ class LanceDBStorage:
Returns:
List of MemoryRecord, ordered by created_at descending.
"""
rows = self._scan_rows(scope_prefix, limit=limit + offset)
with store_lock(self._lock_name):
rows = self._scan_rows(scope_prefix, limit=limit + offset)
records = [self._row_to_record(r) for r in rows]
records.sort(key=lambda r: r.created_at, reverse=True)
return records[offset : offset + limit]
@@ -507,10 +489,11 @@ class LanceDBStorage:
prefix = scope if scope != "/" else ""
if prefix and not prefix.startswith("/"):
prefix = "/" + prefix
rows = self._scan_rows(
prefix or None,
columns=["scope", "categories_str", "created_at"],
)
with store_lock(self._lock_name):
rows = self._scan_rows(
prefix or None,
columns=["scope", "categories_str", "created_at"],
)
if not rows:
return ScopeInfo(
path=scope or "/",
@@ -528,7 +511,7 @@ class LanceDBStorage:
for row in rows:
sc = str(row.get("scope", ""))
if child_prefix and sc.startswith(child_prefix):
rest = sc[len(child_prefix):]
rest = sc[len(child_prefix) :]
first_component = rest.split("/", 1)[0]
if first_component:
children.add(child_prefix + first_component)
@@ -539,7 +522,11 @@ class LanceDBStorage:
pass
created = row.get("created_at")
if created:
dt = datetime.fromisoformat(str(created).replace("Z", "+00:00")) if isinstance(created, str) else created
dt = (
datetime.fromisoformat(str(created).replace("Z", "+00:00"))
if isinstance(created, str)
else created
)
if isinstance(dt, datetime):
if oldest is None or dt < oldest:
oldest = dt
@@ -557,19 +544,21 @@ class LanceDBStorage:
def list_scopes(self, parent: str = "/") -> list[str]:
parent = parent.rstrip("/") or ""
prefix = (parent + "/") if parent else "/"
rows = self._scan_rows(prefix if prefix != "/" else None, columns=["scope"])
with store_lock(self._lock_name):
rows = self._scan_rows(prefix if prefix != "/" else None, columns=["scope"])
children: set[str] = set()
for row in rows:
sc = str(row.get("scope", ""))
if sc.startswith(prefix) and sc != (prefix.rstrip("/") or "/"):
rest = sc[len(prefix):]
rest = sc[len(prefix) :]
first_component = rest.split("/", 1)[0]
if first_component:
children.add(prefix + first_component)
return sorted(children)
def list_categories(self, scope_prefix: str | None = None) -> dict[str, int]:
rows = self._scan_rows(scope_prefix, columns=["categories_str"])
with store_lock(self._lock_name):
rows = self._scan_rows(scope_prefix, columns=["categories_str"])
counts: dict[str, int] = {}
for row in rows:
cat_str = row.get("categories_str") or "[]"
@@ -585,22 +574,25 @@ class LanceDBStorage:
if self._table is None:
return 0
if scope_prefix is None or scope_prefix.strip("/") == "":
return self._table.count_rows()
with store_lock(self._lock_name):
return int(self._table.count_rows())
info = self.get_scope_info(scope_prefix)
return info.record_count
def reset(self, scope_prefix: str | None = None) -> None:
if scope_prefix is None or scope_prefix.strip("/") == "":
if self._table is not None:
self._db.drop_table(self._table_name)
self._table = None
# Dimension is preserved; table will be recreated on next save.
return
if self._table is None:
return
prefix = scope_prefix.rstrip("/")
if prefix:
self._table.delete(f"scope >= '{prefix}' AND scope < '{prefix}/\uFFFF'")
with store_lock(self._lock_name):
if scope_prefix is None or scope_prefix.strip("/") == "":
if self._table is not None:
self._db.drop_table(self._table_name)
self._table = None
return
if self._table is None:
return
prefix = scope_prefix.rstrip("/")
if prefix:
self._do_write(
"delete", f"scope >= '{prefix}' AND scope < '{prefix}/\uffff'"
)
def optimize(self) -> None:
"""Compact the table synchronously and refresh the scope index.
@@ -614,8 +606,9 @@ class LanceDBStorage:
"""
if self._table is None:
return
self._table.optimize()
self._ensure_scope_index()
with store_lock(self._lock_name):
self._table.optimize()
self._ensure_scope_index()
async def asave(self, records: list[MemoryRecord]) -> None:
self.save(records)

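Throughout this file, `store_lock(self._lock_name)` provides a cross-process lock keyed by a name such as `lancedb:<path>`. The project's implementation isn't shown in this diff; one common way to build such a lock on POSIX is an advisory `fcntl.flock` on a name-derived file, sketched here as an assumption (note this sketch is POSIX-only and not reentrant):

```python
import fcntl  # POSIX-only; Windows would need msvcrt.locking instead
import hashlib
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def file_lock(name: str):
    """Cross-process advisory lock keyed by an arbitrary name."""
    digest = hashlib.sha256(name.encode()).hexdigest()[:16]
    path = os.path.join(tempfile.gettempdir(), f"lock-{digest}.lck")
    with open(path, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
```

Because the lock lives in the filesystem rather than in a per-process class registry, two processes (or two storage instances in one process) pointing at the same database path contend on the same lock — which is the point of the "single cross-process lock" change in this branch.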

@@ -3,10 +3,13 @@
from __future__ import annotations
from concurrent.futures import Future, ThreadPoolExecutor
import contextvars
from datetime import datetime
import threading
import time
from typing import TYPE_CHECKING, Any, Literal
from typing import TYPE_CHECKING, Annotated, Any, Literal
from pydantic import BaseModel, ConfigDict, Field, PlainValidator, PrivateAttr
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.memory_events import (
@@ -39,13 +42,18 @@ if TYPE_CHECKING:
)
def _passthrough(v: Any) -> Any:
"""PlainValidator that accepts any value, bypassing strict union discrimination."""
return v
def _default_embedder() -> OpenAIEmbeddingFunction:
"""Build default OpenAI embedder for memory."""
spec: OpenAIProviderSpec = {"provider": "openai", "config": {}}
return build_embedder(spec)
class Memory:
class Memory(BaseModel):
"""Unified memory: standalone, LLM-analyzed, with intelligent recall flow.
Works without agent/crew. Uses LLM to infer scope, categories, importance on save.
@@ -53,116 +61,119 @@ class Memory:
pluggable storage (LanceDB default).
"""
def __init__(
self,
llm: BaseLLM | str = "gpt-4o-mini",
storage: StorageBackend | str = "lancedb",
embedder: Any = None,
# -- Scoring weights --
# These three weights control how recall results are ranked.
# The composite score is: semantic_weight * similarity + recency_weight * decay + importance_weight * importance.
# They should sum to ~1.0 for intuitive scoring.
recency_weight: float = 0.3,
semantic_weight: float = 0.5,
importance_weight: float = 0.2,
# How quickly old memories lose relevance. The recency score halves every
# N days (exponential decay). Lower = faster forgetting; higher = longer relevance.
recency_half_life_days: int = 30,
# -- Consolidation --
# When remembering new content, if an existing record has similarity >= this
# threshold, the LLM is asked to merge/update/delete. Set to 1.0 to disable.
consolidation_threshold: float = 0.85,
# Max existing records to compare against when checking for consolidation.
consolidation_limit: int = 5,
# -- Save defaults --
# Importance assigned to new memories when no explicit value is given and
# the LLM analysis path is skipped (all fields provided by the caller).
default_importance: float = 0.5,
# -- Recall depth control --
# These thresholds govern the RecallFlow router that decides between
# returning results immediately ("synthesize") vs. doing an extra
# LLM-driven exploration round ("explore_deeper").
# confidence >= confidence_threshold_high => always synthesize
# confidence < confidence_threshold_low => explore deeper (if budget > 0)
# complex query + confidence < complex_query_threshold => explore deeper
confidence_threshold_high: float = 0.8,
confidence_threshold_low: float = 0.5,
complex_query_threshold: float = 0.7,
# How many LLM-driven exploration rounds the RecallFlow is allowed to run.
# 0 = always shallow (vector search only); higher = more thorough but slower.
exploration_budget: int = 1,
# Queries shorter than this skip LLM analysis (saving ~1-3s).
# Longer queries (full task descriptions) benefit from LLM distillation.
query_analysis_threshold: int = 200,
# When True, all write operations (remember, remember_many) are silently
# skipped. Useful for sharing a read-only view of memory across agents
# without any of them persisting new memories.
read_only: bool = False,
) -> None:
"""Initialize Memory.
model_config = ConfigDict(arbitrary_types_allowed=True)
Args:
llm: LLM for analysis (model name or BaseLLM instance).
storage: Backend: "lancedb" or a StorageBackend instance.
embedder: Embedding callable, provider config dict, or None (default OpenAI).
recency_weight: Weight for recency in the composite relevance score.
semantic_weight: Weight for semantic similarity in the composite relevance score.
importance_weight: Weight for importance in the composite relevance score.
recency_half_life_days: Recency score halves every N days (exponential decay).
consolidation_threshold: Similarity above which consolidation is triggered on save.
consolidation_limit: Max existing records to compare during consolidation.
default_importance: Default importance when not provided or inferred.
confidence_threshold_high: Recall confidence above which results are returned directly.
confidence_threshold_low: Recall confidence below which deeper exploration is triggered.
complex_query_threshold: For complex queries, explore deeper below this confidence.
exploration_budget: Number of LLM-driven exploration rounds during deep recall.
query_analysis_threshold: Queries shorter than this skip LLM analysis during deep recall.
read_only: If True, remember() and remember_many() are silent no-ops.
"""
self._read_only = read_only
llm: Annotated[BaseLLM | str, PlainValidator(_passthrough)] = Field(
default="gpt-4o-mini",
description="LLM for analysis (model name or BaseLLM instance).",
)
storage: Annotated[StorageBackend | str, PlainValidator(_passthrough)] = Field(
default="lancedb",
description="Storage backend instance or path string.",
)
embedder: Any = Field(
default=None,
description="Embedding callable, provider config dict, or None for default OpenAI.",
)
recency_weight: float = Field(
default=0.3,
description="Weight for recency in the composite relevance score.",
)
semantic_weight: float = Field(
default=0.5,
description="Weight for semantic similarity in the composite relevance score.",
)
importance_weight: float = Field(
default=0.2,
description="Weight for importance in the composite relevance score.",
)
recency_half_life_days: int = Field(
default=30,
description="Recency score halves every N days (exponential decay).",
)
consolidation_threshold: float = Field(
default=0.85,
description="Similarity above which consolidation is triggered on save.",
)
consolidation_limit: int = Field(
default=5,
description="Max existing records to compare during consolidation.",
)
default_importance: float = Field(
default=0.5,
description="Default importance when not provided or inferred.",
)
confidence_threshold_high: float = Field(
default=0.8,
description="Recall confidence above which results are returned directly.",
)
confidence_threshold_low: float = Field(
default=0.5,
description="Recall confidence below which deeper exploration is triggered.",
)
complex_query_threshold: float = Field(
default=0.7,
description="For complex queries, explore deeper below this confidence.",
)
exploration_budget: int = Field(
default=1,
description="Number of LLM-driven exploration rounds during deep recall.",
)
query_analysis_threshold: int = Field(
default=200,
description="Queries shorter than this skip LLM analysis during deep recall.",
)
read_only: bool = Field(
default=False,
description="If True, remember() and remember_many() are silent no-ops.",
)
_config: MemoryConfig = PrivateAttr()
_llm_instance: BaseLLM | None = PrivateAttr(default=None)
_embedder_instance: Any = PrivateAttr(default=None)
_storage: StorageBackend = PrivateAttr()
_save_pool: ThreadPoolExecutor = PrivateAttr(
default_factory=lambda: ThreadPoolExecutor(
max_workers=1, thread_name_prefix="memory-save"
)
)
_pending_saves: list[Future[Any]] = PrivateAttr(default_factory=list)
_pending_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
def model_post_init(self, __context: Any) -> None:
"""Initialize runtime state from field values."""
self._config = MemoryConfig(
recency_weight=recency_weight,
semantic_weight=semantic_weight,
importance_weight=importance_weight,
recency_half_life_days=recency_half_life_days,
consolidation_threshold=consolidation_threshold,
consolidation_limit=consolidation_limit,
default_importance=default_importance,
confidence_threshold_high=confidence_threshold_high,
confidence_threshold_low=confidence_threshold_low,
complex_query_threshold=complex_query_threshold,
exploration_budget=exploration_budget,
query_analysis_threshold=query_analysis_threshold,
recency_weight=self.recency_weight,
semantic_weight=self.semantic_weight,
importance_weight=self.importance_weight,
recency_half_life_days=self.recency_half_life_days,
consolidation_threshold=self.consolidation_threshold,
consolidation_limit=self.consolidation_limit,
default_importance=self.default_importance,
confidence_threshold_high=self.confidence_threshold_high,
confidence_threshold_low=self.confidence_threshold_low,
complex_query_threshold=self.complex_query_threshold,
exploration_budget=self.exploration_budget,
query_analysis_threshold=self.query_analysis_threshold,
)
# Store raw config for lazy initialization. LLM and embedder are only
# built on first access so that Memory() never fails at construction
# time (e.g. when auto-created by Flow without an API key set).
self._llm_config: BaseLLM | str = llm
self._llm_instance: BaseLLM | None = None if isinstance(llm, str) else llm
self._embedder_config: Any = embedder
self._embedder_instance: Any = (
embedder
if (embedder is not None and not isinstance(embedder, dict))
self._llm_instance = None if isinstance(self.llm, str) else self.llm
self._embedder_instance = (
self.embedder
if (self.embedder is not None and not isinstance(self.embedder, dict))
else None
)
if isinstance(storage, str):
if isinstance(self.storage, str):
from crewai.memory.storage.lancedb_storage import LanceDBStorage
self._storage = LanceDBStorage() if storage == "lancedb" else LanceDBStorage(path=storage)
self._storage = (
LanceDBStorage()
if self.storage == "lancedb"
else LanceDBStorage(path=self.storage)
)
else:
self._storage = storage
# Background save queue. max_workers=1 serializes saves to avoid
# concurrent storage mutations (two saves finding the same similar
# record and both trying to update/delete it). Within each save,
# the parallel LLM calls still run on their own thread pool.
self._save_pool = ThreadPoolExecutor(
max_workers=1, thread_name_prefix="memory-save"
)
self._pending_saves: list[Future[Any]] = []
self._pending_lock = threading.Lock()
self._storage = self.storage
_MEMORY_DOCS_URL = "https://docs.crewai.com/concepts/memory"
@@ -173,11 +184,7 @@ class Memory:
from crewai.llm import LLM
try:
model_name = (
self._llm_config
if isinstance(self._llm_config, str)
else str(self._llm_config)
)
model_name = self.llm if isinstance(self.llm, str) else str(self.llm)
self._llm_instance = LLM(model=model_name)
except Exception as e:
raise RuntimeError(
@@ -197,8 +204,8 @@ class Memory:
"""Lazy embedder initialization -- only created when first needed."""
if self._embedder_instance is None:
try:
if isinstance(self._embedder_config, dict):
self._embedder_instance = build_embedder(self._embedder_config)
if isinstance(self.embedder, dict):
self._embedder_instance = build_embedder(self.embedder)
else:
self._embedder_instance = _default_embedder()
except Exception as e:
@@ -223,8 +230,9 @@ class Memory:
If the pool has been shut down (e.g. after ``close()``), the save
runs synchronously as a fallback so late saves still succeed.
"""
ctx = contextvars.copy_context()
try:
future: Future[Any] = self._save_pool.submit(fn, *args, **kwargs)
future: Future[Any] = self._save_pool.submit(ctx.run, fn, *args, **kwargs)
except RuntimeError:
# Pool shut down -- run synchronously as fallback
future = Future()
@@ -356,7 +364,7 @@ class Memory:
Raises:
Exception: On save failure (events emitted).
"""
if self._read_only:
if self.read_only:
return None
_source_type = "unified_memory"
try:
@@ -444,7 +452,7 @@ class Memory:
Returns:
Empty list (records are not available until the background save completes).
"""
if not contents or self._read_only:
if not contents or self.read_only:
return []
self._submit_save(

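The `_submit_save` change above combines two behaviors: the saved task runs under the caller's `contextvars.Context` (via `pool.submit(ctx.run, fn, ...)`), and a pool that has already been shut down falls back to running the save synchronously. A standalone sketch, with a hypothetical `trace_id` variable standing in for request-scoped state:

```python
import contextvars
from concurrent.futures import Future, ThreadPoolExecutor

trace_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "trace_id", default="none"
)

def submit_with_context(pool: ThreadPoolExecutor, fn, *args) -> Future:
    """Submit fn so it sees the caller's ContextVars; sync fallback."""
    ctx = contextvars.copy_context()
    try:
        return pool.submit(ctx.run, fn, *args)
    except RuntimeError:
        # Pool already shut down: run synchronously so late work
        # still succeeds, and wrap the result in a completed Future.
        future: Future = Future()
        try:
            future.set_result(ctx.run(fn, *args))
        except Exception as e:
            future.set_exception(e)
        return future
```

Either path returns a `Future`, so callers can treat submitted and fallback work uniformly.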

@@ -4,6 +4,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable
import contextvars
from functools import wraps
import inspect
from typing import TYPE_CHECKING, Any, Concatenate, ParamSpec, TypeVar, overload
@@ -169,8 +170,9 @@ def _call_method(method: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
if loop and loop.is_running():
import concurrent.futures
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor() as pool:
return pool.submit(asyncio.run, result).result()
return pool.submit(ctx.run, asyncio.run, result).result()
return asyncio.run(result)
return result

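The two `_call_method`/`_resolve_result` hunks above implement the same pattern: when sync code receives a coroutine while an event loop is already running in the current thread, `asyncio.run` would raise, so the coroutine is run on a fresh loop in a worker thread — now with `ctx.run` so ContextVars survive the hop. A self-contained sketch of that dispatch:

```python
import asyncio
import concurrent.futures
import contextvars

def resolve(result):
    """Return result, awaiting it first if it is a coroutine.

    If an event loop is already running in this thread, asyncio.run()
    would raise RuntimeError, so the coroutine runs on a fresh loop in
    a worker thread, with the caller's ContextVars forwarded via ctx.run.
    """
    if not asyncio.iscoroutine(result):
        return result
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        loop = None
    if loop and loop.is_running():
        ctx = contextvars.copy_context()
        with concurrent.futures.ThreadPoolExecutor() as pool:
            return pool.submit(ctx.run, asyncio.run, result).result()
    return asyncio.run(result)
```

Note the caveat this pattern carries: the outer loop's thread blocks on `.result()` until the coroutine finishes, so it trades responsiveness for the ability to call sync-style APIs from async contexts.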

@@ -4,6 +4,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable
import contextvars
from functools import partial
import inspect
from pathlib import Path
@@ -146,8 +147,9 @@ def _resolve_result(result: Any) -> Any:
if loop and loop.is_running():
import concurrent.futures
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor() as pool:
return pool.submit(asyncio.run, result).result()
return pool.submit(ctx.run, asyncio.run, result).result()
return asyncio.run(result)
return result

View File

@@ -1,5 +1,8 @@
"""ChromaDB client implementation."""
import asyncio
from collections.abc import AsyncIterator
from contextlib import AbstractContextManager, asynccontextmanager, nullcontext
import logging
from typing import Any
@@ -29,6 +32,7 @@ from crewai.rag.core.base_client import (
BaseCollectionParams,
)
from crewai.rag.types import SearchResult
from crewai.utilities.lock_store import lock as store_lock
from crewai.utilities.logger_utils import suppress_logging
@@ -52,6 +56,7 @@ class ChromaDBClient(BaseClient):
default_limit: int = 5,
default_score_threshold: float = 0.6,
default_batch_size: int = 100,
lock_name: str = "",
) -> None:
"""Initialize ChromaDBClient with client and embedding function.
@@ -61,12 +66,32 @@ class ChromaDBClient(BaseClient):
default_limit: Default number of results to return in searches.
default_score_threshold: Default minimum score for search results.
default_batch_size: Default batch size for adding documents.
lock_name: Optional lock name for cross-process synchronization.
"""
self.client = client
self.embedding_function = embedding_function
self.default_limit = default_limit
self.default_score_threshold = default_score_threshold
self.default_batch_size = default_batch_size
self._lock_name = lock_name
def _locked(self) -> AbstractContextManager[None]:
"""Return a cross-process lock context manager, or nullcontext if no lock name."""
return store_lock(self._lock_name) if self._lock_name else nullcontext()
@asynccontextmanager
async def _alocked(self) -> AsyncIterator[None]:
"""Async cross-process lock that acquires/releases in an executor."""
if not self._lock_name:
yield
return
lock_cm = store_lock(self._lock_name)
loop = asyncio.get_event_loop()
await loop.run_in_executor(None, lock_cm.__enter__)
try:
yield
finally:
await loop.run_in_executor(None, lock_cm.__exit__, None, None, None)
def create_collection(
self, **kwargs: Unpack[ChromaDBCollectionCreateParams]
@@ -313,23 +338,24 @@ class ChromaDBClient(BaseClient):
if not documents:
raise ValueError("Documents list cannot be empty")
collection = self.client.get_or_create_collection(
name=_sanitize_collection_name(collection_name),
embedding_function=self.embedding_function,
)
prepared = _prepare_documents_for_chromadb(documents)
for i in range(0, len(prepared.ids), batch_size):
batch_ids, batch_texts, batch_metadatas = _create_batch_slice(
prepared=prepared, start_index=i, batch_size=batch_size
with self._locked():
collection = self.client.get_or_create_collection(
name=_sanitize_collection_name(collection_name),
embedding_function=self.embedding_function,
)
collection.upsert(
ids=batch_ids,
documents=batch_texts,
metadatas=batch_metadatas, # type: ignore[arg-type]
)
prepared = _prepare_documents_for_chromadb(documents)
for i in range(0, len(prepared.ids), batch_size):
batch_ids, batch_texts, batch_metadatas = _create_batch_slice(
prepared=prepared, start_index=i, batch_size=batch_size
)
collection.upsert(
ids=batch_ids,
documents=batch_texts,
metadatas=batch_metadatas, # type: ignore[arg-type]
)
async def aadd_documents(self, **kwargs: Unpack[BaseCollectionAddParams]) -> None:
"""Add documents with their embeddings to a collection asynchronously.
@@ -363,22 +389,23 @@ class ChromaDBClient(BaseClient):
if not documents:
raise ValueError("Documents list cannot be empty")
collection = await self.client.get_or_create_collection(
name=_sanitize_collection_name(collection_name),
embedding_function=self.embedding_function,
)
prepared = _prepare_documents_for_chromadb(documents)
for i in range(0, len(prepared.ids), batch_size):
batch_ids, batch_texts, batch_metadatas = _create_batch_slice(
prepared=prepared, start_index=i, batch_size=batch_size
async with self._alocked():
collection = await self.client.get_or_create_collection(
name=_sanitize_collection_name(collection_name),
embedding_function=self.embedding_function,
)
prepared = _prepare_documents_for_chromadb(documents)
await collection.upsert(
ids=batch_ids,
documents=batch_texts,
metadatas=batch_metadatas, # type: ignore[arg-type]
)
for i in range(0, len(prepared.ids), batch_size):
batch_ids, batch_texts, batch_metadatas = _create_batch_slice(
prepared=prepared, start_index=i, batch_size=batch_size
)
await collection.upsert(
ids=batch_ids,
documents=batch_texts,
metadatas=batch_metadatas, # type: ignore[arg-type]
)
def search(
self, **kwargs: Unpack[ChromaDBCollectionSearchParams]
@@ -419,29 +446,30 @@ class ChromaDBClient(BaseClient):
params = _extract_search_params(kwargs)
collection = self.client.get_or_create_collection(
name=_sanitize_collection_name(params.collection_name),
embedding_function=self.embedding_function,
)
where = params.where if params.where is not None else params.metadata_filter
with suppress_logging(
"chromadb.segment.impl.vector.local_persistent_hnsw", logging.ERROR
):
results: QueryResult = collection.query(
query_texts=[params.query],
n_results=params.limit,
where=where,
where_document=params.where_document,
include=params.include,
with self._locked():
collection = self.client.get_or_create_collection(
name=_sanitize_collection_name(params.collection_name),
embedding_function=self.embedding_function,
)
return _process_query_results(
collection=collection,
results=results,
params=params,
)
where = params.where if params.where is not None else params.metadata_filter
with suppress_logging(
"chromadb.segment.impl.vector.local_persistent_hnsw", logging.ERROR
):
results: QueryResult = collection.query(
query_texts=[params.query],
n_results=params.limit,
where=where,
where_document=params.where_document,
include=params.include,
)
return _process_query_results(
collection=collection,
results=results,
params=params,
)
async def asearch(
self, **kwargs: Unpack[ChromaDBCollectionSearchParams]
@@ -482,29 +510,30 @@ class ChromaDBClient(BaseClient):
params = _extract_search_params(kwargs)
collection = await self.client.get_or_create_collection(
name=_sanitize_collection_name(params.collection_name),
embedding_function=self.embedding_function,
)
where = params.where if params.where is not None else params.metadata_filter
with suppress_logging(
"chromadb.segment.impl.vector.local_persistent_hnsw", logging.ERROR
):
results: QueryResult = await collection.query(
query_texts=[params.query],
n_results=params.limit,
where=where,
where_document=params.where_document,
include=params.include,
async with self._alocked():
collection = await self.client.get_or_create_collection(
name=_sanitize_collection_name(params.collection_name),
embedding_function=self.embedding_function,
)
return _process_query_results(
collection=collection,
results=results,
params=params,
)
where = params.where if params.where is not None else params.metadata_filter
with suppress_logging(
"chromadb.segment.impl.vector.local_persistent_hnsw", logging.ERROR
):
results: QueryResult = await collection.query(
query_texts=[params.query],
n_results=params.limit,
where=where,
where_document=params.where_document,
include=params.include,
)
return _process_query_results(
collection=collection,
results=results,
params=params,
)
def delete_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
"""Delete a collection and all its data.
@@ -531,7 +560,10 @@ class ChromaDBClient(BaseClient):
)
collection_name = kwargs["collection_name"]
self.client.delete_collection(name=_sanitize_collection_name(collection_name))
with self._locked():
self.client.delete_collection(
name=_sanitize_collection_name(collection_name)
)
async def adelete_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
"""Delete a collection and all its data asynchronously.
@@ -561,9 +593,10 @@ class ChromaDBClient(BaseClient):
)
collection_name = kwargs["collection_name"]
await self.client.delete_collection(
name=_sanitize_collection_name(collection_name)
)
async with self._alocked():
await self.client.delete_collection(
name=_sanitize_collection_name(collection_name)
)
def reset(self) -> None:
"""Reset the vector database by deleting all collections and data.
@@ -586,7 +619,8 @@ class ChromaDBClient(BaseClient):
"Use areset() for AsyncClientAPI."
)
self.client.reset()
with self._locked():
self.client.reset()
async def areset(self) -> None:
"""Reset the vector database by deleting all collections and data asynchronously.
@@ -612,4 +646,5 @@ class ChromaDBClient(BaseClient):
"Use reset() for ClientAPI."
)
await self.client.reset()
async with self._alocked():
await self.client.reset()

View File

@@ -1,13 +1,12 @@
"""Factory functions for creating ChromaDB clients."""
from hashlib import md5
import os
from chromadb import PersistentClient
import portalocker
from crewai.rag.chromadb.client import ChromaDBClient
from crewai.rag.chromadb.config import ChromaDBConfig
from crewai.utilities.lock_store import lock
def create_client(config: ChromaDBConfig) -> ChromaDBClient:
@@ -25,10 +24,8 @@ def create_client(config: ChromaDBConfig) -> ChromaDBClient:
persist_dir = config.settings.persist_directory
os.makedirs(persist_dir, exist_ok=True)
lock_id = md5(persist_dir.encode(), usedforsecurity=False).hexdigest()
lockfile = os.path.join(persist_dir, f"chromadb-{lock_id}.lock")
with portalocker.Lock(lockfile):
with lock(f"chromadb:{persist_dir}"):
client = PersistentClient(
path=persist_dir,
settings=config.settings,
@@ -42,4 +39,5 @@ def create_client(config: ChromaDBConfig) -> ChromaDBClient:
default_limit=config.limit,
default_score_threshold=config.score_threshold,
default_batch_size=config.batch_size,
lock_name=f"chromadb:{persist_dir}",
)

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import asyncio
from concurrent.futures import Future
import contextvars
from copy import copy as shallow_copy
import datetime
from hashlib import md5
@@ -524,10 +525,11 @@ class Task(BaseModel):
) -> Future[TaskOutput]:
"""Execute the task asynchronously."""
future: Future[TaskOutput] = Future()
ctx = contextvars.copy_context()
threading.Thread(
daemon=True,
target=self._execute_task_async,
args=(agent, context, tools, future),
target=ctx.run,
args=(self._execute_task_async, agent, context, tools, future),
).start()
return future
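The `target=ctx.run` trick in this hunk generalizes to any `threading.Thread`: a plain `target=` starts the thread with an empty `contextvars.Context` and silently drops request-scoped state (OTel spans, trace IDs). A minimal demonstration, with a hypothetical `trace_id` variable:

```python
import contextvars
import threading

trace_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "trace_id", default="missing"
)

def worker(out: list[str]) -> None:
    out.append(trace_id.get())

trace_id.set("abc123")
ctx = contextvars.copy_context()
out: list[str] = []
# target=ctx.run forwards the captured context into the new thread;
# target=worker alone would observe the "missing" default instead.
t = threading.Thread(target=ctx.run, args=(worker, out), daemon=True)
t.start()
t.join()
print(out)  # ['abc123']
```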

View File

@@ -1,29 +1,31 @@
"""Native MCP tool wrapper for CrewAI agents.
This module provides a tool wrapper that reuses existing MCP client sessions
for better performance and connection management.
This module provides a tool wrapper that creates a fresh MCP client for every
invocation, ensuring safe parallel execution even when the same tool is called
concurrently by the executor.
"""
import asyncio
from collections.abc import Callable
import contextvars
from typing import Any
from crewai.tools import BaseTool
class MCPNativeTool(BaseTool):
"""Native MCP tool that reuses client sessions.
"""Native MCP tool that creates a fresh client per invocation.
This tool wrapper is used when agents connect to MCP servers using
structured configurations. It reuses existing client sessions for
better performance and proper connection lifecycle management.
Unlike MCPToolWrapper which connects on-demand, this tool uses
a shared MCP client instance that maintains a persistent connection.
A ``client_factory`` callable produces an independent ``MCPClient`` +
transport for every ``_run_async`` call. This guarantees that parallel
invocations -- whether of the *same* tool or *different* tools from the
same server -- never share mutable connection state (which would cause
anyio cancel-scope errors).
"""
def __init__(
self,
mcp_client: Any,
client_factory: Callable[[], Any],
tool_name: str,
tool_schema: dict[str, Any],
server_name: str,
@@ -32,19 +34,16 @@ class MCPNativeTool(BaseTool):
"""Initialize native MCP tool.
Args:
mcp_client: MCPClient instance with active session.
client_factory: Zero-arg callable that returns a new MCPClient.
tool_name: Name of the tool (may be prefixed).
tool_schema: Schema information for the tool.
server_name: Name of the MCP server for prefixing.
original_tool_name: Original name of the tool on the MCP server.
"""
# Create tool name with server prefix to avoid conflicts
prefixed_name = f"{server_name}_{tool_name}"
# Handle args_schema properly - BaseTool expects a BaseModel subclass
args_schema = tool_schema.get("args_schema")
# Only pass args_schema if it's provided
kwargs = {
"name": prefixed_name,
"description": tool_schema.get(
@@ -57,16 +56,9 @@ class MCPNativeTool(BaseTool):
super().__init__(**kwargs)
# Set instance attributes after super().__init__
self._mcp_client = mcp_client
self._client_factory = client_factory
self._original_tool_name = original_tool_name or tool_name
self._server_name = server_name
# self._logger = logging.getLogger(__name__)
@property
def mcp_client(self) -> Any:
"""Get the MCP client instance."""
return self._mcp_client
@property
def original_tool_name(self) -> str:
@@ -93,9 +85,10 @@ class MCPNativeTool(BaseTool):
import concurrent.futures
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor() as executor:
coro = self._run_async(**kwargs)
future = executor.submit(asyncio.run, coro)
future = executor.submit(ctx.run, asyncio.run, coro)
return future.result()
except RuntimeError:
return asyncio.run(self._run_async(**kwargs))
@@ -108,51 +101,26 @@ class MCPNativeTool(BaseTool):
async def _run_async(self, **kwargs) -> str:
"""Async implementation of tool execution.
A fresh ``MCPClient`` is created for every invocation so that
concurrent calls never share transport or session state.
Args:
**kwargs: Arguments to pass to the MCP tool.
Returns:
Result from the MCP tool execution.
"""
# Note: Since we use asyncio.run() which creates a new event loop each time,
# Always reconnect on-demand because asyncio.run() creates new event loops per call
# All MCP transport context managers (stdio, streamablehttp_client, sse_client)
# use anyio.create_task_group() which can't span different event loops
if self._mcp_client.connected:
await self._mcp_client.disconnect()
await self._mcp_client.connect()
client = self._client_factory()
await client.connect()
try:
result = await self._mcp_client.call_tool(self.original_tool_name, kwargs)
except Exception as e:
error_str = str(e).lower()
if (
"not connected" in error_str
or "connection" in error_str
or "send" in error_str
):
await self._mcp_client.disconnect()
await self._mcp_client.connect()
# Retry the call
result = await self._mcp_client.call_tool(
self.original_tool_name, kwargs
)
else:
raise
result = await client.call_tool(self.original_tool_name, kwargs)
finally:
# Always disconnect after tool call to ensure clean context manager lifecycle
# This prevents "exit cancel scope in different task" errors
# All transport context managers must be exited in the same event loop they were entered
await self._mcp_client.disconnect()
await client.disconnect()
# Extract result content
if isinstance(result, str):
return result
# Handle various result formats
if hasattr(result, "content") and result.content:
if isinstance(result.content, list) and len(result.content) > 0:
content_item = result.content[0]

View File

@@ -121,7 +121,7 @@ def create_memory_tools(memory: Any) -> list[BaseTool]:
description=i18n.tools("recall_memory"),
),
]
if not getattr(memory, "_read_only", False):
if not memory.read_only:
tools.append(
RememberTool(
memory=memory,

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable, Sequence
import concurrent.futures
import contextvars
import inspect
import json
import re
@@ -907,8 +908,9 @@ def summarize_messages(
chunks=chunks, llm=llm, callbacks=callbacks, i18n=i18n
)
if is_inside_event_loop():
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
summarized_contents = pool.submit(asyncio.run, coro).result()
summarized_contents = pool.submit(ctx.run, asyncio.run, coro).result()
else:
summarized_contents = asyncio.run(coro)

View File

@@ -6,6 +6,8 @@ from typing import Any, TypedDict
from typing_extensions import Unpack
from crewai.utilities.lock_store import lock as store_lock
class LogEntry(TypedDict, total=False):
"""TypedDict for log entry kwargs with optional fields for flexibility."""
@@ -90,33 +92,36 @@ class FileHandler:
ValueError: If logging fails.
"""
try:
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
log_entry = {"timestamp": now, **kwargs}
with store_lock(f"file:{os.path.realpath(self._path)}"):
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
log_entry = {"timestamp": now, **kwargs}
if self._path.endswith(".json"):
# Append log in JSON format
try:
# Try reading existing content to avoid overwriting
with open(self._path, encoding="utf-8") as read_file:
existing_data = json.load(read_file)
existing_data.append(log_entry)
except (json.JSONDecodeError, FileNotFoundError):
# If no valid JSON or file doesn't exist, start with an empty list
existing_data = [log_entry]
if self._path.endswith(".json"):
# Append log in JSON format
try:
# Try reading existing content to avoid overwriting
with open(self._path, encoding="utf-8") as read_file:
existing_data = json.load(read_file)
existing_data.append(log_entry)
except (json.JSONDecodeError, FileNotFoundError):
# If no valid JSON or file doesn't exist, start with an empty list
existing_data = [log_entry]
with open(self._path, "w", encoding="utf-8") as write_file:
json.dump(existing_data, write_file, indent=4)
write_file.write("\n")
with open(self._path, "w", encoding="utf-8") as write_file:
json.dump(existing_data, write_file, indent=4)
write_file.write("\n")
else:
# Append log in plain text format
message = (
f"{now}: "
+ ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
+ "\n"
)
with open(self._path, "a", encoding="utf-8") as file:
file.write(message)
else:
# Append log in plain text format
message = (
f"{now}: "
+ ", ".join(
[f'{key}="{value}"' for key, value in kwargs.items()]
)
+ "\n"
)
with open(self._path, "a", encoding="utf-8") as file:
file.write(message)
except Exception as e:
raise ValueError(f"Failed to log message: {e!s}") from e
@@ -153,8 +158,9 @@ class PickleHandler:
Args:
data: The data to be saved to the file.
"""
with open(self.file_path, "wb") as f:
pickle.dump(obj=data, file=f)
with store_lock(f"file:{os.path.realpath(self.file_path)}"):
with open(self.file_path, "wb") as f:
pickle.dump(obj=data, file=f)
def load(self) -> Any:
"""Load the data from the specified file using pickle.
@@ -162,13 +168,17 @@ class PickleHandler:
Returns:
The data loaded from the file.
"""
if not os.path.exists(self.file_path) or os.path.getsize(self.file_path) == 0:
return {} # Return an empty dictionary if the file does not exist or is empty
with store_lock(f"file:{os.path.realpath(self.file_path)}"):
if (
not os.path.exists(self.file_path)
or os.path.getsize(self.file_path) == 0
):
return {}
with open(self.file_path, "rb") as file:
try:
return pickle.load(file) # noqa: S301
except EOFError:
return {} # Return an empty dictionary if the file is empty or corrupted
except Exception:
raise # Raise any other exceptions that occur during loading
with open(self.file_path, "rb") as file:
try:
return pickle.load(file) # noqa: S301
except EOFError:
return {}
except Exception:
raise
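The reason the whole read-modify-write moves inside the lock in `FileHandler.log`: two concurrent writers that each read the old JSON list would otherwise both write it back and drop one another's entry. A self-contained sketch of the guarded append, with a `threading.Lock` standing in for the cross-process `store_lock`:

```python
import json
import os
import tempfile
import threading

_lock = threading.Lock()  # stand-in for the cross-process store_lock

def append_log(path: str, entry: dict) -> None:
    # Read, append, and rewrite under one lock so concurrent writers
    # cannot interleave between the read and the write.
    with _lock:
        try:
            with open(path, encoding="utf-8") as f:
                data = json.load(f)
            data.append(entry)
        except (FileNotFoundError, json.JSONDecodeError):
            data = [entry]
        with open(path, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=4)

path = os.path.join(tempfile.mkdtemp(), "events.json")
append_log(path, {"event": "start"})
append_log(path, {"event": "stop"})
with open(path, encoding="utf-8") as f:
    entries = json.load(f)
print(len(entries))  # 2
```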

View File

@@ -5,6 +5,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Coroutine
import concurrent.futures
import contextvars
import logging
from typing import TYPE_CHECKING, TypeVar
from uuid import UUID
@@ -46,8 +47,9 @@ def _run_sync(coro: Coroutine[None, None, T]) -> T:
"""
try:
asyncio.get_running_loop()
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(asyncio.run, coro)
future = executor.submit(ctx.run, asyncio.run, coro)
return future.result()
except RuntimeError:
return asyncio.run(coro)

View File

@@ -100,7 +100,12 @@ class I18N(BaseModel):
def retrieve(
self,
kind: Literal[
"slices", "errors", "tools", "reasoning", "hierarchical_manager_agent", "memory"
"slices",
"errors",
"tools",
"reasoning",
"hierarchical_manager_agent",
"memory",
],
key: str,
) -> str:

View File

@@ -0,0 +1,61 @@
"""Centralised lock factory.
If ``REDIS_URL`` is set, locks are distributed via ``portalocker.RedisLock``; otherwise the
factory falls back to a file-based ``portalocker.Lock``.
"""
from __future__ import annotations
from collections.abc import Iterator
from contextlib import contextmanager
from functools import lru_cache
from hashlib import md5
import os
import tempfile
from typing import TYPE_CHECKING, Final
import portalocker
if TYPE_CHECKING:
import redis
_REDIS_URL: str | None = os.environ.get("REDIS_URL")
_DEFAULT_TIMEOUT: Final[int] = 120
@lru_cache(maxsize=1)
def _redis_connection() -> redis.Redis:
"""Return a cached Redis connection, creating one on first call."""
from redis import Redis
if _REDIS_URL is None:
raise ValueError("REDIS_URL environment variable is not set")
return Redis.from_url(_REDIS_URL)
@contextmanager
def lock(name: str, *, timeout: float = _DEFAULT_TIMEOUT) -> Iterator[None]:
"""Acquire a named lock, yielding while it is held.
Args:
name: A human-readable lock name (e.g. ``"chromadb_init"``).
Automatically namespaced to avoid collisions.
timeout: Maximum seconds to wait for the lock before raising.
"""
channel = f"crewai:{md5(name.encode(), usedforsecurity=False).hexdigest()}"
if _REDIS_URL:
with portalocker.RedisLock(
channel=channel,
connection=_redis_connection(),
timeout=timeout,
):
yield
else:
lock_dir = tempfile.gettempdir()
lock_path = os.path.join(lock_dir, f"{channel}.lock")
with portalocker.Lock(lock_path, timeout=timeout):
yield
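The factory funnels every lock name through an md5 digest so arbitrary strings (persist directories, file paths) map to safe, deterministic, collision-free lock-file names. A small sketch of just that naming scheme, without the portalocker dependency:

```python
from hashlib import md5
import os
import tempfile

def lock_path(name: str) -> str:
    # Mirrors the factory's namespacing: md5 keeps arbitrary names
    # (paths, URLs) filesystem-safe and fixed-length.
    channel = f"crewai:{md5(name.encode(), usedforsecurity=False).hexdigest()}"
    return os.path.join(tempfile.gettempdir(), f"{channel}.lock")

# Distinct names never share a lock file, and the mapping is deterministic:
a = lock_path("chromadb:/data/a")
b = lock_path("chromadb:/data/b")
print(a != b)  # True
print(lock_path("chromadb:/data/a") == a)  # True
```

Determinism is what lets two independent processes that only agree on the string `"chromadb:<persist_dir>"` end up contending on the same lock file.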

View File

@@ -657,7 +657,10 @@ def _json_schema_to_pydantic_field(
A tuple of (type, Field) for use with create_model.
"""
type_ = _json_schema_to_pydantic_type(
json_schema, root_schema, name_=name.title(), enrich_descriptions=enrich_descriptions
json_schema,
root_schema,
name_=name.title(),
enrich_descriptions=enrich_descriptions,
)
is_required = name in required
@@ -806,7 +809,10 @@ def _json_schema_to_pydantic_type(
if ref:
ref_schema = _resolve_ref(ref, root_schema)
return _json_schema_to_pydantic_type(
ref_schema, root_schema, name_=name_, enrich_descriptions=enrich_descriptions
ref_schema,
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
)
enum_values = json_schema.get("enum")
@@ -835,12 +841,16 @@ def _json_schema_to_pydantic_type(
if all_of_schemas:
if len(all_of_schemas) == 1:
return _json_schema_to_pydantic_type(
all_of_schemas[0], root_schema, name_=name_,
all_of_schemas[0],
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
)
merged = _merge_all_of_schemas(all_of_schemas, root_schema)
return _json_schema_to_pydantic_type(
merged, root_schema, name_=name_,
merged,
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
)
@@ -858,7 +868,9 @@ def _json_schema_to_pydantic_type(
items_schema = json_schema.get("items")
if items_schema:
item_type = _json_schema_to_pydantic_type(
items_schema, root_schema, name_=name_,
items_schema,
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
)
return list[item_type] # type: ignore[valid-type]
@@ -870,7 +882,8 @@ def _json_schema_to_pydantic_type(
if json_schema_.get("title") is None:
json_schema_["title"] = name_ or "DynamicModel"
return create_model_from_schema(
json_schema_, root_schema=root_schema,
json_schema_,
root_schema=root_schema,
enrich_descriptions=enrich_descriptions,
)
return dict

View File

@@ -2,6 +2,7 @@
import asyncio
from collections.abc import AsyncIterator, Callable, Iterator
import contextvars
import queue
import threading
from typing import Any, NamedTuple
@@ -240,7 +241,8 @@ def create_chunk_generator(
Yields:
StreamChunk objects as they arrive.
"""
thread = threading.Thread(target=run_func, daemon=True)
ctx = contextvars.copy_context()
thread = threading.Thread(target=ctx.run, args=(run_func,), daemon=True)
thread.start()
try:

View File

@@ -2353,3 +2353,68 @@ def test_agent_without_apps_no_platform_tools():
tools = crew._prepare_tools(agent, task, [])
assert tools == []
def test_agent_mcps_accepts_slug_with_specific_tool():
"""Agent(mcps=["notion#get_page"]) must pass validation (_SLUG_RE)."""
agent = Agent(
role="MCP Agent",
goal="Test MCP validation",
backstory="Test agent",
mcps=["notion#get_page"],
)
assert agent.mcps == ["notion#get_page"]
def test_agent_mcps_accepts_slug_with_hyphenated_tool():
agent = Agent(
role="MCP Agent",
goal="Test MCP validation",
backstory="Test agent",
mcps=["notion#get-page"],
)
assert agent.mcps == ["notion#get-page"]
def test_agent_mcps_accepts_multiple_hash_refs():
agent = Agent(
role="MCP Agent",
goal="Test MCP validation",
backstory="Test agent",
mcps=["notion#get_page", "notion#search", "github#list_repos"],
)
assert len(agent.mcps) == 3
def test_agent_mcps_accepts_mixed_ref_types():
agent = Agent(
role="MCP Agent",
goal="Test MCP validation",
backstory="Test agent",
mcps=[
"notion#get_page",
"notion",
"https://mcp.example.com/api",
],
)
assert len(agent.mcps) == 3
def test_agent_mcps_rejects_hash_without_slug():
with pytest.raises(ValueError, match="Invalid MCP reference"):
Agent(
role="MCP Agent",
goal="Test MCP validation",
backstory="Test agent",
mcps=["#get_page"],
)
def test_agent_mcps_accepts_legacy_prefix_with_tool():
agent = Agent(
role="MCP Agent",
goal="Test MCP validation",
backstory="Test agent",
mcps=["crewai-amp:notion#get_page"],
)
assert agent.mcps == ["crewai-amp:notion#get_page"]

View File

@@ -1136,7 +1136,7 @@ def test_lite_agent_memory_instance_recall_and_save_called():
successful_requests=1,
)
mock_memory = Mock()
mock_memory._read_only = False
mock_memory.read_only = False
mock_memory.recall.return_value = []
mock_memory.extract_memories.return_value = ["Fact one.", "Fact two."]

View File

@@ -0,0 +1,137 @@
interactions:
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":"What is the weather
in Tokyo?"}],"model":"claude-sonnet-4-5","stream":false,"tools":[{"type":"tool_search_tool_bm25_20251119","name":"tool_search_tool_bm25"},{"name":"get_weather","description":"Get
current weather conditions for a specified location","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_weather"}},"required":["input"]},"defer_loading":true},{"name":"search_files","description":"Search
through files in the workspace by name or content","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for search_files"}},"required":["input"]},"defer_loading":true},{"name":"read_database","description":"Read
records from a database table with optional filtering","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for read_database"}},"required":["input"]},"defer_loading":true},{"name":"write_database","description":"Write
or update records in a database table","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for write_database"}},"required":["input"]},"defer_loading":true},{"name":"send_email","description":"Send
an email message to one or more recipients","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for send_email"}},"required":["input"]},"defer_loading":true},{"name":"read_email","description":"Read
emails from inbox with filtering options","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for read_email"}},"required":["input"]},"defer_loading":true},{"name":"create_ticket","description":"Create
a new support ticket in the ticketing system","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for create_ticket"}},"required":["input"]},"defer_loading":true},{"name":"update_ticket","description":"Update
an existing support ticket status or description","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for update_ticket"}},"required":["input"]},"defer_loading":true},{"name":"list_users","description":"List
all users in the system with optional filters","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for list_users"}},"required":["input"]},"defer_loading":true},{"name":"get_user_profile","description":"Get
detailed profile information for a specific user","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_user_profile"}},"required":["input"]},"defer_loading":true},{"name":"deploy_service","description":"Deploy
a service to the specified environment","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for deploy_service"}},"required":["input"]},"defer_loading":true},{"name":"rollback_service","description":"Rollback
a service deployment to a previous version","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for rollback_service"}},"required":["input"]},"defer_loading":true},{"name":"get_service_logs","description":"Get
service logs filtered by time range and severity","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_service_logs"}},"required":["input"]},"defer_loading":true},{"name":"run_sql_query","description":"Run
a read-only SQL query against the analytics database","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for run_sql_query"}},"required":["input"]},"defer_loading":true},{"name":"create_dashboard","description":"Create
a new monitoring dashboard with widgets","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for create_dashboard"}},"required":["input"]},"defer_loading":true}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-length:
- '3952'
content-type:
- application/json
host:
- api.anthropic.com
x-api-key:
- X-API-KEY-XXX
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 0.73.0
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_01DAGCoL6C12u6yAgR1UqNAs","type":"message","role":"assistant","content":[{"type":"text","text":"I''ll
search for a weather-related tool to help you get the weather information
for Tokyo."},{"type":"server_tool_use","id":"srvtoolu_0176qgHeeBpSygYAnUzKHCfh","name":"tool_search_tool_bm25","input":{"query":"weather
Tokyo current conditions forecast"},"caller":{"type":"direct"}},{"type":"tool_search_tool_result","tool_use_id":"srvtoolu_0176qgHeeBpSygYAnUzKHCfh","content":{"type":"tool_search_tool_search_result","tool_references":[{"type":"tool_reference","tool_name":"get_weather"}]}},{"type":"text","text":"Great!
I found a weather tool. Let me get the current weather conditions for Tokyo."},{"type":"tool_use","id":"toolu_01R3FavQLuTrwNvEk9gMaViK","name":"get_weather","input":{"input":"Tokyo"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1566,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":155,"service_tier":"standard","inference_geo":"not_available","server_tool_use":{"web_search_requests":0,"web_fetch_requests":0}}}'
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Security-Policy:
- CSP-FILTERED
Content-Type:
- application/json
Date:
- Sun, 08 Mar 2026 21:04:12 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Robots-Tag:
- none
anthropic-organization-id:
- ANTHROPIC-ORGANIZATION-ID-XXX
anthropic-ratelimit-input-tokens-limit:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-input-tokens-remaining:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-input-tokens-reset:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
anthropic-ratelimit-output-tokens-limit:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-output-tokens-remaining:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-output-tokens-reset:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
anthropic-ratelimit-requests-limit:
- '20000'
anthropic-ratelimit-requests-remaining:
- '19999'
anthropic-ratelimit-requests-reset:
- '2026-03-08T21:04:07Z'
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
- ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
anthropic-ratelimit-tokens-reset:
- ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
cf-cache-status:
- DYNAMIC
request-id:
- REQUEST-ID-XXX
strict-transport-security:
- STS-XXX
vary:
- Accept-Encoding
x-envoy-upstream-service-time:
- '4330'
status:
code: 200
message: OK
version: 1

@@ -0,0 +1,112 @@
interactions:
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":"What is the weather
in Tokyo?"}],"model":"claude-sonnet-4-5","stream":false,"tools":[{"name":"get_weather","description":"Get
current weather conditions for a specified location","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_weather"}},"required":["input"]}},{"name":"search_files","description":"Search
through files in the workspace by name or content","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for search_files"}},"required":["input"]}},{"name":"read_database","description":"Read
records from a database table with optional filtering","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for read_database"}},"required":["input"]}},{"name":"write_database","description":"Write
or update records in a database table","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for write_database"}},"required":["input"]}},{"name":"send_email","description":"Send
an email message to one or more recipients","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for send_email"}},"required":["input"]}},{"name":"read_email","description":"Read
emails from inbox with filtering options","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for read_email"}},"required":["input"]}},{"name":"create_ticket","description":"Create
a new support ticket in the ticketing system","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for create_ticket"}},"required":["input"]}},{"name":"update_ticket","description":"Update
an existing support ticket status or description","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for update_ticket"}},"required":["input"]}},{"name":"list_users","description":"List
all users in the system with optional filters","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for list_users"}},"required":["input"]}},{"name":"get_user_profile","description":"Get
detailed profile information for a specific user","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_user_profile"}},"required":["input"]}},{"name":"deploy_service","description":"Deploy
a service to the specified environment","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for deploy_service"}},"required":["input"]}},{"name":"rollback_service","description":"Rollback
a service deployment to a previous version","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for rollback_service"}},"required":["input"]}},{"name":"get_service_logs","description":"Get
service logs filtered by time range and severity","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_service_logs"}},"required":["input"]}},{"name":"run_sql_query","description":"Run
a read-only SQL query against the analytics database","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for run_sql_query"}},"required":["input"]}},{"name":"create_dashboard","description":"Create
a new monitoring dashboard with widgets","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for create_dashboard"}},"required":["input"]}}]}'
headers:
accept:
- application/json
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-type:
- application/json
host:
- api.anthropic.com
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_01NoSearch001","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01NoSearch001","name":"get_weather","input":{"input":"Tokyo"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1943,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"output_tokens":54,"service_tier":"standard"}}'
headers:
Content-Type:
- application/json
status:
code: 200
message: OK
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":"What is the weather
in Tokyo?"}],"model":"claude-sonnet-4-5","stream":false,"tools":[{"type":"tool_search_tool_bm25_20251119","name":"tool_search_tool_bm25"},{"name":"get_weather","description":"Get
current weather conditions for a specified location","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_weather"}},"required":["input"]},"defer_loading":true},{"name":"search_files","description":"Search
through files in the workspace by name or content","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for search_files"}},"required":["input"]},"defer_loading":true},{"name":"read_database","description":"Read
records from a database table with optional filtering","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for read_database"}},"required":["input"]},"defer_loading":true},{"name":"write_database","description":"Write
or update records in a database table","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for write_database"}},"required":["input"]},"defer_loading":true},{"name":"send_email","description":"Send
an email message to one or more recipients","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for send_email"}},"required":["input"]},"defer_loading":true},{"name":"read_email","description":"Read
emails from inbox with filtering options","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for read_email"}},"required":["input"]},"defer_loading":true},{"name":"create_ticket","description":"Create
a new support ticket in the ticketing system","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for create_ticket"}},"required":["input"]},"defer_loading":true},{"name":"update_ticket","description":"Update
an existing support ticket status or description","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for update_ticket"}},"required":["input"]},"defer_loading":true},{"name":"list_users","description":"List
all users in the system with optional filters","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for list_users"}},"required":["input"]},"defer_loading":true},{"name":"get_user_profile","description":"Get
detailed profile information for a specific user","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_user_profile"}},"required":["input"]},"defer_loading":true},{"name":"deploy_service","description":"Deploy
a service to the specified environment","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for deploy_service"}},"required":["input"]},"defer_loading":true},{"name":"rollback_service","description":"Rollback
a service deployment to a previous version","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for rollback_service"}},"required":["input"]},"defer_loading":true},{"name":"get_service_logs","description":"Get
service logs filtered by time range and severity","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for get_service_logs"}},"required":["input"]},"defer_loading":true},{"name":"run_sql_query","description":"Run
a read-only SQL query against the analytics database","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for run_sql_query"}},"required":["input"]},"defer_loading":true},{"name":"create_dashboard","description":"Create
a new monitoring dashboard with widgets","input_schema":{"type":"object","properties":{"input":{"type":"string","description":"Input
for create_dashboard"}},"required":["input"]},"defer_loading":true}]}'
headers:
accept:
- application/json
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-type:
- application/json
host:
- api.anthropic.com
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_01WithSearch001","type":"message","role":"assistant","content":[{"type":"text","text":"I''ll search for a weather tool."},{"type":"server_tool_use","id":"srvtoolu_01Search001","name":"tool_search_tool_bm25","input":{"query":"weather conditions"},"caller":{"type":"direct"}},{"type":"tool_search_tool_result","tool_use_id":"srvtoolu_01Search001","content":{"type":"tool_search_tool_search_result","tool_references":[{"type":"tool_reference","tool_name":"get_weather"}]}},{"type":"text","text":"Found it. Let me get the weather for Tokyo."},{"type":"tool_use","id":"toolu_01WithSearch001","name":"get_weather","input":{"input":"Tokyo"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1566,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"output_tokens":155,"service_tier":"standard"}}'
headers:
Content-Type:
- application/json
status:
code: 200
message: OK
version: 1

@@ -1,828 +1,109 @@
interactions:
- request:
body: !!binary |
CvP7AQokCiIKDHNlcnZpY2UubmFtZRISChBjcmV3QUktdGVsZW1ldHJ5Esn7AQoSChBjcmV3YWku
dGVsZW1ldHJ5Ep4HChBGdupVRwCZRqXxk3FnMwCbEghSR8rOc1qkfCoMQ3JldyBDcmVhdGVkMAE5
8GzO7sagGhhBOAHe7sagGhhKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC45NS4wShoKDnB5dGhvbl92
ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBjOTdiNWZlYjVkMWI2NmJiNTkwMDZhYWEw
MWEyOWNkNkoxCgdjcmV3X2lkEiYKJDk1NGM2OTJmLTc5Y2ItNGZlZi05NjNkLWUyMGRkMjFhMjAw
MUocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jl
d19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKzAIKC2Ny
ZXdfYWdlbnRzErwCCrkCW3sia2V5IjogIjA3ZDk5YjYzMDQxMWQzNWZkOTA0N2E1MzJkNTNkZGE3
IiwgImlkIjogImQ5ZjkyYTBlLTVlZTYtNGY0NS04NzZiLWIwOWMyZTcwZWZkZiIsICJyb2xlIjog
IlJlc2VhcmNoZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBt
IjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRl
bGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNl
LCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr/AQoKY3Jld190YXNr
cxLwAQrtAVt7ImtleSI6ICI2Mzk5NjUxN2YzZjNmMWM5NGQ2YmI2MTdhYTBiMWM0ZiIsICJpZCI6
ICIzZDc0NDlkYi0wMzU3LTQ3NTMtOGNmNS03NGY2ZmMzMGEwYTkiLCAiYXN5bmNfZXhlY3V0aW9u
PyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2VhcmNo
ZXIiLCAiYWdlbnRfa2V5IjogIjA3ZDk5YjYzMDQxMWQzNWZkOTA0N2E1MzJkNTNkZGE3IiwgInRv
b2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEP1sZDWz95ImNTj+qx9ckqUSCAmsHrq64Y/u
KgxUYXNrIENyZWF0ZWQwATnQXu3uxqAaGEFgxO3uxqAaGEouCghjcmV3X2tleRIiCiBjOTdiNWZl
YjVkMWI2NmJiNTkwMDZhYWEwMWEyOWNkNkoxCgdjcmV3X2lkEiYKJDk1NGM2OTJmLTc5Y2ItNGZl
Zi05NjNkLWUyMGRkMjFhMjAwMUouCgh0YXNrX2tleRIiCiA2Mzk5NjUxN2YzZjNmMWM5NGQ2YmI2
MTdhYTBiMWM0ZkoxCgd0YXNrX2lkEiYKJDNkNzQ0OWRiLTAzNTctNDc1My04Y2Y1LTc0ZjZmYzMw
YTBhOXoCGAGFAQABAAASngcKEBNuju55KsgJoN1+Y7gEx24SCCoSNPvs01ScKgxDcmV3IENyZWF0
ZWQwATlIpr3wxqAaGEHwVMbwxqAaGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjk1LjBKGgoOcHl0
aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIDhjMjc1MmY0OWU1YjlkMmI2OGNi
MzVjYWM4ZmNjODZkSjEKB2NyZXdfaWQSJgokMTY2ODBmZjMtMjM1Yy00MzZlLTk2MWMtZGNhYWNh
YTFiMjA4ShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoa
ChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUrM
AgoLY3Jld19hZ2VudHMSvAIKuQJbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1
NjNkNzUiLCAiaWQiOiAiMzY5NmM3ZDktNjcyYS00NmIzLWJlMGMtMzNmNjI2YjEwMGU3IiwgInJv
bGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1h
eF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8i
LCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/Ijog
ZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSv8BCgpjcmV3
X3Rhc2tzEvABCu0BW3sia2V5IjogIjBkNjg1YTIxOTk0ZDk0OTA5N2JjNWE1NmQ3MzdlNmQxIiwg
ImlkIjogIjIzYWM1MzA1LTg5YTUtNDM1NC1hODUyLTNmNGNlNDk4NjY4NCIsICJhc3luY19leGVj
dXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiUmVz
ZWFyY2hlciIsICJhZ2VudF9rZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1NjNkNzUi
LCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQt0jLLt+z7mZzw/JaxaWi4xII/o7T
QUAqVu8qDFRhc2sgQ3JlYXRlZDABOYg71PDGoBoYQZCN1PDGoBoYSi4KCGNyZXdfa2V5EiIKIDhj
Mjc1MmY0OWU1YjlkMmI2OGNiMzVjYWM4ZmNjODZkSjEKB2NyZXdfaWQSJgokMTY2ODBmZjMtMjM1
Yy00MzZlLTk2MWMtZGNhYWNhYTFiMjA4Si4KCHRhc2tfa2V5EiIKIDBkNjg1YTIxOTk0ZDk0OTA5
N2JjNWE1NmQ3MzdlNmQxSjEKB3Rhc2tfaWQSJgokMjNhYzUzMDUtODlhNS00MzU0LWE4NTItM2Y0
Y2U0OTg2Njg0egIYAYUBAAEAABKeBwoQAddeR+5jHI68iED9tmGToRIIqsyiA/tKs2QqDENyZXcg
Q3JlYXRlZDABOcC+UPrGoBoYQchXWvrGoBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEoa
Cg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogYjY3MzY4NmZjODIyYzIw
M2M3ZTg3OWM2NzU0MjQ2OTlKMQoHY3Jld19pZBImCiRmYjJjNzYwZi00ZTdhLTQ0ZDctOWI4My1i
NDA3MjY5YjVjZDRKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkS
AhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMS
AhgBSswCCgtjcmV3X2FnZW50cxK8Agq5Alt7ImtleSI6ICJiNTljZjc3YjZlNzY1ODQ4NzBlYjFj
Mzg4MjNkN2UyOCIsICJpZCI6ICJhMTA3Y2M4My1jZjM0LTRhMDctYWFmNi1lNzA4MTU0MmNiOTUi
LCAicm9sZSI6ICJSZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIw
LCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdw
dC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlv
bj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K/wEK
CmNyZXdfdGFza3MS8AEK7QFbeyJrZXkiOiAiYTVlNWM1OGNlYTFiOWQwMDMzMmU2ODQ0MWQzMjdi
ZGYiLCAiaWQiOiAiNTYzNjc0NmQtNmQ4YS00YzBjLTgyNmEtNDA2YzRlMzc0MTg5IiwgImFzeW5j
X2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6
ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICJiNTljZjc3YjZlNzY1ODQ4NzBlYjFjMzg4MjNk
N2UyOCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChDxrID3kZmdkWC//z9+mfuy
EgjUxsn2MojVPioMVGFzayBDcmVhdGVkMAE5IIRs+sagGhhB4OFs+sagGhhKLgoIY3Jld19rZXkS
IgogYjY3MzY4NmZjODIyYzIwM2M3ZTg3OWM2NzU0MjQ2OTlKMQoHY3Jld19pZBImCiRmYjJjNzYw
Zi00ZTdhLTQ0ZDctOWI4My1iNDA3MjY5YjVjZDRKLgoIdGFza19rZXkSIgogYTVlNWM1OGNlYTFi
OWQwMDMzMmU2ODQ0MWQzMjdiZGZKMQoHdGFza19pZBImCiQ1NjM2NzQ2ZC02ZDhhLTRjMGMtODI2
YS00MDZjNGUzNzQxODl6AhgBhQEAAQAAErgJChCvyf8lGSXM52eSUv8BPeh1EghI6rK/hduMWSoM
Q3JldyBDcmVhdGVkMAE5mJtE/MagGhhB+NhM/MagGhhKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC45
NS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBlM2ZkYTBmMzEx
MGZlODBiMTg5NDdjMDE0NzE0MzBhNEoxCgdjcmV3X2lkEiYKJDQ5ZWRjNGIwLWZlNzctNDc0Yy1i
OGE0LTljMDlkNDUzMWIxY0oeCgxjcmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hpY2FsShEKC2NyZXdf
bWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2Zf
YWdlbnRzEgIYAkqIBQoLY3Jld19hZ2VudHMS+AQK9QRbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgx
NTA2ZTQxZmQ5YzQ1NjNkNzUiLCAiaWQiOiAiMzY5NmM3ZDktNjcyYS00NmIzLWJlMGMtMzNmNjI2
YjEwMGU3IiwgInJvbGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0
ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxs
bSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9l
eGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBb
XX0sIHsia2V5IjogIjlhNTAxNWVmNDg5NWRjNjI3OGQ1NDgxOGJhNDQ2YWY3IiwgImlkIjogImE5
OTRlNjZlLWE5OTEtNDRhNi04OTIxLWE4OGQ0M2QyNjZiYyIsICJyb2xlIjogIlNlbmlvciBXcml0
ZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25f
ZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3Jl
dHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUrbAQoKY3Jld190YXNrcxLMAQrJAVt7
ImtleSI6ICI1ZmE2NWMwNmE5ZTMxZjJjNjk1NDMyNjY4YWNkNjJkZCIsICJpZCI6ICJiOTY5MGI1
OC1hYmNhLTRjYzktOGZlYS01ZTZmNDZjNmQ5ZDUiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNl
LCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIk5vbmUiLCAiYWdlbnRfa2V5
IjogbnVsbCwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASuAkKECCrkzgLIi2bqMUA6kHF
B1ESCFsUbfXKnCROKgxDcmV3IENyZWF0ZWQwATnAlbP8xqAaGEGwPrv8xqAaGEoaCg5jcmV3YWlf
dmVyc2lvbhIICgYwLjk1LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5
EiIKIGUzZmRhMGYzMTEwZmU4MGIxODk0N2MwMTQ3MTQzMGE0SjEKB2NyZXdfaWQSJgokNDJlMGQ1
MmYtYWVjYS00MTMzLTlmMDItZDZiOGU0OTRkYjYxSh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVyYXJj
aGljYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVj
cmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSogFCgtjcmV3X2FnZW50cxL4BAr1BFt7ImtleSI6ICI4
YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIsICJpZCI6ICIzNjk2YzdkOS02NzJhLTQ2
YjMtYmUwYy0zM2Y2MjZiMTAwZTciLCAicm9sZSI6ICJSZXNlYXJjaGVyIiwgInZlcmJvc2U/Ijog
ZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5n
X2xsbSI6ICIiLCAibGxtIjogImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2Us
ICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0
b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZh
ZjciLCAiaWQiOiAiYTk5NGU2NmUtYTk5MS00NGE2LTg5MjEtYTg4ZDQzZDI2NmJjIiwgInJvbGUi
OiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1h
eF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8i
LCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/Ijog
ZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dStsBCgpjcmV3
X3Rhc2tzEswBCskBW3sia2V5IjogIjVmYTY1YzA2YTllMzFmMmM2OTU0MzI2NjhhY2Q2MmRkIiwg
ImlkIjogImM3MGNmMzliLTE2YzktNDNiOC1hN2VhLTY5MTgzZmZmZDg5ZiIsICJhc3luY19leGVj
dXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiTm9u
ZSIsICJhZ2VudF9rZXkiOiBudWxsLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABLKCwoQ
Nu3FGKmDx1jRbaca6HH3TRIIb9vd1api6NYqDENyZXcgQ3JlYXRlZDABOaiMR/3GoBoYQRjxT/3G
oBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEy
LjdKLgoIY3Jld19rZXkSIgogZDM4NDZjOWQyNzZlOGU2ZTQzZTMxZjYxNzYzNTdiNGZKMQoHY3Jl
d19pZBImCiQ2MDE5NzNhNy04NDlmLTQ4ZWQtOGM4MS04YzY5N2QyY2ViNGRKHAoMY3Jld19wcm9j
ZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rh
c2tzEgIYAkobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSogFCgtjcmV3X2FnZW50cxL4BAr1
BFt7ImtleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIsICJpZCI6ICIzNjk2
YzdkOS02NzJhLTQ2YjMtYmUwYy0zM2Y2MjZiMTAwZTciLCAicm9sZSI6ICJSZXNlYXJjaGVyIiwg
InZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5j
dGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4
ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiYTk5NGU2NmUtYTk5MS00NGE2LTg5MjEtYTg4ZDQzZDI2
NmJjIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0
ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxs
bSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9l
eGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBb
XX1dSu8DCgpjcmV3X3Rhc2tzEuADCt0DW3sia2V5IjogImU5ZTZiNzJhYWMzMjY0NTlkZDcwNjhm
MGIxNzE3YzFjIiwgImlkIjogImYzNGM5ZGZjLWU4NzYtNDkzNS04NTNmLTMyM2EwYzhhZGViMiIs
ICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50
X3JvbGUiOiAiUmVzZWFyY2hlciIsICJhZ2VudF9rZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQx
ZmQ5YzQ1NjNkNzUiLCAidG9vbHNfbmFtZXMiOiBbXX0sIHsia2V5IjogImVlZWU3ZTczZDVkZjY2
ZDQ4ZDJkODA3YmFmZjg3NGYzIiwgImlkIjogImNjOGMxZGQ0LTUxNzktNDdlMC1iMTk0LTU3NmNh
MjFkZjllOCIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxz
ZSwgImFnZW50X3JvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJhZ2VudF9rZXkiOiAiOWE1MDE1ZWY0
ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKm
BwoQYZWMzWnoYys7S/fnI87iGRIIla+Vilm2/HgqDENyZXcgQ3JlYXRlZDABOaDT6f3GoBoYQZB8
8f3GoBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEoaCg5weXRob25fdmVyc2lvbhIICgYz
LjEyLjdKLgoIY3Jld19rZXkSIgogNjczOGFkNWI4Y2IzZTZmMWMxYzkzNTBiOTZjMmU2NzhKMQoH
Y3Jld19pZBImCiRjYjJmYWQ2NS1jZmVlLTQ5MjMtYmE4ZS1jYzllYTM4YmRlZDVKHAoMY3Jld19w
cm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29m
X3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBStACCgtjcmV3X2FnZW50cxLA
Agq9Alt7ImtleSI6ICI1MTJhNmRjMzc5ZjY2YjIxZWVhYjI0ZTYzNDgzNmY3MiIsICJpZCI6ICJl
ZmM1ZmYyNC1lNGRlLTQwMDctOTE0Ni03MzQ2ODkyMzMxNmEiLCAicm9sZSI6ICJDb250ZW50IFdy
aXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxs
LCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlv
bl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhf
cmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSoMCCgpjcmV3X3Rhc2tzEvQBCvEB
W3sia2V5IjogIjM0NzcwNzZiZTNhZjcxMzA0NjJlZGFhMmViOGEwNDhlIiwgImlkIjogImI1YTU1
ZDIxLWM0YWQtNGY3MS1hNzlmLTc5MmI3MzcwZDM0MSIsICJhc3luY19leGVjdXRpb24/IjogZmFs
c2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiQ29udGVudCBXcml0ZXIi
LCAiYWdlbnRfa2V5IjogIjUxMmE2ZGMzNzlmNjZiMjFlZWFiMjRlNjM0ODM2ZjcyIiwgInRvb2xz
X25hbWVzIjogW119XXoCGAGFAQABAAASjg8KEPffWTWZFpn8wcrgD+eyhrMSCHU6W3vsK6dIKgxD
cmV3IENyZWF0ZWQwATmAXFj+xqAaGEHQ72D+xqAaGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjk1
LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIDRhY2I5MzNmZThk
ZTRjZDU3NzJlZGIwZTgyMDZlMjhmSjEKB2NyZXdfaWQSJgokZjQ4NDAzYjUtZjRjMi00NjA4LWE1
YzYtMjc4NGU5ZTY0MDNlShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVt
b3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGARKGwoVY3Jld19udW1iZXJfb2ZfYWdl
bnRzEgIYAkqBBQoLY3Jld19hZ2VudHMS8QQK7gRbeyJrZXkiOiAiMmJlZmZkY2FjNjVjY2VhYTY1
Mzk2ZjJjN2Y1NjhlNmEiLCAiaWQiOiAiNzlkY2E1NjgtOTUxNy00ZWM0LThkODctMDMxZWFlM2Ji
OTk1IiwgInJvbGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIi
OiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6
ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVj
dXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX0s
IHsia2V5IjogIjFjZGNhOGRlMDdiMjhkMDc0ZDc4NjQ3NDhiZGIxNzY3IiwgImlkIjogIjgzZWI3
MGNkLWIzODEtNDYwMy05Nzg5LTkyN2IxYmNlYTU2ZCIsICJyb2xlIjogIldyaXRlciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6
IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQi
OiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSroHCgpjcmV3X3Rhc2tzEqsHCqgHW3sia2V5IjogImVi
YWVhYTk2ZThjODU1N2YwNDYxNzM2ZDRiZWY5MzE3IiwgImlkIjogImRkMGVkMzgxLTZhNzUtNDVh
My1iZGUyLTRlNzdiOTU0YmI2OCIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9p
bnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiUmVzZWFyY2hlciIsICJhZ2VudF9rZXkiOiAi
MmJlZmZkY2FjNjVjY2VhYTY1Mzk2ZjJjN2Y1NjhlNmEiLCAidG9vbHNfbmFtZXMiOiBbXX0sIHsi
a2V5IjogIjYwZjM1MjI4ZWMxY2I3M2ZlZDM1ZDk5MTBhNmQ3OWYzIiwgImlkIjogImE0OGZmMzgx
LTI2ZDEtNDVjNy04MGVkLWJlODM0NTkxYWIzYyIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2Us
ICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiV3JpdGVyIiwgImFnZW50X2tl
eSI6ICIxY2RjYThkZTA3YjI4ZDA3NGQ3ODY0NzQ4YmRiMTc2NyIsICJ0b29sc19uYW1lcyI6IFtd
fSwgeyJrZXkiOiAiYmUyYTcxNGFjMzVlM2E2YjBhYmJhMjRjZWMyZTA0Y2MiLCAiaWQiOiAiMDkx
YWE2YjMtZGYyMC00YTMzLTk1MzUtOGJiNDllMzlhMGQyIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBm
YWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJXcml0ZXIiLCAiYWdl
bnRfa2V5IjogIjFjZGNhOGRlMDdiMjhkMDc0ZDc4NjQ3NDhiZGIxNzY3IiwgInRvb2xzX25hbWVz
IjogW119LCB7ImtleSI6ICI0YTU2YTYyNzk4ODZhNmZlNThkNjc1NzgxZDFmNWFkOSIsICJpZCI6
ICIxMDFlOGNhNC04MTk1LTQyNDYtYjg2Ny05ZjYxYzM1NWJjOGIiLCAiYXN5bmNfZXhlY3V0aW9u
PyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIldyaXRlciIs
ICJhZ2VudF9rZXkiOiAiMWNkY2E4ZGUwN2IyOGQwNzRkNzg2NDc0OGJkYjE3NjciLCAidG9vbHNf
bmFtZXMiOiBbXX1degIYAYUBAAEAABKLCQoQgHmumMETjYmEZpveDu3dwBIIByVlUIAMTMEqDENy
ZXcgQ3JlYXRlZDABOfgtEgDHoBoYQTC/GwDHoBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUu
MEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogODBjNzk4ZjYyMjhm
MzJhNzQ4M2Y3MmFmZTM2NmVkY2FKMQoHY3Jld19pZBImCiQ0YzM3YTFhNS1lMzA5LTQ2N2EtYWJk
ZC0zZDY1YThlNjY5ZjBKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1v
cnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAkobChVjcmV3X251bWJlcl9vZl9hZ2Vu
dHMSAhgBSswCCgtjcmV3X2FnZW50cxK8Agq5Alt7ImtleSI6ICIzN2Q3MTNkM2RjZmFlMWRlNTNi
NGUyZGFjNzU1M2ZkNyIsICJpZCI6ICJmNGY2NmQxMi01M2Q0LTQ2NTQtODRiZC1lMjJmYzk2ZDU0
NTEiLCAicm9sZSI6ICJ0ZXN0X2FnZW50IiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6
IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjog
ImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
7AMKCmNyZXdfdGFza3MS3QMK2gNbeyJrZXkiOiAiY2M0YTQyYzE4NmVlMWEyZTY2YjAyOGVjNWI3
MmJkNGUiLCAiaWQiOiAiMmUyMmZiMDMtMzIxMS00NTgxLTkzN2EtZjY1Zjk5MjY3ZmIyIiwgImFz
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
ZSI6ICJ0ZXN0X2FnZW50IiwgImFnZW50X2tleSI6ICIzN2Q3MTNkM2RjZmFlMWRlNTNiNGUyZGFj
NzU1M2ZkNyIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiNzRlNmIyNDQ5YzQ1NzRhY2Jj
MmJmNDk3MjczYTVjYzEiLCAiaWQiOiAiODIzYmRlYzUtMTRkMS00ZDdjLWJkYWMtODkzNTY1YmFi
YmM1IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAi
YWdlbnRfcm9sZSI6ICJ0ZXN0X2FnZW50IiwgImFnZW50X2tleSI6ICIzN2Q3MTNkM2RjZmFlMWRl
NTNiNGUyZGFjNzU1M2ZkNyIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChDXwUEa
LzdRrsWweePQjNzuEgjgSUXh0IH0OyoMVGFzayBDcmVhdGVkMAE5aKkrAMegGhhBaCYsAMegGhhK
LgoIY3Jld19rZXkSIgogODBjNzk4ZjYyMjhmMzJhNzQ4M2Y3MmFmZTM2NmVkY2FKMQoHY3Jld19p
ZBImCiQ0YzM3YTFhNS1lMzA5LTQ2N2EtYWJkZC0zZDY1YThlNjY5ZjBKLgoIdGFza19rZXkSIgog
Y2M0YTQyYzE4NmVlMWEyZTY2YjAyOGVjNWI3MmJkNGVKMQoHdGFza19pZBImCiQyZTIyZmIwMy0z
MjExLTQ1ODEtOTM3YS1mNjVmOTkyNjdmYjJ6AhgBhQEAAQAAEo4CChDxJ8ZFykKBgfaipCQ/ggPb
EgguzV65sDQE1yoMVGFzayBDcmVhdGVkMAE5OBNvAMegGhhBgIRvAMegGhhKLgoIY3Jld19rZXkS
IgogODBjNzk4ZjYyMjhmMzJhNzQ4M2Y3MmFmZTM2NmVkY2FKMQoHY3Jld19pZBImCiQ0YzM3YTFh
NS1lMzA5LTQ2N2EtYWJkZC0zZDY1YThlNjY5ZjBKLgoIdGFza19rZXkSIgogNzRlNmIyNDQ5YzQ1
NzRhY2JjMmJmNDk3MjczYTVjYzFKMQoHdGFza19pZBImCiQ4MjNiZGVjNS0xNGQxLTRkN2MtYmRh
Yy04OTM1NjViYWJiYzV6AhgBhQEAAQAAEo4CChC0QeqqmE8Dp/Ee9DEhuLMuEggOnt12q4mouioM
VGFzayBDcmVhdGVkMAE5eBbHAMegGhhB2IPHAMegGhhKLgoIY3Jld19rZXkSIgogODBjNzk4ZjYy
MjhmMzJhNzQ4M2Y3MmFmZTM2NmVkY2FKMQoHY3Jld19pZBImCiQ0YzM3YTFhNS1lMzA5LTQ2N2Et
YWJkZC0zZDY1YThlNjY5ZjBKLgoIdGFza19rZXkSIgogNzRlNmIyNDQ5YzQ1NzRhY2JjMmJmNDk3
MjczYTVjYzFKMQoHdGFza19pZBImCiQ4MjNiZGVjNS0xNGQxLTRkN2MtYmRhYy04OTM1NjViYWJi
YzV6AhgBhQEAAQAAEsoLChAQHimti07LsJEmR4M5P2iQEgjeCnwCLR02XyoMQ3JldyBDcmVhdGVk
MAE5IOlAAsegGhhBAGVJAsegGhhKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC45NS4wShoKDnB5dGhv
bl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBhYzdlNzQ1OTA3MmM3ZWMwNmRlYWY5
ZDMyZWNlYzE1YUoxCgdjcmV3X2lkEiYKJGI1NTdkNDliLTkxZTktNDllMy1iNjA4LTUyZTdiMGE1
YzZjM0ocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoU
Y3Jld19udW1iZXJfb2ZfdGFza3MSAhgCShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAJKiAUK
C2NyZXdfYWdlbnRzEvgECvUEW3sia2V5IjogIjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYz
ZDc1IiwgImlkIjogIjM2OTZjN2Q5LTY3MmEtNDZiMy1iZTBjLTMzZjYyNmIxMDBlNyIsICJyb2xl
IjogIlJlc2VhcmNoZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhf
cnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwg
ImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZh
bHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5
YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICJhOTk0ZTY2ZS1hOTkxLTQ0
YTYtODkyMS1hODhkNDNkMjY2YmMiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/
IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxs
aW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFs
c2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIs
ICJ0b29sc19uYW1lcyI6IFtdfV1K7wMKCmNyZXdfdGFza3MS4AMK3QNbeyJrZXkiOiAiYTgwNjE3
MTcyZmZjYjkwZjg5N2MxYThjMzJjMzEwMmEiLCAiaWQiOiAiZjNmMDYxNWItMDg3NS00NWM0LWFm
YmMtYWI1OGQxMGQyZDA0IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQy
MTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXki
OiAiNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYyZGQiLCAiaWQiOiAiNGUwZTEyOTQtZjdi
ZS00OTBhLThiYmUtNjliYjQ5ODc1YTUzIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1
bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgImFnZW50
X2tleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJ0b29sc19uYW1lcyI6
IFtdfV16AhgBhQEAAQAAEo4CChBu6pl3tRo8XQcOz1dOfEiREgi+aKvpuUNN/ioMVGFzayBDcmVh
dGVkMAE5QCRZAsegGhhBKKVZAsegGhhKLgoIY3Jld19rZXkSIgogYWM3ZTc0NTkwNzJjN2VjMDZk
ZWFmOWQzMmVjZWMxNWFKMQoHY3Jld19pZBImCiRiNTU3ZDQ5Yi05MWU5LTQ5ZTMtYjYwOC01MmU3
YjBhNWM2YzNKLgoIdGFza19rZXkSIgogYTgwNjE3MTcyZmZjYjkwZjg5N2MxYThjMzJjMzEwMmFK
MQoHdGFza19pZBImCiRmM2YwNjE1Yi0wODc1LTQ1YzQtYWZiYy1hYjU4ZDEwZDJkMDR6AhgBhQEA
AQAAEo4CChBNL9q8o7PtXvaR6poXIlx6EggIBAybRwvpyCoMVGFzayBDcmVhdGVkMAE5qP2oAseg
GhhB6JmpAsegGhhKLgoIY3Jld19rZXkSIgogYWM3ZTc0NTkwNzJjN2VjMDZkZWFmOWQzMmVjZWMx
NWFKMQoHY3Jld19pZBImCiRiNTU3ZDQ5Yi05MWU5LTQ5ZTMtYjYwOC01MmU3YjBhNWM2YzNKLgoI
dGFza19rZXkSIgogNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYyZGRKMQoHdGFza19pZBIm
CiQ0ZTBlMTI5NC1mN2JlLTQ5MGEtOGJiZS02OWJiNDk4NzVhNTN6AhgBhQEAAQAAEsoLChAxUBRb
Q0xWxbf9ef52QMDSEgihBkurLl3qiSoMQ3JldyBDcmVhdGVkMAE5eE9hBcegGhhBCIVpBcegGhhK
GgoOY3Jld2FpX3ZlcnNpb24SCAoGMC45NS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ou
CghjcmV3X2tleRIiCiBhYzdlNzQ1OTA3MmM3ZWMwNmRlYWY5ZDMyZWNlYzE1YUoxCgdjcmV3X2lk
EiYKJGU1YmYwYTFjLTg2YjctNDhkZC04YzJlLTdjMThhZTZhODJhZUocCgxjcmV3X3Byb2Nlc3MS
DAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MS
AhgCShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAJKiAUKC2NyZXdfYWdlbnRzEvgECvUEW3si
a2V5IjogIjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgImlkIjogIjM2OTZjN2Q5
LTY3MmEtNDZiMy1iZTBjLTMzZjYyNmIxMDBlNyIsICJyb2xlIjogIlJlc2VhcmNoZXIiLCAidmVy
Ym9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9u
X2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5hYmxlZD8i
OiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0
IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4
MThiYTQ0NmFmNyIsICJpZCI6ICJhOTk0ZTY2ZS1hOTkxLTQ0YTYtODkyMS1hODhkNDNkMjY2YmMi
LCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6
IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjog
ImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
7wMKCmNyZXdfdGFza3MS4AMK3QNbeyJrZXkiOiAiYTgwNjE3MTcyZmZjYjkwZjg5N2MxYThjMzJj
MzEwMmEiLCAiaWQiOiAiMDJlMTk1ODMtZmY3OS00N2YzLThkNDMtNWJhMGY4NmYxOTllIiwgImFz
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
ZSI6ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDlj
NDU2M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiNWZhNjVjMDZhOWUzMWYyYzY5
NTQzMjY2OGFjZDYyZGQiLCAiaWQiOiAiY2ViMjZhOTUtODc5ZS00OGFmLTg2MmItNzAyZmIyODA3
MzM5IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAi
YWdlbnRfcm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgImFnZW50X2tleSI6ICI5YTUwMTVlZjQ4OTVk
YzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChD9
XNrHzMkqfERO3pxva7qVEgi+KDMFQWeCXioMVGFzayBDcmVhdGVkMAE5KHl4BcegGhhBKPZ4Bceg
GhhKLgoIY3Jld19rZXkSIgogYWM3ZTc0NTkwNzJjN2VjMDZkZWFmOWQzMmVjZWMxNWFKMQoHY3Jl
d19pZBImCiRlNWJmMGExYy04NmI3LTQ4ZGQtOGMyZS03YzE4YWU2YTgyYWVKLgoIdGFza19rZXkS
IgogYTgwNjE3MTcyZmZjYjkwZjg5N2MxYThjMzJjMzEwMmFKMQoHdGFza19pZBImCiQwMmUxOTU4
My1mZjc5LTQ3ZjMtOGQ0My01YmEwZjg2ZjE5OWV6AhgBhQEAAQAAEsoLChBy2/tEpjdjZeT9McCa
zn1ZEghPIBt/a/+PUyoMQ3JldyBDcmVhdGVkMAE5ABE/BsegGhhB+PlJBsegGhhKGgoOY3Jld2Fp
X3ZlcnNpb24SCAoGMC45NS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tl
eRIiCiBkMjdkNDVhZDlkYTE1ODU0MzI1YjBhZjNiMGZiYzMyYkoxCgdjcmV3X2lkEiYKJGM4OGMx
ZDc1LWZlN2QtNDQwMi04N2QwLWFkYzQ3MWFiMWI3YUocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVu
dGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgCShsKFWNy
ZXdfbnVtYmVyX29mX2FnZW50cxICGAJKiAUKC2NyZXdfYWdlbnRzEvgECvUEW3sia2V5IjogIjhi
ZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgImlkIjogIjM2OTZjN2Q5LTY3MmEtNDZi
My1iZTBjLTMzZjYyNmIxMDBlNyIsICJyb2xlIjogIlJlc2VhcmNoZXIiLCAidmVyYm9zZT8iOiBm
YWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdf
bGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwg
ImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRv
b2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFm
NyIsICJpZCI6ICJhOTk0ZTY2ZS1hOTkxLTQ0YTYtODkyMS1hODhkNDNkMjY2YmMiLCAicm9sZSI6
ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4
X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00byIs
ICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBm
YWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K7wMKCmNyZXdf
dGFza3MS4AMK3QNbeyJrZXkiOiAiODE2ZTllYmM2OWRiNjdjNjhiYjRmM2VhNjVjY2RhNTgiLCAi
aWQiOiAiZDM1YjllMjUtODE1MC00ODQ0LWFhMTctYzk0MTRhMDE2NjcyIiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJSZXNl
YXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIs
ICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFj
ZDYyZGQiLCAiaWQiOiAiYjIwMjdlZWUtYjNjYi00MGMxLWI1NDEtNmY0ZTA5ZGRhNTU5IiwgImFz
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
ZSI6ICJTZW5pb3IgV3JpdGVyIiwgImFnZW50X2tleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4
MThiYTQ0NmFmNyIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEsoLChD//jBA0L4Z7qgQ
5xomV5+TEgjd+k4M+YdqbCoMQ3JldyBDcmVhdGVkMAE5uAq/BsegGhhB6EPJBsegGhhKGgoOY3Jl
d2FpX3ZlcnNpb24SCAoGMC45NS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3
X2tleRIiCiBkMjdkNDVhZDlkYTE1ODU0MzI1YjBhZjNiMGZiYzMyYkoxCgdjcmV3X2lkEiYKJGY3
OTg0ZWVlLWZjMGItNGFjYy1iNWE3LWExYjgwMWU0NGM1MEocCgxjcmV3X3Byb2Nlc3MSDAoKc2Vx
dWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgCShsK
FWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAJKiAUKC2NyZXdfYWdlbnRzEvgECvUEW3sia2V5Ijog
IjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgImlkIjogIjM2OTZjN2Q5LTY3MmEt
NDZiMy1iZTBjLTMzZjYyNmIxMDBlNyIsICJyb2xlIjogIlJlc2VhcmNoZXIiLCAidmVyYm9zZT8i
OiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxp
bmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxz
ZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwg
InRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0
NmFmNyIsICJpZCI6ICJhOTk0ZTY2ZS1hOTkxLTQ0YTYtODkyMS1hODhkNDNkMjY2YmMiLCAicm9s
ZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAi
bWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00
byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8i
OiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K7wMKCmNy
ZXdfdGFza3MS4AMK3QNbeyJrZXkiOiAiODE2ZTllYmM2OWRiNjdjNjhiYjRmM2VhNjVjY2RhNTgi
LCAiaWQiOiAiOTcxMDdmNTUtY2U2Yi00NWI4LWI4Y2QtZjhjNmIyOGI1YjI5IiwgImFzeW5jX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJS
ZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3
NSIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2
OGFjZDYyZGQiLCAiaWQiOiAiNzZlMTYxMDEtNTY3ZC00YmVlLTg3MGQtNjlkNjUzNWUxM2Y0Iiwg
ImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRf
cm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgImFnZW50X2tleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhk
NTQ4MThiYTQ0NmFmNyIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEv4BChBUyY/ccsE1
R24CGyVtHLqZEgiwrBqbcxAHeCoTQ3JldyBUZXN0IEV4ZWN1dGlvbjABOSiyJAfHoBoYQZiNLgfH
oBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEouCghjcmV3X2tleRIiCiAzOTQ5M2UxNjE2
MzRhOWVjNGRjNGUzOTdhOTc2OTU3MkoxCgdjcmV3X2lkEiYKJGUwZWJlYWE2LTFjMmItNGMxZi1i
MzY1LTE4YmNmMjZhOGIwNkoRCgppdGVyYXRpb25zEgMKATJKGwoKbW9kZWxfbmFtZRINCgtncHQt
NG8tbWluaXoCGAGFAQABAAASuAkKEPPNALYHa18lwaRtQDvBnDESCJJZx6P/4qPDKgxDcmV3IENy
ZWF0ZWQwATnIzZ8Hx6AaGEFIWagHx6AaGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjk1LjBKGgoO
cHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIGUzZmRhMGYzMTEwZmU4MGIx
ODk0N2MwMTQ3MTQzMGE0SjEKB2NyZXdfaWQSJgokMTBhYzc4ODQtOTA2ZC00YTg0LWIxMTYtMWMx
MTg5NDg3OTc3Sh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVyYXJjaGljYWxKEQoLY3Jld19tZW1vcnkS
AhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMS
AhgCSogFCgtjcmV3X2FnZW50cxL4BAr1BFt7ImtleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFm
ZDljNDU2M2Q3NSIsICJpZCI6ICIzNjk2YzdkOS02NzJhLTQ2YjMtYmUwYy0zM2Y2MjZiMTAwZTci
LCAicm9sZSI6ICJSZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIw
LCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdw
dC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlv
bj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJr
ZXkiOiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiYTk5NGU2NmUt
YTk5MS00NGE2LTg5MjEtYTg4ZDQzZDI2NmJjIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2
ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rp
b25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVk
PyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGlt
aXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dStsBCgpjcmV3X3Rhc2tzEswBCskBW3sia2V5Ijog
IjVmYTY1YzA2YTllMzFmMmM2OTU0MzI2NjhhY2Q2MmRkIiwgImlkIjogIjYzYmEzZTVmLWNlOWIt
NDE4Zi04NGNmLWJjOWNlYjUwYTMwNyIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1h
bl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiTm9uZSIsICJhZ2VudF9rZXkiOiBudWxs
LCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQlnr9jeEDn0IZusmEkE/xBxIIbyk0
sNkOWxwqDFRhc2sgQ3JlYXRlZDABOdAdygfHoBoYQQCTygfHoBoYSi4KCGNyZXdfa2V5EiIKIGUz
ZmRhMGYzMTEwZmU4MGIxODk0N2MwMTQ3MTQzMGE0SjEKB2NyZXdfaWQSJgokMTBhYzc4ODQtOTA2
ZC00YTg0LWIxMTYtMWMxMTg5NDg3OTc3Si4KCHRhc2tfa2V5EiIKIDVmYTY1YzA2YTllMzFmMmM2
OTU0MzI2NjhhY2Q2MmRkSjEKB3Rhc2tfaWQSJgokNjNiYTNlNWYtY2U5Yi00MThmLTg0Y2YtYmM5
Y2ViNTBhMzA3egIYAYUBAAEAABKcAQoQbJPP7Nx3r3ewgPHdeJybDBIIlUb3D4pi3dkqClRvb2wg
VXNhZ2UwATmonCAKx6AaGEEgUykKx6AaGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjk1LjBKKAoJ
dG9vbF9uYW1lEhsKGURlbGVnYXRlIHdvcmsgdG8gY293b3JrZXJKDgoIYXR0ZW1wdHMSAhgBegIY
AYUBAAEAABKcAQoQ1SSOOcoVWGrQIs6azsmxmBIIGSOj86a7GPsqClRvb2wgVXNhZ2UwATmA8e4O
x6AaGEGo3vcOx6AaGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjk1LjBKKAoJdG9vbF9uYW1lEhsK
GURlbGVnYXRlIHdvcmsgdG8gY293b3JrZXJKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAABK4CQoQ
EQHO/mvzkyYWgZwwn+Rc5BIIv4Hy3+pCFpYqDENyZXcgQ3JlYXRlZDABOTgFvg/HoBoYQfi1xQ/H
oBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEy
LjdKLgoIY3Jld19rZXkSIgogZTNmZGEwZjMxMTBmZTgwYjE4OTQ3YzAxNDcxNDMwYTRKMQoHY3Jl
d19pZBImCiQxYTNiYWYyMi04ZDA3LTRiOTctOGM4Ni1kMmM0NDNlYTZkZjdKHgoMY3Jld19wcm9j
ZXNzEg4KDGhpZXJhcmNoaWNhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2Zf
dGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAJKiAUKC2NyZXdfYWdlbnRzEvgE
CvUEW3sia2V5IjogIjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgImlkIjogIjM2
OTZjN2Q5LTY3MmEtNDZiMy1iZTBjLTMzZjYyNmIxMDBlNyIsICJyb2xlIjogIlJlc2VhcmNoZXIi
LCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1
bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5h
YmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5
X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5YTUwMTVlZjQ4OTVkYzYy
NzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICJhOTk0ZTY2ZS1hOTkxLTQ0YTYtODkyMS1hODhkNDNk
MjY2YmMiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhf
aXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAi
bGxtIjogImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2Rl
X2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6
IFtdfV1K2wEKCmNyZXdfdGFza3MSzAEKyQFbeyJrZXkiOiAiNWZhNjVjMDZhOWUzMWYyYzY5NTQz
MjY2OGFjZDYyZGQiLCAiaWQiOiAiZWYxYjNhN2MtOTMxYi00MjRjLTkxMzQtZDY1OTM1N2I3ODNi
IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdl
bnRfcm9sZSI6ICJOb25lIiwgImFnZW50X2tleSI6IG51bGwsICJ0b29sc19uYW1lcyI6IFtdfV16
AhgBhQEAAQAAEo4CChBZkLAu5xnAQh/ILJnU7h1REggAGIt5Pa4D3ioMVGFzayBDcmVhdGVkMAE5
AMXlD8egGhhBwCLmD8egGhhKLgoIY3Jld19rZXkSIgogZTNmZGEwZjMxMTBmZTgwYjE4OTQ3YzAx
NDcxNDMwYTRKMQoHY3Jld19pZBImCiQxYTNiYWYyMi04ZDA3LTRiOTctOGM4Ni1kMmM0NDNlYTZk
ZjdKLgoIdGFza19rZXkSIgogNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYyZGRKMQoHdGFz
a19pZBImCiRlZjFiM2E3Yy05MzFiLTQyNGMtOTEzNC1kNjU5MzU3Yjc4M2J6AhgBhQEAAQAAEpwB
ChBl/QzggjWFEfDigYrgsKMhEgjIhVTOpOyNnioKVG9vbCBVc2FnZTABOWi8pxHHoBoYQYhdrxHH
oBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEooCgl0b29sX25hbWUSGwoZRGVsZWdhdGUg
d29yayB0byBjb3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEpwBChC1Cxzix7ErLK5V
rNWRMj7jEgjEMld4I2kVXCoKVG9vbCBVc2FnZTABOSh2whjHoBoYQSi9yxjHoBoYShoKDmNyZXdh
aV92ZXJzaW9uEggKBjAuOTUuMEooCgl0b29sX25hbWUSGwoZRGVsZWdhdGUgd29yayB0byBjb3dv
cmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEuEJChCh/OOje68hh/B1dkfbmjf/Egje+GUm
CUGqZCoMQ3JldyBDcmVhdGVkMAE5cBtkV8egGhhBcD5zV8egGhhKGgoOY3Jld2FpX3ZlcnNpb24S
CAoGMC45NS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBjYWEx
YWViM2RkNDM2Mzg2NTY4YTVjM2ZlMjEwMWFmNUoxCgdjcmV3X2lkEiYKJDdlZWUxNTA4LWQwNGIt
NDczYy1iZjhmLTJkODgxNGU1MjNhN0ocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtj
cmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVy
X29mX2FnZW50cxICGAJKhAUKC2NyZXdfYWdlbnRzEvQECvEEW3sia2V5IjogIjk3ZjQxN2YzZTFl
MzFjZjBjMTA5Zjc1MjlhYzhmNmJjIiwgImlkIjogIjQwM2ZkM2Q2LTAxNTYtNDIwMS04OGFmLTU0
MjU5YjczNzJkYSIsICJyb2xlIjogIlByb2dyYW1tZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1h
eF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIs
ICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiB0cnVlLCAiYWxsb3dfY29k
ZV9leGVjdXRpb24/IjogdHJ1ZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6
IFtdfSwgeyJrZXkiOiAiOTJhMjRiMGJjY2ZiMGRjMGU0MzlkN2Q1OWJhOWY2ZjMiLCAiaWQiOiAi
YzIxMTQ4ZmQtOGU3NS00NDlhLTg2MmMtNWRiNjQ5Yzc0OTYzIiwgInJvbGUiOiAiQ29kZSBSZXZp
ZXdlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxs
LCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlv
bl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiB0cnVlLCAibWF4X3Jl
dHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUqKAgoKY3Jld190YXNrcxL7AQr4AVt7
ImtleSI6ICI3OWFhMjdkZjc0ZTYyNzllMzRhODg4ODE3NDgxYzQwZiIsICJpZCI6ICI0ZWYzZWEy
OS0xMzNjLTQxNjktODgyMS1jZDI4ZTgxMTYxYmIiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNl
LCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlByb2dyYW1tZXIiLCAiYWdl
bnRfa2V5IjogIjk3ZjQxN2YzZTFlMzFjZjBjMTA5Zjc1MjlhYzhmNmJjIiwgInRvb2xzX25hbWVz
IjogWyJ0ZXN0IHRvb2wiXX1degIYAYUBAAEAABKuBwoQjpMoNMb5Vz8kFm796AmokxIIPavlOS8Y
ZJ0qDENyZXcgQ3JlYXRlZDABOZg1IVjHoBoYQXBfKVjHoBoYShoKDmNyZXdhaV92ZXJzaW9uEggK
BjAuOTUuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogNzczYTg3
NmI1NzkyZGI2OTU1OWZlODJjM2FkMjM1OWZKMQoHY3Jld19pZBImCiQwNDQzNzU1MS0yN2RmLTQ3
YTQtOTliNS1iOWNkYmYxMDFhNjZKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jl
d19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9v
Zl9hZ2VudHMSAhgBStQCCgtjcmV3X2FnZW50cxLEAgrBAlt7ImtleSI6ICIwNzdjN2E4NjdlMjBk
MGE2OGI5NzRlNDc2MDcxMDlmMyIsICJpZCI6ICIzMDMzZmZkYy03YjI0LTRmMDgtYmNmZS1iYzQz
NzhkM2U5NjAiLCAicm9sZSI6ICJNdWx0aW1vZGFsIEFuYWx5c3QiLCAidmVyYm9zZT8iOiBmYWxz
ZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxt
IjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFs
bG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xz
X25hbWVzIjogW119XUqHAgoKY3Jld190YXNrcxL4AQr1AVt7ImtleSI6ICJjNzUzYzY4MDYzNTk0
MzZhNTg5NmZlYzA5YmFhMTI1ZSIsICJpZCI6ICI3Y2YxYTRkNC0xMmRjLTRjOWUtOWY1Ny0xZjhk
MTc5YmNlZGEiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFs
c2UsICJhZ2VudF9yb2xlIjogIk11bHRpbW9kYWwgQW5hbHlzdCIsICJhZ2VudF9rZXkiOiAiMDc3
YzdhODY3ZTIwZDBhNjhiOTc0ZTQ3NjA3MTA5ZjMiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUB
AAEAABKkBwoQ7zp57STyOlOLCoDVAFh15hIInYYk7J+gZ94qDENyZXcgQ3JlYXRlZDABOYjOfljH
oBoYQZhIhljHoBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEoaCg5weXRob25fdmVyc2lv
bhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogY2Q0ZGE2NGU2ZGMzYjllYmRjYTI0NDRjMWQ3MzAy
ODFKMQoHY3Jld19pZBImCiQ1OTlmMjViNS0xMTgzLTQ2OTctODNjMy03OWUzZmQ3MmQ0NDlKHAoM
Y3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVt
YmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSs8CCgtjcmV3X2Fn
ZW50cxK/Agq8Alt7ImtleSI6ICJkODUxMDY0YjliNDg0MThhYzI1ZjhkMzdjN2UzMmJiNiIsICJp
ZCI6ICJiY2I5ZjA4Ny1iMzI2LTRmYTQtOWJlZS0wMGVjODlmZTEwMzEiLCAicm9sZSI6ICJJbWFn
ZSBBbmFseXN0IiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6
IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00byIsICJkZWxl
Z2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwg
Im1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1KggIKCmNyZXdfdGFza3MS
8wEK8AFbeyJrZXkiOiAiZWU4NzI5Njk0MTBjOTRjMzM0ZjljZmZhMGE0MTVmZWMiLCAiaWQiOiAi
NmFlMDcxYmItMjU4ZS00ZWRkLThhOGItODIxNzU4ZTFhNmRkIiwgImFzeW5jX2V4ZWN1dGlvbj8i
OiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJJbWFnZSBBbmFs
eXN0IiwgImFnZW50X2tleSI6ICJkODUxMDY0YjliNDg0MThhYzI1ZjhkMzdjN2UzMmJiNiIsICJ0
b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEqMHChBetHqqjbX/OlqTuIZkVppxEgirl8FuUewu
TSoMQ3JldyBDcmVhdGVkMAE5aGwoWcegGhhBOCw0WcegGhhKGgoOY3Jld2FpX3ZlcnNpb24SCAoG
MC45NS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBlMzk1Njdi
NTA1MjkwOWNhMzM0MDk4NGI4Mzg5ODBlYUoxCgdjcmV3X2lkEiYKJDA2ZTljN2FjLTEzZDItNGU4
MS1hNzI2LTBlYjIyYzdlNWQ3MEocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3
X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29m
X2FnZW50cxICGAFKzgIKC2NyZXdfYWdlbnRzEr4CCrsCW3sia2V5IjogIjlkYzhjY2UwMzA0Njgx
OTYwNDFiNGMzODBiNjE3Y2IwIiwgImlkIjogImI1ZGZkNmEyLTA1ZWYtNDIzNS1iZDVjLTI3ZTAy
MGExYzk4ZiIsICJyb2xlIjogIkltYWdlIEFuYWx5c3QiLCAidmVyYm9zZT8iOiB0cnVlLCAibWF4
X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwg
ImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29k
ZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMi
OiBbXX1dSoICCgpjcmV3X3Rhc2tzEvMBCvABW3sia2V5IjogImE5YTc2Y2E2OTU3ZDBiZmZhNjll
YWIyMGI2NjQ4MjJiIiwgImlkIjogIjJhMmQ4MDYzLTBkMmQtNDhmZi04NjJhLWNiOGM1NGEyMDYx
NiIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFn
ZW50X3JvbGUiOiAiSW1hZ2UgQW5hbHlzdCIsICJhZ2VudF9rZXkiOiAiOWRjOGNjZTAzMDQ2ODE5
NjA0MWI0YzM4MGI2MTdjYjAiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQj49w
ugM/XFoNkMEnAmaPnRIIcFM/RoDbVhcqDFRhc2sgQ3JlYXRlZDABOViFR1nHoBoYQfgRSFnHoBoY
Si4KCGNyZXdfa2V5EiIKIGUzOTU2N2I1MDUyOTA5Y2EzMzQwOTg0YjgzODk4MGVhSjEKB2NyZXdf
aWQSJgokMDZlOWM3YWMtMTNkMi00ZTgxLWE3MjYtMGViMjJjN2U1ZDcwSi4KCHRhc2tfa2V5EiIK
IGE5YTc2Y2E2OTU3ZDBiZmZhNjllYWIyMGI2NjQ4MjJiSjEKB3Rhc2tfaWQSJgokMmEyZDgwNjMt
MGQyZC00OGZmLTg2MmEtY2I4YzU0YTIwNjE2egIYAYUBAAEAABKXAQoQQgYNvHzrhiz04CrSnkG0
KBII9UsJM/96oEoqClRvb2wgVXNhZ2UwATkQPOFax6AaGEGAmupax6AaGEoaCg5jcmV3YWlfdmVy
c2lvbhIICgYwLjk1LjBKIwoJdG9vbF9uYW1lEhYKFEFkZCBpbWFnZSB0byBjb250ZW50Sg4KCGF0
dGVtcHRzEgIYAXoCGAGFAQABAAASpAcKEL8pSiN4H/umQhWexA4UYzoSCC+JqZKUlDffKgxDcmV3
IENyZWF0ZWQwATnA9r9cx6AaGEGAJMhcx6AaGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjk1LjBK
GgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIDAwYjk0NmJlNDQzNzE0
YjNhNDdjMjAxMDFlYjAyZDY2SjEKB2NyZXdfaWQSJgokZDRhZDMyZTUtM2I1NS00OGQ0LTlmYjMt
ZTVkOTY0ZGI5NzJhShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5
EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRz
EgIYAUrPAgoLY3Jld19hZ2VudHMSvwIKvAJbeyJrZXkiOiAiNGI4YTdiODQwZjk0YmY3ODE4YjVk
NTNmNjg5MjdmZDUiLCAiaWQiOiAiNjdlMDhiZDMtMzA5MS00ZTdhLWE4NjQtYTUyOGQ4ZDZlN2Y4
IiwgInJvbGUiOiAiUmVwb3J0IFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIi
OiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6
ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVj
dXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1d
SoICCgpjcmV3X3Rhc2tzEvMBCvABW3sia2V5IjogImI3MTNjODJmZWI5MmM5ZjVjNThiNDBhOTc1
NTZiN2FjIiwgImlkIjogIjUyZGMwN2ZjLWJjY2ItNDI4Mi1hZjllLWUyYTkxY2ViMzI0MCIsICJh
c3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3Jv
bGUiOiAiUmVwb3J0IFdyaXRlciIsICJhZ2VudF9rZXkiOiAiNGI4YTdiODQwZjk0YmY3ODE4YjVk
NTNmNjg5MjdmZDUiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQFiOJNSnPbaBo
fje7Tx2DdBIIwjGhGgyR5BkqDFRhc2sgQ3JlYXRlZDABOaAq1FzHoBoYQah81FzHoBoYSi4KCGNy
ZXdfa2V5EiIKIDAwYjk0NmJlNDQzNzE0YjNhNDdjMjAxMDFlYjAyZDY2SjEKB2NyZXdfaWQSJgok
ZDRhZDMyZTUtM2I1NS00OGQ0LTlmYjMtZTVkOTY0ZGI5NzJhSi4KCHRhc2tfa2V5EiIKIGI3MTNj
ODJmZWI5MmM5ZjVjNThiNDBhOTc1NTZiN2FjSjEKB3Rhc2tfaWQSJgokNTJkYzA3ZmMtYmNjYi00
MjgyLWFmOWUtZTJhOTFjZWIzMjQwegIYAYUBAAEAABKOAgoQt0X92psFBaT0eyn1IxJl0RIIpDY4
j2AlTioqDFRhc2sgQ3JlYXRlZDABOdgnPV/HoBoYQXi0PV/HoBoYSi4KCGNyZXdfa2V5EiIKIDAw
Yjk0NmJlNDQzNzE0YjNhNDdjMjAxMDFlYjAyZDY2SjEKB2NyZXdfaWQSJgokZDRhZDMyZTUtM2I1
NS00OGQ0LTlmYjMtZTVkOTY0ZGI5NzJhSi4KCHRhc2tfa2V5EiIKIGI3MTNjODJmZWI5MmM5ZjVj
NThiNDBhOTc1NTZiN2FjSjEKB3Rhc2tfaWQSJgokNTJkYzA3ZmMtYmNjYi00MjgyLWFmOWUtZTJh
OTFjZWIzMjQwegIYAYUBAAEAABKOAgoQZyIwBbsHH+6dumgTUJNVzxIIMAEwlT69bAwqDFRhc2sg
Q3JlYXRlZDABOeh9u2HHoBoYQfghvGHHoBoYSi4KCGNyZXdfa2V5EiIKIDAwYjk0NmJlNDQzNzE0
YjNhNDdjMjAxMDFlYjAyZDY2SjEKB2NyZXdfaWQSJgokZDRhZDMyZTUtM2I1NS00OGQ0LTlmYjMt
ZTVkOTY0ZGI5NzJhSi4KCHRhc2tfa2V5EiIKIGI3MTNjODJmZWI5MmM5ZjVjNThiNDBhOTc1NTZi
N2FjSjEKB3Rhc2tfaWQSJgokNTJkYzA3ZmMtYmNjYi00MjgyLWFmOWUtZTJhOTFjZWIzMjQwegIY
AYUBAAEAABKOAgoQNmx90haqHtL8tj3Y948aIhIIaiFn4f7x7RAqDFRhc2sgQ3JlYXRlZDABOTgM
nmTHoBoYQZCknmTHoBoYSi4KCGNyZXdfa2V5EiIKIDAwYjk0NmJlNDQzNzE0YjNhNDdjMjAxMDFl
YjAyZDY2SjEKB2NyZXdfaWQSJgokZDRhZDMyZTUtM2I1NS00OGQ0LTlmYjMtZTVkOTY0ZGI5NzJh
Si4KCHRhc2tfa2V5EiIKIGI3MTNjODJmZWI5MmM5ZjVjNThiNDBhOTc1NTZiN2FjSjEKB3Rhc2tf
aWQSJgokNTJkYzA3ZmMtYmNjYi00MjgyLWFmOWUtZTJhOTFjZWIzMjQwegIYAYUBAAEAABKWBwoQ
vt1TslFugf+idjOWhVfl9BIIGjt6tt0AKKkqDENyZXcgQ3JlYXRlZDABOWiz12fHoBoYQZj432fH
oBoYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuOTUuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEy
LjdKLgoIY3Jld19rZXkSIgogZjVkZTY3ZTk5ODUwNTA3NmEyOTM3YjNmZGFhNzc1ZjFKMQoHY3Jl
d19pZBImCiQ2MzJjYTc0MC1mNjg2LTRlNGQtOTBmYy00YjZkYmE5ZjViMGRKHAoMY3Jld19wcm9j
ZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rh
c2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSsgCCgtjcmV3X2FnZW50cxK4Agq1
Alt7ImtleSI6ICI2ZjYzZjNlMzU4M2E0NjJmZjNlNzY2MDcxYzgyMTJhZiIsICJpZCI6ICI1ZTZl
NTMzNy1iZmMzLTRjZmYtODBlZi1hM2U5NDQ4YjBlYTMiLCAicm9sZSI6ICJXcml0ZXIiLCAidmVy
Ym9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9u
X2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5hYmxlZD8i
OiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0
IjogMiwgInRvb2xzX25hbWVzIjogW119XUr7AQoKY3Jld190YXNrcxLsAQrpAVt7ImtleSI6ICIz
ZjMyNzEyMDk2ZmFjYjliNGI2ZWE1NWI3OGViN2M4MCIsICJpZCI6ICI5NDRiZWRmNS0xZjZiLTQw
OWEtOTE4Mi04YzMyZTM0MGZmMzQiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5f
aW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIldyaXRlciIsICJhZ2VudF9rZXkiOiAiNmY2
M2YzZTM1ODNhNDYyZmYzZTc2NjA3MWM4MjEyYWYiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUB
AAEAABKOAgoQ4leDd4+yGvuAxat0Z7g/uhIInjgmW2jrDBIqDFRhc2sgQ3JlYXRlZDABOXCN62fH
oBoYQXjf62fHoBoYSi4KCGNyZXdfa2V5EiIKIGY1ZGU2N2U5OTg1MDUwNzZhMjkzN2IzZmRhYTc3
NWYxSjEKB2NyZXdfaWQSJgokNjMyY2E3NDAtZjY4Ni00ZTRkLTkwZmMtNGI2ZGJhOWY1YjBkSi4K
CHRhc2tfa2V5EiIKIDNmMzI3MTIwOTZmYWNiOWI0YjZlYTU1Yjc4ZWI3YzgwSjEKB3Rhc2tfaWQS
JgokOTQ0YmVkZjUtMWY2Yi00MDlhLTkxODItOGMzMmUzNDBmZjM0egIYAYUBAAEAABKOAgoQ/K3x
az8rHR8RbOPAn3/V0xIIkOxMowIIFUoqDFRhc2sgQ3JlYXRlZDABOUCJ7WfHoBoYQcDH7WfHoBoY
Si4KCGNyZXdfa2V5EiIKIGY1ZGU2N2U5OTg1MDUwNzZhMjkzN2IzZmRhYTc3NWYxSjEKB2NyZXdf
aWQSJgokNjMyY2E3NDAtZjY4Ni00ZTRkLTkwZmMtNGI2ZGJhOWY1YjBkSi4KCHRhc2tfa2V5EiIK
IDNmMzI3MTIwOTZmYWNiOWI0YjZlYTU1Yjc4ZWI3YzgwSjEKB3Rhc2tfaWQSJgokOTQ0YmVkZjUt
MWY2Yi00MDlhLTkxODItOGMzMmUzNDBmZjM0egIYAYUBAAEAABKeBwoQ/q45KvZiCrfu5bu1k3u9
PBII3yPQFsZi+ywqDENyZXcgQ3JlYXRlZDABObA3PWjHoBoYQUDYSGjHoBoYShoKDmNyZXdhaV92
ZXJzaW9uEggKBjAuOTUuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkS
IgogNzc2NTcyNTMwMGY2NjAwYjI5NjExYmI3ZTAyZDU2ZTZKMQoHY3Jld19pZBImCiQ3NDcwMDVh
Yi1lODE0LTQ0YzItOWFlMy1lZTZkYWEzYmMxYjZKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRp
YWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3
X251bWJlcl9vZl9hZ2VudHMSAhgBSswCCgtjcmV3X2FnZW50cxK8Agq5Alt7ImtleSI6ICI3YjMz
ZjY0ZGQwYjFiYTc4NWUwYmE4YmI1YjUyZjI0NiIsICJpZCI6ICI1ZTA0MzczNC02MGU1LTQwZWQt
OGNlNS0wNjQ1MTNmMTkxMzciLCAicm9sZSI6ICJUZXN0IEFnZW50IiwgInZlcmJvc2U/IjogZmFs
c2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xs
bSI6ICIiLCAibGxtIjogImdwdC00byIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJh
bGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29s
c19uYW1lcyI6IFtdfV1K/wEKCmNyZXdfdGFza3MS8AEK7QFbeyJrZXkiOiAiZDg3OTA0ZWU4MmNh
NzVmZWQ1ODY4MTM3ZDRkYzEzNmYiLCAiaWQiOiAiNjdlZmEyZWEtZTQ0Ni00ZWI2LTg5YWMtMzA1
ZDUwZjFkODMwIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZh
bHNlLCAiYWdlbnRfcm9sZSI6ICJUZXN0IEFnZW50IiwgImFnZW50X2tleSI6ICI3YjMzZjY0ZGQw
YjFiYTc4NWUwYmE4YmI1YjUyZjI0NiIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4C
ChAWSoeQUP+DNRqnwCDlpo82Egg4jJLBn5Yi2ioMVGFzayBDcmVhdGVkMAE5+I9WaMegGhhBAOJW
aMegGhhKLgoIY3Jld19rZXkSIgogNzc2NTcyNTMwMGY2NjAwYjI5NjExYmI3ZTAyZDU2ZTZKMQoH
Y3Jld19pZBImCiQ3NDcwMDVhYi1lODE0LTQ0YzItOWFlMy1lZTZkYWEzYmMxYjZKLgoIdGFza19r
ZXkSIgogZDg3OTA0ZWU4MmNhNzVmZWQ1ODY4MTM3ZDRkYzEzNmZKMQoHdGFza19pZBImCiQ2N2Vm
YTJlYS1lNDQ2LTRlYjYtODlhYy0zMDVkNTBmMWQ4MzB6AhgBhQEAAQAA
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test agent
backstory\nYour personal goal is: Test agent goal"},{"role":"user","content":"\nCurrent
Task: Test task description\n\nThis is the expected criteria for your final
answer: Test expected output\nyou MUST return the actual complete content as
the final answer, not a summary.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '32247'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Tue, 14 Jan 2025 17:56:25 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test agent
backstory\nYour personal goal is: Test agent goal\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: Test task description\n\nThis is the expect criteria for your final answer:
Test expected output\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '838'
- '407'
content-type:
- application/json
cookie:
- _cfuvid=SlnUP7AT9jJlQiN.Fm1c7MDyo78_hBRAz8PoabvHVSU-1736018539826-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-ApfRLkycSd0vwuTw50dfB5bgIoWiC\",\n \"object\"\
: \"chat.completion\",\n \"created\": 1736877387,\n \"model\": \"gpt-4o-2024-08-06\"\
,\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \
\ \"role\": \"assistant\",\n \"content\": \"I now can give a great\
\ answer \\nFinal Answer: The final answer must be the great and the most\
\ complete as possible, it must be outcome described.\",\n \"refusal\"\
: null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\
\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 158,\n \"completion_tokens\"\
: 31,\n \"total_tokens\": 189,\n \"prompt_tokens_details\": {\n \
\ \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\"\
: {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"\
accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n\
\ }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\":\
\ \"fp_50cad350e4\"\n}\n"
string: "{\n \"id\": \"chatcmpl-DIjv3LqL0QS4iw3OM5b28B4VOMZPA\",\n \"object\":
\"chat.completion\",\n \"created\": 1773358789,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Test expected output\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
72,\n \"completion_tokens\": 3,\n \"total_tokens\": 75,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_5e793402c9\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 901f80a64cc6bd25-ATL
CF-Ray:
- 9db6a3f31e087b0e-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 14 Jan 2025 17:56:28 GMT
- Thu, 12 Mar 2026 23:39:50 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=A.PJUaUHPGyIr2pwNz44ei0seKXMH7czqXc5dA_MzD0-1736877388-1.0.1.1-jC2Lo7dl92z6qdY8mxRekSqg68TqMNsvyjPoNVXBfKNO6hHwL5BKWSBeA2i9hYWN2DBBLvHWeFXq1nXCKNcnlQ;
path=/; expires=Tue, 14-Jan-25 18:26:28 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=kERLxnulwhkdPi_RxnQLZV8G2Zbub8n_KYkKSL6uke8-1736877388108-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '1020'
- '360'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '30000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '9999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '29999807'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 6ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_4ceac9bc8ae57f631959b91d2ab63c4d
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test agent
backstory\nYour personal goal is: Test agent goal\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: Test task description\n\nThis is the expected criteria for your final
answer: Test expected output\nyou MUST return the actual complete content as
the final answer, not a summary.\n\nBegin! This is VERY important to you, use
the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '840'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-BExKOliqPgvHyozZaBu5oN50CHtsa\",\n \"object\"\
: \"chat.completion\",\n \"created\": 1742904348,\n \"model\": \"gpt-4o-2024-08-06\"\
,\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \
\ \"role\": \"assistant\",\n \"content\": \"I now can give a great\
\ answer \\nFinal Answer: Test expected output\",\n \"refusal\": null,\n\
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"\
finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\"\
: 158,\n \"completion_tokens\": 15,\n \"total_tokens\": 173,\n \"\
prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\"\
: 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\"\
: 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n\
\ \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\"\
: \"default\",\n \"system_fingerprint\": \"fp_90d33c15d4\"\n}\n"
headers:
CF-RAY:
- 925e4749af02f227-GRU
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 25 Mar 2025 12:05:48 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=VHa7Z7dJYptxXpaMxgldvK6HqIM.m74xpi.80N_EBDc-1742904348-1.0.1.1-VthD2riCSnAprFYhOZxfIrTjT33tybJHpHWB25Q_Hx4vuACCyF00tix6e6eorDReGcW3jb5cUzbGqYi47TrMsS4LYjxBv5eCo7cU9OuFajs;
path=/; expires=Tue, 25-Mar-25 12:35:48 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=Is8fSaH3lU8yHyT3fI7cRZiDqIYSI6sPpzfzvEV8HMc-1742904348760-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '377'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '50000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '49999'
x-ratelimit-remaining-tokens:
- '149999822'
x-ratelimit-reset-requests:
- 1ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_fd6b93e3b1a30868482c72306e7f63c2
- X-REQUEST-ID-XXX
status:
code: 200
message: OK

@@ -45,78 +45,89 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DDFzCiMzYEJMnv9oV3KbMUwH6TGRO\",\n \"object\":
\"chat.completion\",\n \"created\": 1772052086,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DIjlpMNPWid0bFT3tJ0wlsOZelKz7\",\n \"object\":
\"chat.completion\",\n \"created\": 1773358217,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"- **The Rise of Autonomous AI Agents:
Redefining Productivity and Creativity** \\n This article would dive into
how autonomous AI agents\u2014intelligent software systems capable of independently
performing complex tasks\u2014are transforming industries by augmenting human
productivity and creativity. It would explore real-world use cases from automated
content generation and customer support bots to AI-driven design and research
assistants, illustrating how these agents reduce repetitive workload and open
new avenues for innovation. The article could also analyze challenges such
as ethical considerations, decision-making transparency, and integration with
existing workflows, offering readers a comprehensive view of how autonomous
AI agents are reshaping the future of work.\\n\\n- **Bridging Human-AI Collaboration:
Designing AI Agents for Intuitive Interaction** \\n This piece would investigate
the critical design principles behind successful human-AI collaboration, focusing
on building AI agents that communicate and interact naturally with users.
From natural language processing nuances to adaptive learning from user behavior,
the article would examine how these technological advancements create seamless
partnerships between humans and machines. Highlighting case studies in healthcare,
finance, and creative industries, it would demonstrate the importance of trust,
interpretability, and empathy in AI agent interfaces, emphasizing how better-designed
interactions can dramatically improve adoption and effectiveness.\\n\\n- **The
Ethical Frontier: Navigating Bias and Accountability in AI Agents** \\n Exploring
the ethical implications of deploying AI agents at scale, this article would
address pressing issues like algorithmic bias, privacy concerns, and accountability
in autonomous decision-making. It would analyze how biases embedded in training
data can propagate through AI agents, impacting critical outcomes in hiring,
lending, and law enforcement. The article would also discuss emerging regulatory
frameworks, best practices for auditing AI agents, and the role of interdisciplinary
ethics teams in ensuring these technologies are fair, transparent, and responsible,
helping readers grasp the societal responsibilities accompanying AI advancement.\\n\\n-
**AI Agents in Startups: Driving Innovation and Competitive Advantage** \\n
\ Focused on the startup ecosystem, this article would explore how emerging
companies leverage AI agents to disrupt markets and scale rapidly with limited
resources. It would profile startups using AI agents for customer acquisition,
personalized marketing, operational automation, and product development, illustrating
how these tools enable lean teams to achieve much more. The narrative would
consider investment trends, challenges faced by startups incorporating AI
agents, and strategies for balancing innovation with reliability, providing
entrepreneurs and investors with valuable insights into harnessing AI agents
for meaningful growth.\\n\\n- **From Data to Decision: How AI Agents Transform
Business Intelligence** \\n This article would delve into the role of AI
agents as intelligent intermediaries in business intelligence (BI) systems,
automating data analysis and delivering actionable insights in real-time.
It would explain how AI agents can parse vast datasets, identify trends, generate
forecasts, and even suggest strategic decisions without constant human oversight.
Highlighting innovations like conversational BI interfaces and predictive
analytics agents, the article would underscore how businesses of all sizes
can democratize data-driven decision-making, driving agility and competitive
advantage in increasingly complex markets.\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 164,\n \"completion_tokens\":
597,\n \"total_tokens\": 761,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"assistant\",\n \"content\": \"- **The Future of Autonomous AI Agents:
From Task Automation to Decision Making** \\nExploring the evolving landscape
of autonomous AI agents reveals a fascinating journey from simple task automation
to complex decision-making systems capable of self-learning and adaptation.
An article diving into this topic could unravel how cutting-edge advancements
in reinforcement learning, natural language processing, and multi-agent systems
are propelling AI agents beyond rigid scripts into dynamic collaborators and
problem solvers. It would offer readers insights into real-world applications\u2014such
as autonomous drones, financial trading bots, and personalized digital assistants\u2014and
speculate on ethical considerations and regulatory frameworks shaping their
future. This exploration emphasizes the transformative potential and the challenges
that autonomous AI agents pose to industries and society at large.\\n\\n-
**Bridging the Gap Between AI Agents and Human Collaboration** \\nAI agents
are no longer isolated tools but increasingly integral collaborators in creative
and professional workflows. An article tackling this theme could examine the
latest progress in human-AI interaction models, including explainability,
adaptability, and collaborative problem-solving. It would highlight how AI
agents are augmenting human capabilities in fields like healthcare diagnostics,
content creation, customer service, and software development. The narrative
could also include case studies demonstrating successful AI-human partnerships
along with the psychological and ergonomic aspects critical to designing AI
agents that work harmoniously with humans. Such a piece would resonate deeply
with readers interested in the symbiosis between artificial intelligence and
human ingenuity.\\n\\n- **The Rise of AI Agents in Cybersecurity: Defense
and Offense** \\nIn cybersecurity, AI agents are becoming indispensable,
not only in defensive roles but also on the offensive front. An article focused
on this area could deliver a comprehensive analysis of how AI agents detect
and respond to threats in real time, employing techniques like anomaly detection,
behavioral analysis, and automated incident response. Additionally, it would
delve into the darker side: the use of AI agents by malicious actors for sophisticated
cyber-attacks, including adaptive malware and social engineering bots. This
dual perspective could provide a thrilling and nuanced investigation of the
cybersecurity landscape dominated by AI, shedding light on strategic innovations,
emerging threats, and the ongoing arms race between attackers and defenders.\\n\\n-
**AI Agents in Startups: Revolutionizing Business Models and Customer Experience**
\ \\nStartups are leveraging AI agents as a catalyst for innovation, scalability,
and personalization, fundamentally transforming traditional business models.
An article on this topic could survey real examples where AI agents streamline
operations, enable hyper-personalized marketing, automate customer support,
and generate actionable business insights. It would analyze how the integration
of AI agents accelerates product-market fit through rapid iteration and data-driven
decision-making. Moreover, the article could explore challenges unique to
startup environments, such as resource constraints, technology adoption hurdles,
and ethical considerations around AI deployment. This comprehensive view would
inspire entrepreneurs and investors alike, spotlighting AI agents as game
changers in the startup ecosystem.\\n\\n- **Ethical and Societal Implications
of Delegating Decisions to AI Agents** \\nAs AI agents increasingly take
on decision-making roles with significant real-world impact, ethical and societal
questions come sharply into focus. An article on this theme could dissect
challenges concerning accountability, transparency, bias, and human autonomy.
It would provide an in-depth treatment of case studies where AI agents\u2019
decisions led to unintended consequences or public backlash, and the regulatory
and design frameworks proposed to mitigate these risks. Furthermore, the article
could explore philosophical questions about trust, control, and the future
relationship between humans and AI decision-makers. By unpacking the complex
moral landscape surrounding AI agents, this piece would offer critical insight
for policymakers, developers, and society at large.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
164,\n \"completion_tokens\": 720,\n \"total_tokens\": 884,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_417e90869b\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_ae0f8c9a7b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db695fa4bf9b911-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Wed, 25 Feb 2026 20:41:39 GMT
- Thu, 12 Mar 2026 23:30:27 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -129,12 +140,10 @@ interactions:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '13437'
- '9053'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -176,68 +185,75 @@ interactions:
list of ideas with their paragraph and your notes. task_expected_output: 5 bullet
points with a paragraph for each idea. agent: Researcher agent_goal: Make the
best research and analysis on content about AI and AI agents Task Output: -
**The Rise of Autonomous AI Agents: Redefining Productivity and Creativity**
\ \\n This article would dive into how autonomous AI agents\u2014intelligent
software systems capable of independently performing complex tasks\u2014are
transforming industries by augmenting human productivity and creativity. It
would explore real-world use cases from automated content generation and customer
support bots to AI-driven design and research assistants, illustrating how these
agents reduce repetitive workload and open new avenues for innovation. The article
could also analyze challenges such as ethical considerations, decision-making
transparency, and integration with existing workflows, offering readers a comprehensive
view of how autonomous AI agents are reshaping the future of work.\\n\\n- **Bridging
Human-AI Collaboration: Designing AI Agents for Intuitive Interaction** \\n
\ This piece would investigate the critical design principles behind successful
human-AI collaboration, focusing on building AI agents that communicate and
interact naturally with users. From natural language processing nuances to adaptive
learning from user behavior, the article would examine how these technological
advancements create seamless partnerships between humans and machines. Highlighting
case studies in healthcare, finance, and creative industries, it would demonstrate
the importance of trust, interpretability, and empathy in AI agent interfaces,
emphasizing how better-designed interactions can dramatically improve adoption
and effectiveness.\\n\\n- **The Ethical Frontier: Navigating Bias and Accountability
in AI Agents** \\n Exploring the ethical implications of deploying AI agents
at scale, this article would address pressing issues like algorithmic bias,
privacy concerns, and accountability in autonomous decision-making. It would
analyze how biases embedded in training data can propagate through AI agents,
impacting critical outcomes in hiring, lending, and law enforcement. The article
would also discuss emerging regulatory frameworks, best practices for auditing
AI agents, and the role of interdisciplinary ethics teams in ensuring these
technologies are fair, transparent, and responsible, helping readers grasp the
societal responsibilities accompanying AI advancement.\\n\\n- **AI Agents in
Startups: Driving Innovation and Competitive Advantage** \\n Focused on the
startup ecosystem, this article would explore how emerging companies leverage
AI agents to disrupt markets and scale rapidly with limited resources. It would
profile startups using AI agents for customer acquisition, personalized marketing,
operational automation, and product development, illustrating how these tools
enable lean teams to achieve much more. The narrative would consider investment
trends, challenges faced by startups incorporating AI agents, and strategies
for balancing innovation with reliability, providing entrepreneurs and investors
with valuable insights into harnessing AI agents for meaningful growth.\\n\\n-
**From Data to Decision: How AI Agents Transform Business Intelligence** \\n
\ This article would delve into the role of AI agents as intelligent intermediaries
in business intelligence (BI) systems, automating data analysis and delivering
actionable insights in real-time. It would explain how AI agents can parse vast
datasets, identify trends, generate forecasts, and even suggest strategic decisions
without constant human oversight. Highlighting innovations like conversational
BI interfaces and predictive analytics agents, the article would underscore
how businesses of all sizes can democratize data-driven decision-making, driving
agility and competitive advantage in increasingly complex markets.\\n\\nThis
is the expected criteria for your final answer: Evaluation Score from 1 to 10
based on the performance of the agents on the tasks\\nyou MUST return the actual
complete content as the final answer, not a summary.\\nFormat your final answer
according to the following OpenAPI schema: {\\n \\\"properties\\\": {\\n \\\"quality\\\":
{\\n \\\"description\\\": \\\"A score from 1 to 10 evaluating on completion,
quality, and overall performance from the task_description and task_expected_output
to the actual Task Output.\\\",\\n \\\"title\\\": \\\"Quality\\\",\\n \\\"type\\\":
\\\"number\\\"\\n }\\n },\\n \\\"required\\\": [\\n \\\"quality\\\"\\n
\ ],\\n \\\"title\\\": \\\"TaskEvaluationPydanticOutput\\\",\\n \\\"type\\\":
\\\"object\\\",\\n \\\"additionalProperties\\\": false\\n}\\n\\nIMPORTANT:
Preserve the original content exactly as-is. Do NOT rewrite, paraphrase, or
modify the meaning of the content. Only structure it to match the schema format.\\n\\nDo
not include the OpenAPI schema in the final output. Ensure the final output
does not include any code block markers like ```json or ```python.\\n\\nProvide
your complete response:\"}],\"model\":\"gpt-4o-mini\",\"response_format\":{\"type\":\"json_schema\",\"json_schema\":{\"schema\":{\"properties\":{\"quality\":{\"description\":\"A
**The Future of Autonomous AI Agents: From Task Automation to Decision Making**
\ \\nExploring the evolving landscape of autonomous AI agents reveals a fascinating
journey from simple task automation to complex decision-making systems capable
of self-learning and adaptation. An article diving into this topic could unravel
how cutting-edge advancements in reinforcement learning, natural language processing,
and multi-agent systems are propelling AI agents beyond rigid scripts into dynamic
collaborators and problem solvers. It would offer readers insights into real-world
applications\u2014such as autonomous drones, financial trading bots, and personalized
digital assistants\u2014and speculate on ethical considerations and regulatory
frameworks shaping their future. This exploration emphasizes the transformative
potential and the challenges that autonomous AI agents pose to industries and
society at large.\\n\\n- **Bridging the Gap Between AI Agents and Human Collaboration**
\ \\nAI agents are no longer isolated tools but increasingly integral collaborators
in creative and professional workflows. An article tackling this theme could
examine the latest progress in human-AI interaction models, including explainability,
adaptability, and collaborative problem-solving. It would highlight how AI agents
are augmenting human capabilities in fields like healthcare diagnostics, content
creation, customer service, and software development. The narrative could also
include case studies demonstrating successful AI-human partnerships along with
the psychological and ergonomic aspects critical to designing AI agents that
work harmoniously with humans. Such a piece would resonate deeply with readers
interested in the symbiosis between artificial intelligence and human ingenuity.\\n\\n-
**The Rise of AI Agents in Cybersecurity: Defense and Offense** \\nIn cybersecurity,
AI agents are becoming indispensable, not only in defensive roles but also on
the offensive front. An article focused on this area could deliver a comprehensive
analysis of how AI agents detect and respond to threats in real time, employing
techniques like anomaly detection, behavioral analysis, and automated incident
response. Additionally, it would delve into the darker side: the use of AI agents
by malicious actors for sophisticated cyber-attacks, including adaptive malware
and social engineering bots. This dual perspective could provide a thrilling
and nuanced investigation of the cybersecurity landscape dominated by AI, shedding
light on strategic innovations, emerging threats, and the ongoing arms race
between attackers and defenders.\\n\\n- **AI Agents in Startups: Revolutionizing
Business Models and Customer Experience** \\nStartups are leveraging AI agents
as a catalyst for innovation, scalability, and personalization, fundamentally
transforming traditional business models. An article on this topic could survey
real examples where AI agents streamline operations, enable hyper-personalized
marketing, automate customer support, and generate actionable business insights.
It would analyze how the integration of AI agents accelerates product-market
fit through rapid iteration and data-driven decision-making. Moreover, the article
could explore challenges unique to startup environments, such as resource constraints,
technology adoption hurdles, and ethical considerations around AI deployment.
This comprehensive view would inspire entrepreneurs and investors alike, spotlighting
AI agents as game changers in the startup ecosystem.\\n\\n- **Ethical and Societal
Implications of Delegating Decisions to AI Agents** \\nAs AI agents increasingly
take on decision-making roles with significant real-world impact, ethical and
societal questions come sharply into focus. An article on this theme could dissect
challenges concerning accountability, transparency, bias, and human autonomy.
It would provide an in-depth treatment of case studies where AI agents\u2019
decisions led to unintended consequences or public backlash, and the regulatory
and design frameworks proposed to mitigate these risks. Furthermore, the article
could explore philosophical questions about trust, control, and the future relationship
between humans and AI decision-makers. By unpacking the complex moral landscape
surrounding AI agents, this piece would offer critical insight for policymakers,
developers, and society at large.\\n\\nThis is the expected criteria for your
final answer: Evaluation Score from 1 to 10 based on the performance of the
agents on the tasks\\nyou MUST return the actual complete content as the final
answer, not a summary.\\nFormat your final answer according to the following
OpenAPI schema: {\\n \\\"properties\\\": {\\n \\\"quality\\\": {\\n \\\"description\\\":
\\\"A score from 1 to 10 evaluating on completion, quality, and overall performance
from the task_description and task_expected_output to the actual Task Output.\\\",\\n
\ \\\"title\\\": \\\"Quality\\\",\\n \\\"type\\\": \\\"number\\\"\\n
\ }\\n },\\n \\\"required\\\": [\\n \\\"quality\\\"\\n ],\\n \\\"title\\\":
\\\"TaskEvaluationPydanticOutput\\\",\\n \\\"type\\\": \\\"object\\\",\\n \\\"additionalProperties\\\":
false\\n}\\n\\nIMPORTANT: Preserve the original content exactly as-is. Do NOT
rewrite, paraphrase, or modify the meaning of the content. Only structure it
to match the schema format.\\n\\nDo not include the OpenAPI schema in the final
output. Ensure the final output does not include any code block markers like
```json or ```python.\\n\\nProvide your complete response:\"}],\"model\":\"gpt-4o-mini\",\"response_format\":{\"type\":\"json_schema\",\"json_schema\":{\"schema\":{\"properties\":{\"quality\":{\"description\":\"A
score from 1 to 10 evaluating on completion, quality, and overall performance
from the task_description and task_expected_output to the actual Task Output.\",\"title\":\"Quality\",\"type\":\"number\"}},\"required\":[\"quality\"],\"title\":\"TaskEvaluationPydanticOutput\",\"type\":\"object\",\"additionalProperties\":false},\"name\":\"TaskEvaluationPydanticOutput\",\"strict\":true}},\"stream\":false}"
headers:
@@ -252,7 +268,7 @@ interactions:
connection:
- keep-alive
content-length:
- '6502'
- '7184'
content-type:
- application/json
host:
@@ -276,31 +292,33 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DDFzQTBe214rOuf82URXmgkuNj5u4\",\n \"object\":
\"chat.completion\",\n \"created\": 1772052100,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-DIjm0WxDVIL9NNzw98XHHh3cA4Yeh\",\n \"object\":
\"chat.completion\",\n \"created\": 1773358228,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"{\\\"quality\\\":9}\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
1134,\n \"completion_tokens\": 5,\n \"total_tokens\": 1139,\n \"prompt_tokens_details\":
1257,\n \"completion_tokens\": 5,\n \"total_tokens\": 1262,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_bd4be55b21\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_e609550549\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db696379bb48095-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Wed, 25 Feb 2026 20:41:40 GMT
- Thu, 12 Mar 2026 23:30:28 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -313,12 +331,10 @@ interactions:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '241'
- '380'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -356,54 +372,62 @@ interactions:
paragraph and your notes.\\n\\nThis is the expected criteria for your final
answer: 5 bullet points with a paragraph for each idea.\\nyou MUST return the
actual complete content as the final answer, not a summary.\\n\\nProvide your
complete response:\"},{\"role\":\"assistant\",\"content\":\"- **The Rise of
Autonomous AI Agents: Redefining Productivity and Creativity** \\n This article
would dive into how autonomous AI agents\u2014intelligent software systems capable
of independently performing complex tasks\u2014are transforming industries by
augmenting human productivity and creativity. It would explore real-world use
cases from automated content generation and customer support bots to AI-driven
design and research assistants, illustrating how these agents reduce repetitive
workload and open new avenues for innovation. The article could also analyze
challenges such as ethical considerations, decision-making transparency, and
integration with existing workflows, offering readers a comprehensive view of
how autonomous AI agents are reshaping the future of work.\\n\\n- **Bridging
Human-AI Collaboration: Designing AI Agents for Intuitive Interaction** \\n
\ This piece would investigate the critical design principles behind successful
human-AI collaboration, focusing on building AI agents that communicate and
interact naturally with users. From natural language processing nuances to adaptive
learning from user behavior, the article would examine how these technological
advancements create seamless partnerships between humans and machines. Highlighting
case studies in healthcare, finance, and creative industries, it would demonstrate
the importance of trust, interpretability, and empathy in AI agent interfaces,
emphasizing how better-designed interactions can dramatically improve adoption
and effectiveness.\\n\\n- **The Ethical Frontier: Navigating Bias and Accountability
in AI Agents** \\n Exploring the ethical implications of deploying AI agents
at scale, this article would address pressing issues like algorithmic bias,
privacy concerns, and accountability in autonomous decision-making. It would
analyze how biases embedded in training data can propagate through AI agents,
impacting critical outcomes in hiring, lending, and law enforcement. The article
would also discuss emerging regulatory frameworks, best practices for auditing
AI agents, and the role of interdisciplinary ethics teams in ensuring these
technologies are fair, transparent, and responsible, helping readers grasp the
societal responsibilities accompanying AI advancement.\\n\\n- **AI Agents in
Startups: Driving Innovation and Competitive Advantage** \\n Focused on the
startup ecosystem, this article would explore how emerging companies leverage
AI agents to disrupt markets and scale rapidly with limited resources. It would
profile startups using AI agents for customer acquisition, personalized marketing,
operational automation, and product development, illustrating how these tools
enable lean teams to achieve much more. The narrative would consider investment
trends, challenges faced by startups incorporating AI agents, and strategies
for balancing innovation with reliability, providing entrepreneurs and investors
with valuable insights into harnessing AI agents for meaningful growth.\\n\\n-
**From Data to Decision: How AI Agents Transform Business Intelligence** \\n
\ This article would delve into the role of AI agents as intelligent intermediaries
in business intelligence (BI) systems, automating data analysis and delivering
actionable insights in real-time. It would explain how AI agents can parse vast
datasets, identify trends, generate forecasts, and even suggest strategic decisions
without constant human oversight. Highlighting innovations like conversational
BI interfaces and predictive analytics agents, the article would underscore
how businesses of all sizes can democratize data-driven decision-making, driving
agility and competitive advantage in increasingly complex markets.\"},{\"role\":\"system\",\"content\":\"You
complete response:\"},{\"role\":\"assistant\",\"content\":\"- **The Future of
Autonomous AI Agents: From Task Automation to Decision Making** \\nExploring
the evolving landscape of autonomous AI agents reveals a fascinating journey
from simple task automation to complex decision-making systems capable of self-learning
and adaptation. An article diving into this topic could unravel how cutting-edge
advancements in reinforcement learning, natural language processing, and multi-agent
systems are propelling AI agents beyond rigid scripts into dynamic collaborators
and problem solvers. It would offer readers insights into real-world applications\u2014such
as autonomous drones, financial trading bots, and personalized digital assistants\u2014and
speculate on ethical considerations and regulatory frameworks shaping their
future. This exploration emphasizes the transformative potential and the challenges
that autonomous AI agents pose to industries and society at large.\\n\\n- **Bridging
the Gap Between AI Agents and Human Collaboration** \\nAI agents are no longer
isolated tools but increasingly integral collaborators in creative and professional
workflows. An article tackling this theme could examine the latest progress
in human-AI interaction models, including explainability, adaptability, and
collaborative problem-solving. It would highlight how AI agents are augmenting
human capabilities in fields like healthcare diagnostics, content creation,
customer service, and software development. The narrative could also include
case studies demonstrating successful AI-human partnerships along with the psychological
and ergonomic aspects critical to designing AI agents that work harmoniously
with humans. Such a piece would resonate deeply with readers interested in the
symbiosis between artificial intelligence and human ingenuity.\\n\\n- **The
Rise of AI Agents in Cybersecurity: Defense and Offense** \\nIn cybersecurity,
AI agents are becoming indispensable, not only in defensive roles but also on
the offensive front. An article focused on this area could deliver a comprehensive
analysis of how AI agents detect and respond to threats in real time, employing
techniques like anomaly detection, behavioral analysis, and automated incident
response. Additionally, it would delve into the darker side: the use of AI agents
by malicious actors for sophisticated cyber-attacks, including adaptive malware
and social engineering bots. This dual perspective could provide a thrilling
and nuanced investigation of the cybersecurity landscape dominated by AI, shedding
light on strategic innovations, emerging threats, and the ongoing arms race
between attackers and defenders.\\n\\n- **AI Agents in Startups: Revolutionizing
Business Models and Customer Experience** \\nStartups are leveraging AI agents
as a catalyst for innovation, scalability, and personalization, fundamentally
transforming traditional business models. An article on this topic could survey
real examples where AI agents streamline operations, enable hyper-personalized
marketing, automate customer support, and generate actionable business insights.
It would analyze how the integration of AI agents accelerates product-market
fit through rapid iteration and data-driven decision-making. Moreover, the article
could explore challenges unique to startup environments, such as resource constraints,
technology adoption hurdles, and ethical considerations around AI deployment.
This comprehensive view would inspire entrepreneurs and investors alike, spotlighting
AI agents as game changers in the startup ecosystem.\\n\\n- **Ethical and Societal
Implications of Delegating Decisions to AI Agents** \\nAs AI agents increasingly
take on decision-making roles with significant real-world impact, ethical and
societal questions come sharply into focus. An article on this theme could dissect
challenges concerning accountability, transparency, bias, and human autonomy.
It would provide an in-depth treatment of case studies where AI agents\u2019
decisions led to unintended consequences or public backlash, and the regulatory
and design frameworks proposed to mitigate these risks. Furthermore, the article
could explore philosophical questions about trust, control, and the future relationship
between humans and AI decision-makers. By unpacking the complex moral landscape
surrounding AI agents, this piece would offer critical insight for policymakers,
developers, and society at large.\"},{\"role\":\"system\",\"content\":\"You
are Researcher. You're an expert researcher, specialized in technology, software
engineering, AI and startups. You work as a freelancer and is now working on
doing research and analysis for a new customer.\\nYour personal goal is: Make
@@ -427,7 +451,7 @@ interactions:
connection:
- keep-alive
content-length:
- '5482'
- '6164'
content-type:
- application/json
cookie:
@@ -451,88 +475,88 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DDFzR5eiizUskHHVDIqVHcFSGrZxy\",\n \"object\":
\"chat.completion\",\n \"created\": 1772052101,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DIjm02rlgGyxm8CuBCxW7VcLsZ2UH\",\n \"object\":
\"chat.completion\",\n \"created\": 1773358228,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"- **The Evolution of AI Agents: From
Simple Automation to Context-Aware Decision Makers** \\n This article would
chart the remarkable progression of AI agents, tracing their journey from
rule-based automation systems to sophisticated, context-aware entities capable
of reasoning and adapting within complex environments. By exploring advances
in reinforcement learning, natural language understanding, and multi-agent
collaboration, readers would gain insight into how modern AI agents operate
with increasing autonomy. The piece could include cutting-edge research examples
and practical applications, illustrating how this evolution unlocks new possibilities
across industries such as healthcare, logistics, and customer service, truly
showcasing the expanding potential and impact of AI agents in everyday life.\\n\\n-
**Designing Trustworthy AI Agents: Balancing Transparency and Performance**
\ \\n Focusing on the critical issue of trust, this article would explore
the tension between creating AI agents that offer high performance and those
designed to be transparent and explainable to users. It would delve into techniques
like explainable AI (XAI), confidence scoring, and user-centric design principles
that foster trust and accountability. With a mix of theoretical insights and
real-world implementations, the article would highlight how companies tackle
challenges in deploying AI agents responsibly\u2014especially in sensitive
domains like finance, law enforcement, and healthcare\u2014demonstrating how
trustworthiness can become a competitive advantage in AI-driven services.\\n\\n-
**AI Agents as Personal Productivity Assistants: Beyond Scheduling and Reminders**
\ \\n This topic examines how AI agents are evolving from basic virtual assistants
to powerful personal productivity coaches that understand context, anticipate
needs, and proactively manage tasks. The article would investigate advances
in multi-modal understanding, emotional intelligence, and continuous learning
that enable AI agents to provide nuanced support in time management, email
triage, project coordination, and even creative brainstorming. Case studies
from popular platforms and startups would showcase how this new generation
of AI agents is revolutionizing daily workflows for professionals across sectors,
offering readers a forward-looking perspective on the future of personal digital
assistance.\\n\\n- **Collaborative AI Agents in Multi-Agent Systems: Driving
Complex Problem Solving** \\n This article would focus on the growing field
of multi-agent AI systems, where multiple AI agents communicate, negotiate,
and collaborate to solve problems that are too complex for a single agent.
It would highlight research advances in swarm intelligence, decentralized
decision-making, and cooperative game theory, and demonstrate practical applications
ranging from autonomous vehicle fleets to smart grid management and disaster
response coordination. By unpacking these complex interactions, the article
would engage readers with the fascinating dynamics of AI ecosystems and the
promise of collaborative agents to address society\u2019s grand challenges.\\n\\n-
**Startups Building Next-Gen AI Agents: Innovating at the Intersection of
AI and User Experience** \\n Highlighting startups at the forefront of AI
agent technology, this article would provide an in-depth look at how these
ventures blend cutting-edge artificial intelligence with seamless user experiences
to disrupt traditional markets. It would examine how startups harness advances
in natural language processing, reinforcement learning, and personalized modeling
to create AI agents that feel intuitive and human-like, powering applications
in healthcare, education, finance, and customer engagement. The article would
also discuss funding trends, go-to-market strategies, and technological challenges,
offering entrepreneurs, investors, and technologists valuable insights into
what it takes to succeed in the burgeoning AI agent landscape.\\n\\n**Notes:**
\ \\nThese ideas are crafted to cover a broad spectrum of AI agent-related
topics, combining technical depth with real-world relevance. Each paragraph
aims to showcase the potential richness, relevance, and appeal of a full article,
ensuring the content would engage a diverse readership, from AI researchers
and software engineers to startup founders and business leaders interested
in AI innovation.\",\n \"refusal\": null,\n \"annotations\":
\"assistant\",\n \"content\": \"- **How AI Agents are Shaping the Future
of Personalized Learning** \\nAn article exploring how AI agents are revolutionizing
personalized learning could captivate readers by detailing how these intelligent
systems adapt educational content in real time to meet individual learner
needs. By combining adaptive learning algorithms, natural language understanding,
and behavioral analytics, AI agents are transforming classrooms and online
platforms alike, providing personalized feedback, pacing, and even motivation
strategies. This piece would dive into current implementations, such as AI
tutors and automated grading, and forecast the potential for lifelong learning
companions that evolve alongside the learner\u2019s growth. Readers would
gain a comprehensive view of how AI agents can democratize access to education
while enhancing efficacy across diverse learning environments.\\n\\n- **The
Role of AI Agents in Accelerating Drug Discovery and Healthcare Innovation**
\ \\nThis article would provide a compelling exploration of how AI agents
are accelerating drug discovery by automating complex data analysis, predicting
molecular interactions, and optimizing clinical trial design. It would highlight
cutting-edge examples where AI agents collaborate with humans to speed up
identifying promising compounds, reducing costs, and shortening development
cycles. Beyond pharmaceuticals, the article could explore AI agents\u2019
expanding roles in personalized medicine, patient monitoring, and diagnostic
support. With healthcare challenges mounting globally, this topic offers high
relevance and excitement by showcasing how AI agents act as catalysts for
breakthroughs that can save lives and transform medical care.\\n\\n- **Collaborative
AI Agents: The Next Frontier in Software Development and DevOps** \\nAn in-depth
article on collaborative AI agents in software engineering would showcase
how these tools enhance productivity and code quality by automating routine
tasks, catching bugs, and assisting in code reviews. It could examine emerging
AI agents designed for pair programming, continuous integration, deployment
automation, and intelligent testing. By integrating seamlessly into DevOps
pipelines, these AI collaborators reduce human error, speed up delivery cycles,
and enable developers to focus on innovation. The piece would also discuss
challenges like trust, explainability, and maintaining human oversight, appealing
to software engineers and technology leaders eager to understand the practical
implications of AI agents in development workflows.\\n\\n- **AI Agents and
the Evolution of Customer Experience in Digital Businesses** \\nThis article
would explore the transformative role AI agents play in reshaping customer
experience for digital-first businesses. It could cover AI-powered chatbots,
recommendation engines, sentiment analysis tools, and personalized marketing
agents, illustrating how these intelligent systems enhance engagement and
satisfaction while reducing operational costs. By weaving in real-world case
studies and data, the article would demonstrate how AI agents help companies
anticipate customer needs, resolve issues proactively, and create seamless
omnichannel interactions. The narrative could also touch on the balance between
automation and human touch, offering strategic insights for businesses aiming
to harness AI agents without compromising brand loyalty and trust.\\n\\n-
**Ethical Frameworks for the Deployment of Autonomous AI Agents in Society**
\ \\nThis article would address the critical and timely topic of ethical considerations
surrounding the deployment of autonomous AI agents in public and private spheres.
It would systematically analyze issues such as accountability, transparency,
fairness, privacy, and unintended consequences when AI agents make decisions
or act autonomously. The article could feature interviews with ethicists,
policymakers, and researchers, and review current regulatory efforts and standards
shaping AI governance. By unpacking this complex terrain, the article would
provide a thoughtful, multidisciplinary perspective crucial for stakeholders
aiming to responsibly develop and deploy AI agents in ways that align with
societal values and legal frameworks.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 926,\n \"completion_tokens\":
727,\n \"total_tokens\": 1653,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1049,\n \"completion_tokens\":
677,\n \"total_tokens\": 1726,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_417e90869b\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_ae0f8c9a7b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db696410a2db911-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Wed, 25 Feb 2026 20:41:50 GMT
- Thu, 12 Mar 2026 23:30:38 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -545,12 +569,10 @@ interactions:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '9082'
- '9221'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -590,76 +612,74 @@ interactions:
list of ideas with their paragraph and your notes. task_expected_output: 5 bullet
points with a paragraph for each idea. agent: Researcher agent_goal: Make the
best research and analysis on content about AI and AI agents Task Output: -
**The Evolution of AI Agents: From Simple Automation to Context-Aware Decision
Makers** \\n This article would chart the remarkable progression of AI agents,
tracing their journey from rule-based automation systems to sophisticated, context-aware
entities capable of reasoning and adapting within complex environments. By exploring
advances in reinforcement learning, natural language understanding, and multi-agent
collaboration, readers would gain insight into how modern AI agents operate
with increasing autonomy. The piece could include cutting-edge research examples
and practical applications, illustrating how this evolution unlocks new possibilities
across industries such as healthcare, logistics, and customer service, truly
showcasing the expanding potential and impact of AI agents in everyday life.\\n\\n-
**Designing Trustworthy AI Agents: Balancing Transparency and Performance**
\ \\n Focusing on the critical issue of trust, this article would explore the
tension between creating AI agents that offer high performance and those designed
to be transparent and explainable to users. It would delve into techniques like
explainable AI (XAI), confidence scoring, and user-centric design principles
that foster trust and accountability. With a mix of theoretical insights and
real-world implementations, the article would highlight how companies tackle
challenges in deploying AI agents responsibly\u2014especially in sensitive domains
like finance, law enforcement, and healthcare\u2014demonstrating how trustworthiness
can become a competitive advantage in AI-driven services.\\n\\n- **AI Agents
as Personal Productivity Assistants: Beyond Scheduling and Reminders** \\n
\ This topic examines how AI agents are evolving from basic virtual assistants
to powerful personal productivity coaches that understand context, anticipate
needs, and proactively manage tasks. The article would investigate advances
in multi-modal understanding, emotional intelligence, and continuous learning
that enable AI agents to provide nuanced support in time management, email triage,
project coordination, and even creative brainstorming. Case studies from popular
platforms and startups would showcase how this new generation of AI agents is
revolutionizing daily workflows for professionals across sectors, offering readers
a forward-looking perspective on the future of personal digital assistance.\\n\\n-
**Collaborative AI Agents in Multi-Agent Systems: Driving Complex Problem Solving**
\ \\n This article would focus on the growing field of multi-agent AI systems,
where multiple AI agents communicate, negotiate, and collaborate to solve problems
that are too complex for a single agent. It would highlight research advances
in swarm intelligence, decentralized decision-making, and cooperative game theory,
and demonstrate practical applications ranging from autonomous vehicle fleets
to smart grid management and disaster response coordination. By unpacking these
complex interactions, the article would engage readers with the fascinating
dynamics of AI ecosystems and the promise of collaborative agents to address
society\u2019s grand challenges.\\n\\n- **Startups Building Next-Gen AI Agents:
Innovating at the Intersection of AI and User Experience** \\n Highlighting
startups at the forefront of AI agent technology, this article would provide
an in-depth look at how these ventures blend cutting-edge artificial intelligence
with seamless user experiences to disrupt traditional markets. It would examine
how startups harness advances in natural language processing, reinforcement
learning, and personalized modeling to create AI agents that feel intuitive
and human-like, powering applications in healthcare, education, finance, and
customer engagement. The article would also discuss funding trends, go-to-market
strategies, and technological challenges, offering entrepreneurs, investors,
and technologists valuable insights into what it takes to succeed in the burgeoning
AI agent landscape.\\n\\n**Notes:** \\nThese ideas are crafted to cover a broad
spectrum of AI agent-related topics, combining technical depth with real-world
relevance. Each paragraph aims to showcase the potential richness, relevance,
and appeal of a full article, ensuring the content would engage a diverse readership,
from AI researchers and software engineers to startup founders and business
leaders interested in AI innovation.\\n\\nThis is the expected criteria for
your final answer: Evaluation Score from 1 to 10 based on the performance of
the agents on the tasks\\nyou MUST return the actual complete content as the
final answer, not a summary.\\nFormat your final answer according to the following
OpenAPI schema: {\\n \\\"properties\\\": {\\n \\\"quality\\\": {\\n \\\"description\\\":
\\\"A score from 1 to 10 evaluating on completion, quality, and overall performance
from the task_description and task_expected_output to the actual Task Output.\\\",\\n
\ \\\"title\\\": \\\"Quality\\\",\\n \\\"type\\\": \\\"number\\\"\\n
\ }\\n },\\n \\\"required\\\": [\\n \\\"quality\\\"\\n ],\\n \\\"title\\\":
\\\"TaskEvaluationPydanticOutput\\\",\\n \\\"type\\\": \\\"object\\\",\\n \\\"additionalProperties\\\":
false\\n}\\n\\nIMPORTANT: Preserve the original content exactly as-is. Do NOT
rewrite, paraphrase, or modify the meaning of the content. Only structure it
to match the schema format.\\n\\nDo not include the OpenAPI schema in the final
output. Ensure the final output does not include any code block markers like
```json or ```python.\\n\\nProvide your complete response:\"}],\"model\":\"gpt-4o-mini\",\"response_format\":{\"type\":\"json_schema\",\"json_schema\":{\"schema\":{\"properties\":{\"quality\":{\"description\":\"A
**How AI Agents are Shaping the Future of Personalized Learning** \\nAn article
exploring how AI agents are revolutionizing personalized learning could captivate
readers by detailing how these intelligent systems adapt educational content
in real time to meet individual learner needs. By combining adaptive learning
algorithms, natural language understanding, and behavioral analytics, AI agents
are transforming classrooms and online platforms alike, providing personalized
feedback, pacing, and even motivation strategies. This piece would dive into
current implementations, such as AI tutors and automated grading, and forecast
the potential for lifelong learning companions that evolve alongside the learner\u2019s
growth. Readers would gain a comprehensive view of how AI agents can democratize
access to education while enhancing efficacy across diverse learning environments.\\n\\n-
**The Role of AI Agents in Accelerating Drug Discovery and Healthcare Innovation**
\ \\nThis article would provide a compelling exploration of how AI agents are
accelerating drug discovery by automating complex data analysis, predicting
molecular interactions, and optimizing clinical trial design. It would highlight
cutting-edge examples where AI agents collaborate with humans to speed up identifying
promising compounds, reducing costs, and shortening development cycles. Beyond
pharmaceuticals, the article could explore AI agents\u2019 expanding roles in
personalized medicine, patient monitoring, and diagnostic support. With healthcare
challenges mounting globally, this topic offers high relevance and excitement
by showcasing how AI agents act as catalysts for breakthroughs that can save
lives and transform medical care.\\n\\n- **Collaborative AI Agents: The Next
Frontier in Software Development and DevOps** \\nAn in-depth article on collaborative
AI agents in software engineering would showcase how these tools enhance productivity
and code quality by automating routine tasks, catching bugs, and assisting in
code reviews. It could examine emerging AI agents designed for pair programming,
continuous integration, deployment automation, and intelligent testing. By integrating
seamlessly into DevOps pipelines, these AI collaborators reduce human error,
speed up delivery cycles, and enable developers to focus on innovation. The
piece would also discuss challenges like trust, explainability, and maintaining
human oversight, appealing to software engineers and technology leaders eager
to understand the practical implications of AI agents in development workflows.\\n\\n-
**AI Agents and the Evolution of Customer Experience in Digital Businesses**
\ \\nThis article would explore the transformative role AI agents play in reshaping
customer experience for digital-first businesses. It could cover AI-powered
chatbots, recommendation engines, sentiment analysis tools, and personalized
marketing agents, illustrating how these intelligent systems enhance engagement
and satisfaction while reducing operational costs. By weaving in real-world
case studies and data, the article would demonstrate how AI agents help companies
anticipate customer needs, resolve issues proactively, and create seamless omnichannel
interactions. The narrative could also touch on the balance between automation
and human touch, offering strategic insights for businesses aiming to harness
AI agents without compromising brand loyalty and trust.\\n\\n- **Ethical Frameworks
for the Deployment of Autonomous AI Agents in Society** \\nThis article would
address the critical and timely topic of ethical considerations surrounding
the deployment of autonomous AI agents in public and private spheres. It would
systematically analyze issues such as accountability, transparency, fairness,
privacy, and unintended consequences when AI agents make decisions or act autonomously.
The article could feature interviews with ethicists, policymakers, and researchers,
and review current regulatory efforts and standards shaping AI governance. By
unpacking this complex terrain, the article would provide a thoughtful, multidisciplinary
perspective crucial for stakeholders aiming to responsibly develop and deploy
AI agents in ways that align with societal values and legal frameworks.\\n\\nThis
is the expected criteria for your final answer: Evaluation Score from 1 to 10
based on the performance of the agents on the tasks\\nyou MUST return the actual
complete content as the final answer, not a summary.\\nFormat your final answer
according to the following OpenAPI schema: {\\n \\\"properties\\\": {\\n \\\"quality\\\":
{\\n \\\"description\\\": \\\"A score from 1 to 10 evaluating on completion,
quality, and overall performance from the task_description and task_expected_output
to the actual Task Output.\\\",\\n \\\"title\\\": \\\"Quality\\\",\\n \\\"type\\\":
\\\"number\\\"\\n }\\n },\\n \\\"required\\\": [\\n \\\"quality\\\"\\n
\ ],\\n \\\"title\\\": \\\"TaskEvaluationPydanticOutput\\\",\\n \\\"type\\\":
\\\"object\\\",\\n \\\"additionalProperties\\\": false\\n}\\n\\nIMPORTANT:
Preserve the original content exactly as-is. Do NOT rewrite, paraphrase, or
modify the meaning of the content. Only structure it to match the schema format.\\n\\nDo
not include the OpenAPI schema in the final output. Ensure the final output
does not include any code block markers like ```json or ```python.\\n\\nProvide
your complete response:\"}],\"model\":\"gpt-4o-mini\",\"response_format\":{\"type\":\"json_schema\",\"json_schema\":{\"schema\":{\"properties\":{\"quality\":{\"description\":\"A
score from 1 to 10 evaluating on completion, quality, and overall performance
from the task_description and task_expected_output to the actual Task Output.\",\"title\":\"Quality\",\"type\":\"number\"}},\"required\":[\"quality\"],\"title\":\"TaskEvaluationPydanticOutput\",\"type\":\"object\",\"additionalProperties\":false},\"name\":\"TaskEvaluationPydanticOutput\",\"strict\":true}},\"stream\":false}"
headers:
@@ -674,7 +694,7 @@ interactions:
connection:
- keep-alive
content-length:
- '7196'
- '7036'
content-type:
- application/json
cookie:
@@ -700,31 +720,33 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DDFzaYq2i96GKjZisy507Xk2rVvjn\",\n \"object\":
\"chat.completion\",\n \"created\": 1772052110,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-DIjmAmRrx8ONo3OaHaBCNKzOa0Mzs\",\n \"object\":
\"chat.completion\",\n \"created\": 1773358238,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"{\\\"quality\\\":9}\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
1264,\n \"completion_tokens\": 5,\n \"total_tokens\": 1269,\n \"prompt_tokens_details\":
1214,\n \"completion_tokens\": 5,\n \"total_tokens\": 1219,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_bd4be55b21\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_1d1f595505\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9db6967da94a10f3-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Wed, 25 Feb 2026 20:41:51 GMT
- Thu, 12 Mar 2026 23:30:39 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -737,12 +759,10 @@ interactions:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '391'
- '374'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:


@@ -1,111 +1,115 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
an expert researcher, specialized in technology, software engineering, AI and
startups. You work as a freelancer and is now working on doing research and
analysis for a new customer.\nYour personal goal is: Make the best research
and analysis on content about AI and AI agents\nTo give my best complete final
answer to the task use the exact following format:\n\nThought: I now can give
a great answer\nFinal Answer: Your final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!"}, {"role": "user", "content": "\nCurrent Task: Look at
the available data and give me a sense on the total number of sales.\n\nThis
is the expect criteria for your final answer: The total number of sales as an
integer\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o"}'
body: '{"messages":[{"role":"system","content":"You are Researcher. You''re an
expert researcher, specialized in technology, software engineering, AI and startups.
You work as a freelancer and is now working on doing research and analysis for
a new customer.\nYour personal goal is: Make the best research and analysis
on content about AI and AI agents"},{"role":"user","content":"\nCurrent Task:
Look at the available data and give me a sense on the total number of sales.\n\nThis
is the expected criteria for your final answer: The total number of sales as
an integer\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1097'
- '704'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-AB7cBo2TPJMkfJCtCzpXOEixI8VrG\",\n \"object\"\
: \"chat.completion\",\n \"created\": 1727214243,\n \"model\": \"gpt-4o-2024-05-13\"\
,\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \
\ \"role\": \"assistant\",\n \"content\": \"Thought: I need to\
\ analyze the available data to determine the total number of sales accurately.\\\
n\\nFinal Answer: The total number of sales is [the exact integer value of\
\ the total sales from the given data].\",\n \"refusal\": null\n \
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n \
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 215,\n \"completion_tokens\"\
: 41,\n \"total_tokens\": 256,\n \"completion_tokens_details\": {\n\
\ \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"\
fp_e375328146\"\n}\n"
string: "{\n \"id\": \"chatcmpl-DIjv4bkJSathhHXsLANGZgGhV3rl7\",\n \"object\":
\"chat.completion\",\n \"created\": 1773358790,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I don\u2019t see any data provided
yet regarding sales figures. Please share the available data or provide additional
details so I can analyze and calculate the total number of sales accurately.\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
130,\n \"completion_tokens\": 34,\n \"total_tokens\": 164,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_828130e5d4\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f4176a8e1cf3-GRU
CF-Ray:
- 9db6a3f7ae211512-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:44:03 GMT
- Thu, 12 Mar 2026 23:39:51 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '906'
- '854'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '30000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '9999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '29999735'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 6ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_06bf7b348d3d142c9cb7cce4d956b8d6
- X-REQUEST-ID-XXX
status:
code: 200
message: OK

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,171 +1,118 @@
interactions:
- request:
body: !!binary |
CoEMCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS2AsKEgoQY3Jld2FpLnRl
bGVtZXRyeRKQAgoQ1lPH3Bis4hD4M7Sez2t96RIIIffb2kCAAqMqDlRhc2sgRXhlY3V0aW9uMAE5
WIEZIiVM+BdBSK51sSZM+BdKLgoIY3Jld19rZXkSIgogM2Y4ZDVjM2FiODgyZDY4NjlkOTNjYjgx
ZjBlMmVkNGFKMQoHY3Jld19pZBImCiQyYjZmY2ZmYS1lNDQ0LTQ4YjYtYWNjNi0xZTVhMDY2OTQ1
NWJKLgoIdGFza19rZXkSIgogOTRhODI2YzE5MzA1NTk2ODZiYWZiNDA5ZWU4Mzg3NmZKMQoHdGFz
a19pZBImCiQxMTU5NmU3OS0yYzllLTQzOWYtYWViMS0xMThhMTI2ZDNiYzN6AhgBhQEAAQAAEp0H
ChBEYWf4sVuYMd8/Oxr4ONAsEghO/cKNNKdq0CoMQ3JldyBDcmVhdGVkMAE5KCBKsyZM+BdByI5P
syZM+BdKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC42MS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMu
MTEuN0ouCghjcmV3X2tleRIiCiBhOWNjNWQ0MzM5NWIyMWIxODFjODBiZDQzNTFjY2VjOEoxCgdj
cmV3X2lkEiYKJDkzNGJkMDZiLTY2ZDktNDE0MC1iZGE3LTQzMDZmNmM3Y2Q0N0ocCgxjcmV3X3By
b2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2Zf
dGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKzAIKC2NyZXdfYWdlbnRzErwC
CrkCW3sia2V5IjogIjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgImlkIjogIjY3
MWMzYzdmLWNjMzUtNGU5MS1hYjgzLWVmZGVjOWU3Y2ZiNyIsICJyb2xlIjogIlJlc2VhcmNoZXIi
LCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMTUsICJtYXhfcnBtIjogbnVsbCwgImZ1
bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRlbGVnYXRpb25fZW5h
YmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5
X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr+AQoKY3Jld190YXNrcxLvAQrsAVt7Imtl
eSI6ICJlOWU2YjcyYWFjMzI2NDU5ZGQ3MDY4ZjBiMTcxN2MxYyIsICJpZCI6ICI4YmFkNTJiZi05
MGM0LTQ0ZDgtYmNlZi0xODBkZTA2MjRiYWYiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IHRydWUsICJo
dW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiUmVzZWFyY2hlciIsICJhZ2VudF9r
ZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1NjNkNzUiLCAidG9vbHNfbmFtZXMiOiBb
XX1degIYAYUBAAEAABKOAgoQduJhIxVspIn9gWgZzmXHrhIILYsCkB2V4ckqDFRhc2sgQ3JlYXRl
ZDABORCOYrMmTPgXQdDrYrMmTPgXSi4KCGNyZXdfa2V5EiIKIGE5Y2M1ZDQzMzk1YjIxYjE4MWM4
MGJkNDM1MWNjZWM4SjEKB2NyZXdfaWQSJgokOTM0YmQwNmItNjZkOS00MTQwLWJkYTctNDMwNmY2
YzdjZDQ3Si4KCHRhc2tfa2V5EiIKIGU5ZTZiNzJhYWMzMjY0NTlkZDcwNjhmMGIxNzE3YzFjSjEK
B3Rhc2tfaWQSJgokOGJhZDUyYmYtOTBjNC00NGQ4LWJjZWYtMTgwZGUwNjI0YmFmegIYAYUBAAEA
AA==
body: '{"messages":[{"role":"system","content":"You are Researcher. You''re an
expert researcher, specialized in technology, software engineering, AI and startups.
You work as a freelancer and is now working on doing research and analysis for
a new customer.\nYour personal goal is: Make the best research and analysis
on content about AI and AI agents"},{"role":"user","content":"\nCurrent Task:
Generate a list of 5 interesting ideas to explore for an article, where each
bulletpoint is under 15 words.\n\nThis is the expected criteria for your final
answer: Bullet point list of 5 important events. No additional commentary.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nProvide
your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1540'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Tue, 24 Sep 2024 21:43:06 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
an expert researcher, specialized in technology, software engineering, AI and
startups. You work as a freelancer and is now working on doing research and
analysis for a new customer.\nYour personal goal is: Make the best research
and analysis on content about AI and AI agents\nTo give my best complete final
answer to the task use the exact following format:\n\nThought: I now can give
a great answer\nFinal Answer: Your final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!"}, {"role": "user", "content": "\nCurrent Task: Generate
a list of 5 interesting ideas to explore for an article, where each bulletpoint
is under 15 words.\n\nThis is the expect criteria for your final answer: Bullet
point list of 5 important events. No additional commentary.\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1155'
- '762'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-AB7bGdQd8mh4zvM4UaLl93hex1Ys3\",\n \"object\"\
: \"chat.completion\",\n \"created\": 1727214186,\n \"model\": \"gpt-4o-2024-05-13\"\
,\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \
\ \"role\": \"assistant\",\n \"content\": \"Thought: I now can\
\ give a great answer.\\nFinal Answer:\\n- Ethical implications of AI in law\
\ enforcement and surveillance.\\n- AI advancements in personalized healthcare\
\ and diagnostics.\\n- Autonomous AI agents in financial market trading.\\\
n- Collaboration between AI and humans in creative arts.\\n- AI-driven climate\
\ modeling and environmental monitoring.\",\n \"refusal\": null\n \
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n \
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 226,\n \"completion_tokens\"\
: 61,\n \"total_tokens\": 287,\n \"completion_tokens_details\": {\n\
\ \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"\
fp_e375328146\"\n}\n"
string: "{\n \"id\": \"chatcmpl-DIkLSp6YfhftRnhYHqjRHZXAI8Sji\",\n \"object\":
\"chat.completion\",\n \"created\": 1773360426,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"- Impact of autonomous AI agents on
future workplace automation \\n- Ethical dilemmas in deploying AI decision-making
systems \\n- Advances in AI-driven personalized learning technologies \\n-
Role of AI in enhancing cybersecurity defense mechanisms \\n- Challenges
in regulating AI innovations across global markets\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
141,\n \"completion_tokens\": 50,\n \"total_tokens\": 191,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_5e793402c9\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f2b7c92f1cf3-GRU
CF-Ray:
- 9db6cbea1bd29d36-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:43:07 GMT
- Fri, 13 Mar 2026 00:07:08 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '939'
- '1357'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '30000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '9999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '29999722'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 6ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_4a6962cfb5b3418a75c19cfc1c2e7227
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
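The paired header values in this re-recorded cassette (a raw value next to a placeholder such as `AUTHORIZATION-XXX` or `X-REQUEST-ID-XXX`) are produced by a header scrubber applied while recording. A minimal sketch of such a vcrpy-style `before_record_response` hook; the function name and the `SENSITIVE_HEADERS` set are illustrative assumptions, not the project's actual recording config:

```python
# Illustrative cassette header scrubber; the sensitive-header list is an
# assumption, not CrewAI's actual VCR configuration.
SENSITIVE_HEADERS = {
    "authorization",
    "set-cookie",
    "x-request-id",
    "openai-organization",
    "openai-project",
}


def scrub_headers(response: dict) -> dict:
    """Replace sensitive header values with NAME-XXX placeholders in place."""
    headers = response.get("headers", {})
    for name in list(headers):
        if name.lower() in SENSITIVE_HEADERS:
            # VCR stores each header as a list of values.
            headers[name] = [f"{name.upper()}-XXX"]
    return response
```

Registered via `vcr.VCR(before_record_response=scrub_headers)`, this leaves non-sensitive headers (rate limits, content type) intact so assertions against them keep working.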


@@ -1121,3 +1121,345 @@ def test_anthropic_cached_prompt_tokens_with_tools():
assert usage.successful_requests == 2
# The second call should have cached prompt tokens
assert usage.cached_prompt_tokens > 0
# ---- Tool Search Tool Tests ----
def test_tool_search_true_injects_bm25_and_defer_loading():
"""tool_search=True should inject bm25 tool search and defer all tools."""
llm = LLM(model="anthropic/claude-sonnet-4-5", tool_search=True)
crewai_tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather for a location",
"parameters": {
"type": "object",
"properties": {"location": {"type": "string"}},
"required": ["location"],
},
},
},
{
"type": "function",
"function": {
"name": "calculator",
"description": "Perform math calculations",
"parameters": {
"type": "object",
"properties": {"expression": {"type": "string"}},
"required": ["expression"],
},
},
},
]
formatted_messages, system_message = llm._format_messages_for_anthropic(
[{"role": "user", "content": "Hello"}]
)
params = llm._prepare_completion_params(
formatted_messages, system_message, crewai_tools
)
tools = params["tools"]
# Should have 3 tools: tool_search + 2 regular
assert len(tools) == 3
# First tool should be the bm25 tool search tool
assert tools[0]["type"] == "tool_search_tool_bm25_20251119"
assert tools[0]["name"] == "tool_search_tool_bm25"
assert "input_schema" not in tools[0]
# All regular tools should have defer_loading=True
for t in tools[1:]:
assert t.get("defer_loading") is True, f"Tool {t['name']} missing defer_loading"
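The behaviour this test pins down can be sketched independently of CrewAI: prepend one tool-search tool and mark every regular tool as deferred. The helper name below is hypothetical, not the library's API:

```python
# Hypothetical sketch of tool-search injection; inject_tool_search is an
# illustrative name, not CrewAI's actual implementation.
def inject_tool_search(tools: list[dict], variant: str = "bm25") -> list[dict]:
    """Prepend a tool-search tool and defer-load the regular tools."""
    if len(tools) < 2:
        # With a single tool there is nothing to search over; leave it as-is.
        return tools
    search_tool = {
        "type": f"tool_search_tool_{variant}_20251119",
        "name": f"tool_search_tool_{variant}",
    }
    deferred = [{**t, "defer_loading": True} for t in tools]
    return [search_tool] + deferred
```

Deferring the regular tools means their schemas are loaded only when the model's search surfaces them, which is what the token-saving VCR test further down measures.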
def test_tool_search_regex_config():
"""tool_search with regex config should use regex variant."""
from crewai.llms.providers.anthropic.completion import AnthropicToolSearchConfig
config = AnthropicToolSearchConfig(type="regex")
llm = LLM(model="anthropic/claude-sonnet-4-5", tool_search=config)
crewai_tools = [
{
"type": "function",
"function": {
"name": "tool_a",
"description": "First tool",
"parameters": {
"type": "object",
"properties": {"q": {"type": "string"}},
"required": ["q"],
},
},
},
{
"type": "function",
"function": {
"name": "tool_b",
"description": "Second tool",
"parameters": {
"type": "object",
"properties": {"q": {"type": "string"}},
"required": ["q"],
},
},
},
]
formatted_messages, system_message = llm._format_messages_for_anthropic(
[{"role": "user", "content": "Hello"}]
)
params = llm._prepare_completion_params(
formatted_messages, system_message, crewai_tools
)
tools = params["tools"]
assert tools[0]["type"] == "tool_search_tool_regex_20251119"
assert tools[0]["name"] == "tool_search_tool_regex"
def test_tool_search_disabled_by_default():
"""tool_search=None (default) should NOT inject anything."""
llm = LLM(model="anthropic/claude-sonnet-4-5")
crewai_tools = [
{
"type": "function",
"function": {
"name": "test_tool",
"description": "A test tool",
"parameters": {
"type": "object",
"properties": {"q": {"type": "string"}},
"required": ["q"],
},
},
},
]
formatted_messages, system_message = llm._format_messages_for_anthropic(
[{"role": "user", "content": "Hello"}]
)
params = llm._prepare_completion_params(
formatted_messages, system_message, crewai_tools
)
tools = params["tools"]
assert len(tools) == 1
for t in tools:
assert t.get("type", "") not in (
"tool_search_tool_bm25_20251119",
"tool_search_tool_regex_20251119",
)
assert "defer_loading" not in t
def test_tool_search_no_duplicate_when_manually_provided():
"""If user passes a tool search tool manually, don't inject a duplicate."""
llm = LLM(model="anthropic/claude-sonnet-4-5", tool_search=True)
# User manually includes a tool search tool
tools_with_search = [
{"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
{
"type": "function",
"function": {
"name": "test_tool",
"description": "A test tool",
"parameters": {
"type": "object",
"properties": {"q": {"type": "string"}},
"required": ["q"],
},
},
},
]
formatted_messages, system_message = llm._format_messages_for_anthropic(
[{"role": "user", "content": "Hello"}]
)
params = llm._prepare_completion_params(
formatted_messages, system_message, tools_with_search
)
tools = params["tools"]
search_tools = [
t for t in tools
if t.get("type", "").startswith("tool_search_tool")
]
# Should only have 1 tool search tool (the user's manual one)
assert len(search_tools) == 1
assert search_tools[0]["type"] == "tool_search_tool_regex_20251119"
def test_tool_search_passthrough_preserves_tool_search_type():
"""_convert_tools_for_interference should pass through tool search tools unchanged."""
llm = LLM(model="anthropic/claude-sonnet-4-5")
tools = [
{"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
{
"name": "get_weather",
"description": "Get weather",
"input_schema": {
"type": "object",
"properties": {"location": {"type": "string"}},
"required": ["location"],
},
},
]
converted = llm._convert_tools_for_interference(tools)
assert len(converted) == 2
# Tool search tool should be passed through exactly
assert converted[0] == {
"type": "tool_search_tool_regex_20251119",
"name": "tool_search_tool_regex",
}
# Regular tool should be preserved
assert converted[1]["name"] == "get_weather"
assert "input_schema" in converted[1]
def test_tool_search_single_tool_skips_search_and_forces_choice():
"""With only 1 tool, tool_search is skipped (nothing to search) and the
normal forced tool_choice optimisation still applies."""
llm = LLM(model="anthropic/claude-sonnet-4-5", tool_search=True)
crewai_tools = [
{
"type": "function",
"function": {
"name": "test_tool",
"description": "A test tool",
"parameters": {
"type": "object",
"properties": {"q": {"type": "string"}},
"required": ["q"],
},
},
},
]
formatted_messages, system_message = llm._format_messages_for_anthropic(
[{"role": "user", "content": "Hello"}]
)
params = llm._prepare_completion_params(
formatted_messages,
system_message,
crewai_tools,
available_functions={"test_tool": lambda q: "result"},
)
# Single tool — tool_search skipped, tool_choice forced as normal
assert "tool_choice" in params
assert params["tool_choice"]["name"] == "test_tool"
# No tool search tool should be injected
tool_types = [t.get("type", "") for t in params["tools"]]
for ts_type in ("tool_search_tool_bm25_20251119", "tool_search_tool_regex_20251119"):
assert ts_type not in tool_types
# No defer_loading on the single tool
assert "defer_loading" not in params["tools"][0]
def test_tool_search_via_llm_class():
"""Verify tool_search param passes through LLM class correctly."""
from crewai.llms.providers.anthropic.completion import (
AnthropicCompletion,
AnthropicToolSearchConfig,
)
# Test with True
llm = LLM(model="anthropic/claude-sonnet-4-5", tool_search=True)
assert isinstance(llm, AnthropicCompletion)
assert llm.tool_search is not None
assert llm.tool_search.type == "bm25"
# Test with config
llm2 = LLM(
model="anthropic/claude-sonnet-4-5",
tool_search=AnthropicToolSearchConfig(type="regex"),
)
assert llm2.tool_search is not None
assert llm2.tool_search.type == "regex"
# Test without (default)
llm3 = LLM(model="anthropic/claude-sonnet-4-5")
assert llm3.tool_search is None
# Many tools shared by the VCR tests below
_MANY_TOOLS = [
{
"name": name,
"description": desc,
"input_schema": {
"type": "object",
"properties": {"input": {"type": "string", "description": f"Input for {name}"}},
"required": ["input"],
},
}
for name, desc in [
("get_weather", "Get current weather conditions for a specified location"),
("search_files", "Search through files in the workspace by name or content"),
("read_database", "Read records from a database table with optional filtering"),
("write_database", "Write or update records in a database table"),
("send_email", "Send an email message to one or more recipients"),
("read_email", "Read emails from inbox with filtering options"),
("create_ticket", "Create a new support ticket in the ticketing system"),
("update_ticket", "Update an existing support ticket status or description"),
("list_users", "List all users in the system with optional filters"),
("get_user_profile", "Get detailed profile information for a specific user"),
("deploy_service", "Deploy a service to the specified environment"),
("rollback_service", "Rollback a service deployment to a previous version"),
("get_service_logs", "Get service logs filtered by time range and severity"),
("run_sql_query", "Run a read-only SQL query against the analytics database"),
("create_dashboard", "Create a new monitoring dashboard with widgets"),
]
]
@pytest.mark.vcr()
def test_tool_search_discovers_and_calls_tool():
"""Tool search should discover the right tool and return a tool_use block."""
llm = LLM(model="anthropic/claude-sonnet-4-5", tool_search=True)
result = llm.call(
"What is the weather in Tokyo?",
tools=_MANY_TOOLS,
)
# Should return tool_use blocks (list) since no available_functions provided
assert isinstance(result, list)
assert len(result) >= 1
# The discovered tool should be get_weather
tool_names = [getattr(block, "name", None) for block in result]
assert "get_weather" in tool_names
@pytest.mark.vcr()
def test_tool_search_saves_input_tokens():
"""Tool search with deferred loading should use fewer input tokens than loading all tools."""
# Call WITHOUT tool search — all 15 tools loaded upfront
llm_no_search = LLM(model="anthropic/claude-sonnet-4-5")
llm_no_search.call("What is the weather in Tokyo?", tools=_MANY_TOOLS)
usage_no_search = llm_no_search.get_token_usage_summary()
# Call WITH tool search — tools deferred
llm_search = LLM(model="anthropic/claude-sonnet-4-5", tool_search=True)
llm_search.call("What is the weather in Tokyo?", tools=_MANY_TOOLS)
usage_search = llm_search.get_token_usage_summary()
# Tool search should use fewer input tokens
assert usage_search.prompt_tokens < usage_no_search.prompt_tokens, (
f"Expected tool_search ({usage_search.prompt_tokens}) to use fewer input tokens "
f"than no search ({usage_no_search.prompt_tokens})"
)


@@ -967,3 +967,211 @@ def test_bedrock_agent_kickoff_structured_output_with_tools():
assert result.pydantic.result == 42, f"Expected result 42 but got {result.pydantic.result}"
assert result.pydantic.operation, "Operation should not be empty"
assert result.pydantic.explanation, "Explanation should not be empty"
def test_bedrock_groups_three_tool_results():
"""Consecutive tool results should be grouped into one Bedrock user message."""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
messages = [
{"role": "user", "content": "Use all three tools, then continue."},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "tool-1",
"type": "function",
"function": {
"name": "lookup_weather",
"arguments": '{"location": "New York"}',
},
},
{
"id": "tool-2",
"type": "function",
"function": {
"name": "lookup_news",
"arguments": '{"topic": "AI"}',
},
},
{
"id": "tool-3",
"type": "function",
"function": {
"name": "lookup_stock",
"arguments": '{"ticker": "AMZN"}',
},
},
],
},
{"role": "tool", "tool_call_id": "tool-1", "content": "72F and sunny"},
{"role": "tool", "tool_call_id": "tool-2", "content": "AI news summary"},
{"role": "tool", "tool_call_id": "tool-3", "content": "AMZN up 1.2%"},
]
formatted_messages, system_message = llm._format_messages_for_converse(messages)
assert system_message is None
assert [message["role"] for message in formatted_messages] == [
"user",
"assistant",
"user",
]
assert len(formatted_messages[1]["content"]) == 3
tool_results = formatted_messages[2]["content"]
assert len(tool_results) == 3
assert [block["toolResult"]["toolUseId"] for block in tool_results] == [
"tool-1",
"tool-2",
"tool-3",
]
assert [block["toolResult"]["content"][0]["text"] for block in tool_results] == [
"72F and sunny",
"AI news summary",
"AMZN up 1.2%",
]
def test_bedrock_parallel_tool_results_grouped():
"""Regression test for issue #4749.
When an assistant message contains multiple parallel tool calls,
Bedrock requires all corresponding tool results to be grouped
in a single user message. Previously each tool result was emitted
as a separate user message, causing:
ValidationException: Expected toolResult blocks at messages.2.content
"""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
messages = [
{"role": "user", "content": "Calculate 25 + 17 AND 10 * 5"},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_add",
"type": "function",
"function": {"name": "add_tool", "arguments": '{"a": 25, "b": 17}'},
},
{
"id": "call_mul",
"type": "function",
"function": {"name": "multiply_tool", "arguments": '{"a": 10, "b": 5}'},
},
],
},
{"role": "tool", "tool_call_id": "call_add", "content": "42"},
{"role": "tool", "tool_call_id": "call_mul", "content": "50"},
]
converse_msgs, system_msg = llm._format_messages_for_converse(messages)
# Find the user message that contains toolResult blocks
tool_result_messages = [
m for m in converse_msgs
if m.get("role") == "user"
and any("toolResult" in b for b in m.get("content", []))
]
# There must be exactly ONE user message with tool results (not two)
assert len(tool_result_messages) == 1, (
f"Expected 1 grouped tool-result message, got {len(tool_result_messages)}. "
"Bedrock requires all parallel tool results in a single user message."
)
# That single message must contain both tool results
tool_results = tool_result_messages[0]["content"]
assert len(tool_results) == 2, (
f"Expected 2 toolResult blocks in grouped message, got {len(tool_results)}"
)
# Verify the tool use IDs match
tool_use_ids = {
block["toolResult"]["toolUseId"] for block in tool_results
}
assert tool_use_ids == {"call_add", "call_mul"}
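The grouping rule this regression test enforces can be sketched as a single pass that appends each consecutive `tool` message to the previous tool-result user message, starting a fresh one whenever any other role intervenes. This is an illustrative simplification (assistant `tool_calls` are not converted to `toolUse` blocks here), not the library's `_format_messages_for_converse`:

```python
# Simplified sketch of grouping OpenAI-style "tool" messages into one
# Bedrock Converse user message; assistant toolUse blocks omitted for brevity.
def group_tool_results(messages: list[dict]) -> list[dict]:
    grouped: list[dict] = []
    for msg in messages:
        if msg.get("role") == "tool":
            block = {
                "toolResult": {
                    "toolUseId": msg["tool_call_id"],
                    "content": [{"text": msg["content"]}],
                }
            }
            last = grouped[-1] if grouped else None
            if last and last["role"] == "user" and any(
                "toolResult" in b for b in last["content"]
            ):
                # Consecutive tool results join the same user message.
                last["content"].append(block)
            else:
                grouped.append({"role": "user", "content": [block]})
        else:
            grouped.append(
                {"role": msg["role"], "content": [{"text": msg.get("content") or ""}]}
            )
    return grouped
```

Because the pass only merges into an immediately preceding tool-result message, results from different assistant turns stay in separate user messages, matching the test below this one.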
def test_bedrock_single_tool_result_still_works():
"""Ensure single tool call still produces a single-block user message."""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
messages = [
{"role": "user", "content": "Add 1 + 2"},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_single",
"type": "function",
"function": {"name": "add_tool", "arguments": '{"a": 1, "b": 2}'},
},
],
},
{"role": "tool", "tool_call_id": "call_single", "content": "3"},
]
converse_msgs, _ = llm._format_messages_for_converse(messages)
tool_result_messages = [
m for m in converse_msgs
if m.get("role") == "user"
and any("toolResult" in b for b in m.get("content", []))
]
assert len(tool_result_messages) == 1
assert len(tool_result_messages[0]["content"]) == 1
assert tool_result_messages[0]["content"][0]["toolResult"]["toolUseId"] == "call_single"
def test_bedrock_tool_results_not_merged_across_assistant_messages():
"""Tool results from different assistant turns must NOT be merged."""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
messages = [
{"role": "user", "content": "First task"},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_a",
"type": "function",
"function": {"name": "tool_a", "arguments": "{}"},
},
],
},
{"role": "tool", "tool_call_id": "call_a", "content": "result_a"},
{"role": "assistant", "content": "Now doing second task"},
{"role": "user", "content": "Second task"},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "call_b",
"type": "function",
"function": {"name": "tool_b", "arguments": "{}"},
},
],
},
{"role": "tool", "tool_call_id": "call_b", "content": "result_b"},
]
converse_msgs, _ = llm._format_messages_for_converse(messages)
tool_result_messages = [
m for m in converse_msgs
if m.get("role") == "user"
and any("toolResult" in b for b in m.get("content", []))
]
# Two separate tool-result messages (one per assistant turn)
assert len(tool_result_messages) == 2, (
"Tool results from different assistant turns must remain separate"
)
assert tool_result_messages[0]["content"][0]["toolResult"]["toolUseId"] == "call_a"
assert tool_result_messages[1]["content"][0]["toolResult"]["toolUseId"] == "call_b"


@@ -268,6 +268,54 @@ class TestGetMCPToolsAmpIntegration:
assert len(tools) == 1
assert tools[0].name == "mcp_notion_so_sse_search"
@patch("crewai.mcp.tool_resolver.MCPClient")
@patch.object(MCPToolResolver, "_fetch_amp_mcp_configs")
def test_tool_filter_with_hyphenated_hash_syntax(
self, mock_fetch, mock_client_class, agent
):
"""notion#get-page must match the tool whose sanitized name is get_page."""
mock_fetch.return_value = {
"notion": {
"type": "sse",
"url": "https://mcp.notion.so/sse",
"headers": {"Authorization": "Bearer token"},
},
}
hyphenated_tool_definitions = [
{
"name": "get_page",
"original_name": "get-page",
"description": "Get a page",
"inputSchema": {},
},
{
"name": "search",
"original_name": "search",
"description": "Search tool",
"inputSchema": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query"}
},
"required": ["query"],
},
},
]
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(return_value=hyphenated_tool_definitions)
mock_client.connected = False
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client_class.return_value = mock_client
tools = agent.get_mcp_tools(["notion#get-page"])
mock_fetch.assert_called_once_with(["notion"])
assert len(tools) == 1
assert tools[0].name.endswith("_get_page")
@patch("crewai.mcp.tool_resolver.MCPClient")
@patch.object(MCPToolResolver, "_fetch_amp_mcp_configs")
def test_deduplicates_slugs(
@@ -371,3 +419,87 @@ class TestGetMCPToolsAmpIntegration:
mock_external.assert_called_once_with("https://external.mcp.com/api")
# 2 from notion + 1 from external + 2 from http_config
assert len(tools) == 5
class TestResolveExternalToolFilter:
"""Tests for _resolve_external with #tool-name filtering."""
@pytest.fixture
def agent(self):
return Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
)
@pytest.fixture
def resolver(self, agent):
return MCPToolResolver(agent=agent, logger=agent._logger)
@patch.object(MCPToolResolver, "_get_mcp_tool_schemas")
def test_filters_hyphenated_tool_name(self, mock_schemas, resolver):
"""https://...#get-page must match the sanitized key get_page in schemas."""
mock_schemas.return_value = {
"get_page": {
"description": "Get a page",
"args_schema": None,
},
"search": {
"description": "Search tool",
"args_schema": None,
},
}
tools = resolver._resolve_external("https://mcp.example.com/api#get-page")
assert len(tools) == 1
assert "get_page" in tools[0].name
@patch.object(MCPToolResolver, "_get_mcp_tool_schemas")
def test_filters_underscored_tool_name(self, mock_schemas, resolver):
"""https://...#get_page must also match the sanitized key get_page."""
mock_schemas.return_value = {
"get_page": {
"description": "Get a page",
"args_schema": None,
},
"search": {
"description": "Search tool",
"args_schema": None,
},
}
tools = resolver._resolve_external("https://mcp.example.com/api#get_page")
assert len(tools) == 1
assert "get_page" in tools[0].name
@patch.object(MCPToolResolver, "_get_mcp_tool_schemas")
def test_returns_all_tools_without_hash(self, mock_schemas, resolver):
mock_schemas.return_value = {
"get_page": {
"description": "Get a page",
"args_schema": None,
},
"search": {
"description": "Search tool",
"args_schema": None,
},
}
tools = resolver._resolve_external("https://mcp.example.com/api")
assert len(tools) == 2
@patch.object(MCPToolResolver, "_get_mcp_tool_schemas")
def test_returns_empty_for_nonexistent_tool(self, mock_schemas, resolver):
mock_schemas.return_value = {
"search": {
"description": "Search tool",
"args_schema": None,
},
}
tools = resolver._resolve_external("https://mcp.example.com/api#nonexistent")
assert len(tools) == 0


@@ -1,4 +1,5 @@
import asyncio
import concurrent.futures
from unittest.mock import AsyncMock, patch
import pytest
@@ -30,6 +31,17 @@ def mock_tool_definitions():
]
def _make_mock_client(tool_definitions):
"""Create a mock MCPClient that returns *tool_definitions*."""
client = AsyncMock()
client.list_tools = AsyncMock(return_value=tool_definitions)
client.connected = False
client.connect = AsyncMock()
client.disconnect = AsyncMock()
client.call_tool = AsyncMock(return_value="test result")
return client
def test_agent_with_stdio_mcp_config(mock_tool_definitions):
"""Test agent setup with MCPServerStdio configuration."""
stdio_config = MCPServerStdio(
@@ -45,14 +57,8 @@ def test_agent_with_stdio_mcp_config(mock_tool_definitions):
mcps=[stdio_config],
)
with patch("crewai.mcp.tool_resolver.MCPClient") as mock_client_class:
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(return_value=mock_tool_definitions)
mock_client.connected = False # Will trigger connect
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client_class.return_value = mock_client
mock_client_class.return_value = _make_mock_client(mock_tool_definitions)
tools = agent.get_mcp_tools([stdio_config])
@@ -60,8 +66,7 @@ def test_agent_with_stdio_mcp_config(mock_tool_definitions):
assert all(isinstance(tool, BaseTool) for tool in tools)
mock_client_class.assert_called_once()
call_args = mock_client_class.call_args
transport = call_args.kwargs["transport"]
transport = mock_client_class.call_args.kwargs["transport"]
assert transport.command == "python"
assert transport.args == ["server.py"]
assert transport.env == {"API_KEY": "test_key"}
@@ -83,12 +88,7 @@ def test_agent_with_http_mcp_config(mock_tool_definitions):
)
with patch("crewai.mcp.tool_resolver.MCPClient") as mock_client_class:
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(return_value=mock_tool_definitions)
mock_client.connected = False # Will trigger connect
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client_class.return_value = mock_client
mock_client_class.return_value = _make_mock_client(mock_tool_definitions)
tools = agent.get_mcp_tools([http_config])
@@ -96,8 +96,7 @@ def test_agent_with_http_mcp_config(mock_tool_definitions):
assert all(isinstance(tool, BaseTool) for tool in tools)
mock_client_class.assert_called_once()
call_args = mock_client_class.call_args
transport = call_args.kwargs["transport"]
transport = mock_client_class.call_args.kwargs["transport"]
assert transport.url == "https://api.example.com/mcp"
assert transport.headers == {"Authorization": "Bearer test_token"}
assert transport.streamable is True
@@ -118,12 +117,7 @@ def test_agent_with_sse_mcp_config(mock_tool_definitions):
)
with patch("crewai.mcp.tool_resolver.MCPClient") as mock_client_class:
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(return_value=mock_tool_definitions)
mock_client.connected = False
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client_class.return_value = mock_client
mock_client_class.return_value = _make_mock_client(mock_tool_definitions)
tools = agent.get_mcp_tools([sse_config])
@@ -131,8 +125,7 @@ def test_agent_with_sse_mcp_config(mock_tool_definitions):
assert all(isinstance(tool, BaseTool) for tool in tools)
mock_client_class.assert_called_once()
call_args = mock_client_class.call_args
transport = call_args.kwargs["transport"]
transport = mock_client_class.call_args.kwargs["transport"]
assert transport.url == "https://api.example.com/mcp/sse"
assert transport.headers == {"Authorization": "Bearer test_token"}
@@ -142,13 +135,7 @@ def test_mcp_tool_execution_in_sync_context(mock_tool_definitions):
http_config = MCPServerHTTP(url="https://api.example.com/mcp")
with patch("crewai.mcp.tool_resolver.MCPClient") as mock_client_class:
mock_client = AsyncMock()
mock_client.list_tools = AsyncMock(return_value=mock_tool_definitions)
mock_client.connected = False
mock_client.connect = AsyncMock()
mock_client.disconnect = AsyncMock()
mock_client.call_tool = AsyncMock(return_value="test result")
mock_client_class.return_value = mock_client
mock_client_class.return_value = _make_mock_client(mock_tool_definitions)
agent = Agent(
role="Test Agent",
@@ -160,12 +147,12 @@ def test_mcp_tool_execution_in_sync_context(mock_tool_definitions):
tools = agent.get_mcp_tools([http_config])
assert len(tools) == 2
tool = tools[0]
result = tool.run(query="test query")
assert result == "test result"
-        mock_client.call_tool.assert_called()
+        # 1 discovery + 1 for the run() invocation
+        assert mock_client_class.call_count == 2
@pytest.mark.asyncio
@@ -174,13 +161,7 @@ async def test_mcp_tool_execution_in_async_context(mock_tool_definitions):
http_config = MCPServerHTTP(url="https://api.example.com/mcp")
with patch("crewai.mcp.tool_resolver.MCPClient") as mock_client_class:
-        mock_client = AsyncMock()
-        mock_client.list_tools = AsyncMock(return_value=mock_tool_definitions)
-        mock_client.connected = False
-        mock_client.connect = AsyncMock()
-        mock_client.disconnect = AsyncMock()
-        mock_client.call_tool = AsyncMock(return_value="test result")
-        mock_client_class.return_value = mock_client
+        mock_client_class.return_value = _make_mock_client(mock_tool_definitions)
agent = Agent(
role="Test Agent",
@@ -192,9 +173,129 @@ async def test_mcp_tool_execution_in_async_context(mock_tool_definitions):
tools = agent.get_mcp_tools([http_config])
assert len(tools) == 2
tool = tools[0]
result = tool.run(query="test query")
assert result == "test result"
-        mock_client.call_tool.assert_called()
+        assert mock_client_class.call_count == 2
def test_each_invocation_gets_fresh_client(mock_tool_definitions):
"""Every tool.run() must create its own MCPClient (no shared state)."""
http_config = MCPServerHTTP(url="https://api.example.com/mcp")
clients_created: list = []
def _make_client(**kwargs):
client = _make_mock_client(mock_tool_definitions)
clients_created.append(client)
return client
with patch("crewai.mcp.tool_resolver.MCPClient", side_effect=_make_client):
agent = Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
mcps=[http_config],
)
tools = agent.get_mcp_tools([http_config])
assert len(tools) == 2
# 1 discovery client so far
assert len(clients_created) == 1
# Two sequential calls to the same tool must create 2 new clients
tools[0].run(query="q1")
tools[0].run(query="q2")
assert len(clients_created) == 3
assert clients_created[1] is not clients_created[2]
def test_parallel_mcp_tool_execution_same_tool(mock_tool_definitions):
"""Parallel calls to the *same* tool must not interfere."""
http_config = MCPServerHTTP(url="https://api.example.com/mcp")
call_log: list[str] = []
def _make_client(**kwargs):
client = AsyncMock()
client.list_tools = AsyncMock(return_value=mock_tool_definitions)
client.connected = False
client.connect = AsyncMock()
client.disconnect = AsyncMock()
async def _call_tool(name, args):
call_log.append(name)
await asyncio.sleep(0.05)
return f"result-{name}"
client.call_tool = AsyncMock(side_effect=_call_tool)
return client
with patch("crewai.mcp.tool_resolver.MCPClient", side_effect=_make_client):
agent = Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
mcps=[http_config],
)
tools = agent.get_mcp_tools([http_config])
assert len(tools) >= 1
tool = tools[0]
# Call the SAME tool concurrently -- the exact scenario from the bug
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
futures = [
pool.submit(tool.run, query="q1"),
pool.submit(tool.run, query="q2"),
]
results = [f.result() for f in concurrent.futures.as_completed(futures)]
assert len(results) == 2
assert all("result-" in r for r in results)
assert len(call_log) == 2
def test_parallel_mcp_tool_execution_different_tools(mock_tool_definitions):
"""Parallel calls to different tools from the same server must not interfere."""
http_config = MCPServerHTTP(url="https://api.example.com/mcp")
call_log: list[str] = []
def _make_client(**kwargs):
client = AsyncMock()
client.list_tools = AsyncMock(return_value=mock_tool_definitions)
client.connected = False
client.connect = AsyncMock()
client.disconnect = AsyncMock()
async def _call_tool(name, args):
call_log.append(name)
await asyncio.sleep(0.05)
return f"result-{name}"
client.call_tool = AsyncMock(side_effect=_call_tool)
return client
with patch("crewai.mcp.tool_resolver.MCPClient", side_effect=_make_client):
agent = Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
mcps=[http_config],
)
tools = agent.get_mcp_tools([http_config])
assert len(tools) == 2
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
futures = [
pool.submit(tools[0].run, query="q1"),
pool.submit(tools[1].run, query="q2"),
]
results = [f.result() for f in concurrent.futures.as_completed(futures)]
assert len(results) == 2
assert all("result-" in r for r in results)
assert len(call_log) == 2
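The contract these parallel tests pin down can be sketched in isolation. This is an illustrative sketch only — `FakeMCPClient` and `run_tool` are invented names, not crewai's actual resolver API — but it shows the shape being verified: each `run()` builds its own client, so concurrent invocations never share connection state.

```python
import asyncio


class FakeMCPClient:
    """Illustrative stand-in for an MCP client; not crewai's real class."""

    def __init__(self, url: str) -> None:
        self.url = url
        self.connected = False

    async def connect(self) -> None:
        self.connected = True

    async def call_tool(self, name: str, args: dict) -> str:
        await asyncio.sleep(0.01)  # simulate network latency
        return f"result-{name}"

    async def disconnect(self) -> None:
        self.connected = False


def run_tool(url: str, name: str, **kwargs: object) -> str:
    """Each invocation gets a fresh client: connect, call, disconnect."""

    async def _invoke() -> str:
        client = FakeMCPClient(url)  # new client per run(), never shared
        await client.connect()
        try:
            return await client.call_tool(name, kwargs)
        finally:
            await client.disconnect()

    # asyncio.run gives each calling thread its own event loop, which is
    # what lets a ThreadPoolExecutor drive parallel runs safely.
    return asyncio.run(_invoke())
```

Two threads calling `run_tool` concurrently each construct their own client, mirroring the `clients_created` bookkeeping asserted above.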

View File

@@ -0,0 +1,13 @@
"""Stress tests for concurrent multi-process storage access.
Simulates the Airflow pattern: N worker processes each writing to the
same storage directory simultaneously. Verifies no LockException and
data integrity after all writes complete.
Uses temp files for IPC instead of multiprocessing.Manager (which uses
sockets blocked by pytest_recording).
"""
import pytest
pytestmark = pytest.mark.skip(reason="Multiprocessing tests incompatible with xdist --import-mode=importlib")
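The pattern the skipped module describes can be sketched without pytest. This is a hedged, POSIX-only illustration: `fcntl.flock` stands in for whatever lock the storage layer actually uses, and `_worker`/`run_stress` are invented names. The fork start method is pinned so the sketch stays self-contained when run inline.

```python
import fcntl  # POSIX-only cross-process file locking
import multiprocessing
import os
import tempfile


def _worker(path: str, worker_id: int, writes: int) -> None:
    """Append `writes` lines under an exclusive cross-process file lock."""
    for i in range(writes):
        with open(path, "a") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # blocks until no other holder
            f.write(f"worker={worker_id} write={i}\n")
            f.flush()
            fcntl.flock(f, fcntl.LOCK_UN)


def run_stress(n_workers: int = 4, writes: int = 25) -> int:
    """Spawn N processes writing to one file; return total lines written."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    ctx = multiprocessing.get_context("fork")  # fork: safe for inline use
    procs = [
        ctx.Process(target=_worker, args=(path, w, writes))
        for w in range(n_workers)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with open(path) as f:
        total = len(f.readlines())
    os.unlink(path)
    return total
```

After all workers join, the parent checks integrity the same way the stress tests do: every write is accounted for, with no writes lost to a lock failure.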

View File

@@ -172,8 +172,8 @@ def test_memory_scope_slice(tmp_path: Path, mock_embedder: MagicMock) -> None:
sc = mem.scope("/agent/1")
assert sc._root in ("/agent/1", "/agent/1/")
sl = mem.slice(["/a", "/b"], read_only=True)
-    assert sl._read_only is True
-    assert "/a" in sl._scopes and "/b" in sl._scopes
+    assert sl.read_only is True
+    assert "/a" in sl.scopes and "/b" in sl.scopes
def test_memory_list_scopes_info_tree(tmp_path: Path, mock_embedder: MagicMock) -> None:
@@ -198,7 +198,7 @@ def test_memory_scope_remember_recall(tmp_path: Path, mock_embedder: MagicMock)
from crewai.memory.memory_scope import MemoryScope
mem = Memory(storage=str(tmp_path / "db5"), llm=MagicMock(), embedder=mock_embedder)
-    scope = MemoryScope(mem, "/crew/1")
+    scope = MemoryScope(memory=mem, root_path="/crew/1")
scope.remember("Scoped note", scope="/", categories=[], importance=0.5, metadata={})
results = scope.recall("note", limit=5, depth="shallow")
assert len(results) >= 1
@@ -213,7 +213,7 @@ def test_memory_slice_recall(tmp_path: Path, mock_embedder: MagicMock) -> None:
mem = Memory(storage=str(tmp_path / "db6"), llm=MagicMock(), embedder=mock_embedder)
mem.remember("In scope A", scope="/a", categories=[], importance=0.5, metadata={})
-    sl = MemorySlice(mem, ["/a"], read_only=True)
+    sl = MemorySlice(memory=mem, scopes=["/a"], read_only=True)
matches = sl.recall("scope", limit=5, depth="shallow")
assert isinstance(matches, list)
@@ -223,7 +223,7 @@ def test_memory_slice_remember_is_noop_when_read_only(tmp_path: Path, mock_embed
from crewai.memory.memory_scope import MemorySlice
mem = Memory(storage=str(tmp_path / "db7"), llm=MagicMock(), embedder=mock_embedder)
-    sl = MemorySlice(mem, ["/a"], read_only=True)
+    sl = MemorySlice(memory=mem, scopes=["/a"], read_only=True)
result = sl.remember("x", scope="/a")
assert result is None
assert mem.list_records() == []
@@ -319,7 +319,7 @@ def test_executor_save_to_memory_calls_extract_then_remember_per_item() -> None:
from crewai.agents.parser import AgentFinish
mock_memory = MagicMock()
-    mock_memory._read_only = False
+    mock_memory.read_only = False
mock_memory.extract_memories.return_value = ["Fact A.", "Fact B."]
mock_agent = MagicMock()
@@ -360,7 +360,7 @@ def test_executor_save_to_memory_skips_delegation_output() -> None:
from crewai.utilities.string_utils import sanitize_tool_name
mock_memory = MagicMock()
-    mock_memory._read_only = False
+    mock_memory.read_only = False
mock_agent = MagicMock()
mock_agent.memory = mock_memory
mock_agent._logger = MagicMock()
@@ -393,7 +393,7 @@ def test_memory_scope_extract_memories_delegates() -> None:
mock_memory = MagicMock()
mock_memory.extract_memories.return_value = ["Scoped fact."]
-    scope = MemoryScope(mock_memory, "/agent/1")
+    scope = MemoryScope(memory=mock_memory, root_path="/agent/1")
result = scope.extract_memories("Some content")
mock_memory.extract_memories.assert_called_once_with("Some content")
assert result == ["Scoped fact."]
@@ -405,7 +405,7 @@ def test_memory_slice_extract_memories_delegates() -> None:
mock_memory = MagicMock()
mock_memory.extract_memories.return_value = ["Sliced fact."]
-    sl = MemorySlice(mock_memory, ["/a", "/b"], read_only=True)
+    sl = MemorySlice(memory=mock_memory, scopes=["/a", "/b"], read_only=True)
result = sl.extract_memories("Some content")
mock_memory.extract_memories.assert_called_once_with("Some content")
assert result == ["Sliced fact."]
@@ -670,10 +670,10 @@ def test_agent_kickoff_memory_recall_and_save(tmp_path: Path, mock_embedder: Mag
verbose=False,
)
# Mock recall to verify it's called, but return real results
-    with patch.object(mem, "recall", wraps=mem.recall) as recall_mock, \
-        patch.object(mem, "extract_memories", return_value=["PostgreSQL is used."]) as extract_mock, \
-        patch.object(mem, "remember_many", wraps=mem.remember_many) as remember_many_mock:
+    # Patch on the class to avoid Pydantic BaseModel __delattr__ restriction
+    with patch.object(Memory, "recall", wraps=mem.recall) as recall_mock, \
+        patch.object(Memory, "extract_memories", return_value=["PostgreSQL is used."]) as extract_mock, \
+        patch.object(Memory, "remember_many", wraps=mem.remember_many) as remember_many_mock:
result = agent.kickoff("What database do we use?")
assert result is not None
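The "patch on the class" comment above deserves a standalone note. A hedged, stdlib-only analogue makes the mechanism visible: a `__slots__` class plays the role of the Pydantic model here (Pydantic's own guard lives in `BaseModel.__setattr__`/`__delattr__`, not in `__slots__`), and it similarly rejects instance-level attribute injection, so `patch.object` must target the class.

```python
from unittest.mock import patch


class StrictModel:
    """Stand-in for a Pydantic BaseModel: instance attrs are locked down."""

    __slots__ = ("storage",)

    def __init__(self) -> None:
        self.storage = "db"

    def recall(self, query: str) -> list:
        return []


mem = StrictModel()

# Instance-level patching fails: there is no instance __dict__ to hold the
# mock (Pydantic fails analogously via its __setattr__ guard).
instance_patch_ok = True
try:
    with patch.object(mem, "recall", return_value=["hit"]):
        pass
except AttributeError:
    instance_patch_ok = False

# Class-level patching works: the method is replaced on the type, and every
# instance (including `mem`) sees the mock for the duration of the patch.
with patch.object(StrictModel, "recall", return_value=["hit"]) as recall_mock:
    assert mem.recall("anything") == ["hit"]

recall_mock.assert_called_once()
```

After the `with` block exits, the original method is restored, so `mem.recall("x")` returns `[]` again — the same restore semantics the wrapped `Memory` patches above rely on.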

View File

@@ -971,6 +971,128 @@ class TestCollapseToOutcomeJsonParsing:
assert mock_llm.call.call_count == 2
class TestLLMObjectPreservedInContext:
"""Tests that BaseLLM objects have their model string preserved in PendingFeedbackContext."""
@patch("crewai.flow.flow.crewai_event_bus.emit")
def test_basellm_object_model_string_survives_roundtrip(self, mock_emit: MagicMock) -> None:
"""Test that when llm is a BaseLLM object, its model string is stored in context
so that outcome collapsing works after async pause/resume.
This is the exact bug: locally the sync path keeps the LLM object in memory,
but in production the async path serializes the context and the LLM object was
discarded (stored as None), causing resume to skip classification and always
fall back to emit[0].
"""
with tempfile.TemporaryDirectory() as tmpdir:
db_path = os.path.join(tmpdir, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
# Create a mock BaseLLM object (not a string)
mock_llm_obj = MagicMock()
mock_llm_obj.model = "gemini/gemini-2.0-flash"
class PausingProvider:
def __init__(self, persistence: SQLiteFlowPersistence):
self.persistence = persistence
self.captured_context: PendingFeedbackContext | None = None
def request_feedback(
self, context: PendingFeedbackContext, flow: Flow
) -> str:
self.captured_context = context
self.persistence.save_pending_feedback(
flow_uuid=context.flow_id,
context=context,
state_data=flow.state if isinstance(flow.state, dict) else flow.state.model_dump(),
)
raise HumanFeedbackPending(context=context)
provider = PausingProvider(persistence)
class TestFlow(Flow):
result_path: str = ""
@start()
@human_feedback(
message="Approve?",
emit=["needs_changes", "approved"],
llm=mock_llm_obj,
default_outcome="approved",
provider=provider,
)
def review(self):
return "content for review"
@listen("approved")
def handle_approved(self):
self.result_path = "approved"
return "Approved!"
@listen("needs_changes")
def handle_changes(self):
self.result_path = "needs_changes"
return "Changes needed"
# Phase 1: Start flow (should pause)
flow1 = TestFlow(persistence=persistence)
result = flow1.kickoff()
assert isinstance(result, HumanFeedbackPending)
# Verify the context stored the model STRING, not None
assert provider.captured_context is not None
assert provider.captured_context.llm == "gemini/gemini-2.0-flash"
# Verify it survives persistence roundtrip
flow_id = result.context.flow_id
loaded = persistence.load_pending_feedback(flow_id)
assert loaded is not None
_, loaded_context = loaded
assert loaded_context.llm == "gemini/gemini-2.0-flash"
# Phase 2: Resume with positive feedback - should use LLM to classify
flow2 = TestFlow.from_pending(flow_id, persistence)
assert flow2._pending_feedback_context is not None
assert flow2._pending_feedback_context.llm == "gemini/gemini-2.0-flash"
# Mock _collapse_to_outcome to verify it gets called (not skipped)
with patch.object(flow2, "_collapse_to_outcome", return_value="approved") as mock_collapse:
flow2.resume("this looks good, proceed!")
# The key assertion: _collapse_to_outcome was called (not skipped due to llm=None)
mock_collapse.assert_called_once_with(
feedback="this looks good, proceed!",
outcomes=["needs_changes", "approved"],
llm="gemini/gemini-2.0-flash",
)
assert flow2.last_human_feedback.outcome == "approved"
assert flow2.result_path == "approved"
def test_string_llm_still_works(self) -> None:
"""Test that passing llm as a string still works correctly."""
context = PendingFeedbackContext(
flow_id="str-llm-test",
flow_class="test.Flow",
method_name="review",
method_output="output",
message="Review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
serialized = context.to_dict()
restored = PendingFeedbackContext.from_dict(serialized)
assert restored.llm == "gpt-4o-mini"
def test_none_llm_when_no_model_attr(self) -> None:
"""Test that llm is None when object has no model attribute."""
mock_obj = MagicMock(spec=[]) # No attributes
# Simulate what the decorator does
llm_value = mock_obj if isinstance(mock_obj, str) else getattr(mock_obj, "model", None)
assert llm_value is None
class TestAsyncHumanFeedbackEdgeCases:
"""Edge case tests for async human feedback."""

View File

@@ -36,7 +36,7 @@ from crewai.flow import Flow, start
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
+from crewai.memory.unified_memory import Memory
from crewai.process import Process
from crewai.project import CrewBase, agent, before_kickoff, crew, task
from crewai.task import Task
@@ -1101,7 +1101,7 @@ def test_single_task_with_async_execution():
result = crew.kickoff()
assert result.raw.startswith(
"- Ethical implications of AI in law enforcement and surveillance."
"- Impact of autonomous AI agents on future workplace automation"
)
@@ -2618,9 +2618,9 @@ def test_memory_remember_called_after_task():
)
with patch.object(
-        crew._memory, "extract_memories", wraps=crew._memory.extract_memories
+        Memory, "extract_memories", wraps=crew._memory.extract_memories
) as extract_mock, patch.object(
-        crew._memory, "remember", wraps=crew._memory.remember
+        Memory, "remember", wraps=crew._memory.remember
) as remember_mock:
crew.kickoff()
@@ -4773,13 +4773,13 @@ def test_memory_remember_receives_task_content():
# Mock extract_memories to return fake memories and capture the raw input.
# No wraps= needed -- the test only checks what args it receives, not the output.
patch.object(
-            crew._memory, "extract_memories", return_value=["Fake memory."]
+            Memory, "extract_memories", return_value=["Fake memory."]
) as extract_mock,
# Mock recall to avoid LLM calls for query analysis (not in cassette).
-        patch.object(crew._memory, "recall", return_value=[]),
+        patch.object(Memory, "recall", return_value=[]),
# Mock remember_many to prevent the background save from triggering
# LLM calls (field resolution) that aren't in the cassette.
-        patch.object(crew._memory, "remember_many", return_value=[]),
+        patch.object(Memory, "remember_many", return_value=[]),
):
crew.kickoff()

Some files were not shown because too many files have changed in this diff.