Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-03 21:28:29 +00:00

Compare commits: 12 commits, feat-emit-...devin/1745

| Author | SHA1 | Date |
|---|---|---|
|  | f4e61ae714 |  |
|  | 958751fe36 |  |
|  | 3c838f16ff |  |
|  | 6c08e6062a |  |
|  | 2e4c97661a |  |
|  | 16eb4df556 |  |
|  | 3d9000495c |  |
|  | 6d0039b117 |  |
|  | 311a078ca6 |  |
|  | 371f19f3cd |  |
|  | 870dffbb89 |  |
|  | ced3c8f0e0 |  |
@@ -4,6 +4,36 @@ description: View the latest updates and changes to CrewAI
icon: timeline
---

<Update label="2025-04-07" description="v0.114.0">
## Release Highlights
<Frame>
<img src="/images/v01140.png" />
</Frame>

**New Features & Enhancements**
- Agents as an atomic unit (`Agent(...).kickoff()`); see the sketch below.
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm).
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability).
- Enhanced YAML extraction.
- Multimodal agent validation.
- Added secure fingerprints for agents and crews.
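
A quick sketch of the standalone agent kickoff (role, goal, backstory, and prompt below are illustrative; see the linked guides for full details):

```python Code
from crewai import Agent

researcher = Agent(
    role="Researcher",
    goal="Summarize new AI papers",
    backstory="You track machine learning research daily.",
)

# Agents can now run on their own, without wrapping them in a Crew or Task
result = researcher.kickoff("Summarize this week's most important LLM papers.")
print(result)
```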

**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility.
- Added wildcard support to `emit()`.
- Added support for additional router calls and context window adjustments.
- Fixed typing issues, validation, and import statements.
- Improved method performance.
- Enhanced agent task handling, event emissions, and memory management.
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs.

**Documentation & Guides**
- Improved documentation structure, theme, and organization.
- Added guides for Local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix.
- Updated tool configuration examples, prompts, and observability docs.
- Added a guide on using singular agents within Flows.
</Update>

<Update label="2025-03-17" description="v0.108.0">
**Features**
- Converted tabs to spaces in the `crew.py` template.

@@ -179,7 +179,78 @@ def crew(self) -> Crew:
|
||||
```
|
||||
</Note>
|
||||
|
||||
### 10. API Keys
|
||||
### 10. Deploy
|
||||
|
||||
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
|
||||
|
||||
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
|
||||
```shell Terminal
|
||||
crewai signup
|
||||
```
|
||||
If you already have an account, you can login with:
|
||||
```shell Terminal
|
||||
crewai login
|
||||
```
|
||||
|
||||
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
|
||||
```shell Terminal
|
||||
crewai deploy create
|
||||
```
|
||||
- Reads your local project configuration.
|
||||
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
|
||||
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
|
||||
|
||||
- **Deploy the Crew**: Once the deployment is created, push your crew or flow to CrewAI Enterprise.
|
||||
```shell Terminal
|
||||
crewai deploy push
|
||||
```
|
||||
- Initiates the deployment process on the CrewAI Enterprise platform.
|
||||
- Upon successful initiation, it outputs a `Deployment created successfully!` message along with the Deployment Name and a unique Deployment ID (UUID).
|
||||
|
||||
- **Deployment Status**: You can check the status of your deployment with:
|
||||
```shell Terminal
|
||||
crewai deploy status
|
||||
```
|
||||
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
|
||||
|
||||
- **Deployment Logs**: You can check the logs of your deployment with:
|
||||
```shell Terminal
|
||||
crewai deploy logs
|
||||
```
|
||||
This streams the deployment logs to your terminal.
|
||||
|
||||
- **List deployments**: You can list all your deployments with:
|
||||
```shell Terminal
|
||||
crewai deploy list
|
||||
```
|
||||
This lists all your deployments.
|
||||
|
||||
- **Delete a deployment**: You can delete a deployment with:
|
||||
```shell Terminal
|
||||
crewai deploy remove
|
||||
```
|
||||
This deletes the deployment from the CrewAI Enterprise platform.
|
||||
|
||||
- **Help Command**: You can get help with the CLI with:
|
||||
```shell Terminal
|
||||
crewai deploy --help
|
||||
```
|
||||
This shows the help message for the CrewAI Deploy CLI.
|
||||
|
||||
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
|
||||
|
||||
<iframe
|
||||
width="100%"
|
||||
height="400"
|
||||
src="https://www.youtube.com/embed/3EqSV-CYDZA"
|
||||
title="CrewAI Deployment Guide"
|
||||
frameborder="0"
|
||||
style={{ borderRadius: '10px' }}
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
|
||||
allowfullscreen
|
||||
></iframe>
|
||||
|
||||
### 11. API Keys
|
||||
|
||||
When you run the `crewai create crew` command, the CLI first shows you the top 5 most common LLM providers and asks you to select one.
|
||||
|
||||
|
||||
@@ -42,6 +42,16 @@ CrewAI supports various types of knowledge sources out of the box:
|
||||
| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
|
||||
| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
|
||||
|
||||
|
||||
<Tip>
|
||||
Unlike retrieval from a vector database using a tool, agents preloaded with knowledge will not need a retrieval persona or task.
|
||||
Simply add the relevant knowledge sources your agent or crew needs to function.
|
||||
|
||||
Knowledge sources can be added at the agent or crew level.
|
||||
Crew level knowledge sources will be used by **all agents** in the crew.
|
||||
Agent level knowledge sources will be used by the **specific agent** that is preloaded with the knowledge.
|
||||
</Tip>
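
For example, the same source can be attached at either level (a minimal sketch; the roles, task, and policy text are illustrative):

```python Code
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

policy = StringKnowledgeSource(content="Refunds are processed within 14 days.")

# Agent-level: only this agent is preloaded with the source
support_agent = Agent(
    role="Support Rep",
    goal="Answer customer questions accurately",
    backstory="You handle refund-related questions.",
    knowledge_sources=[policy],
)

task = Task(
    description="How long do refunds take?",
    expected_output="A short answer grounded in the refund policy.",
    agent=support_agent,
)

# Crew-level: every agent in the crew can use the source
crew = Crew(agents=[support_agent], tasks=[task], knowledge_sources=[policy])
```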
|
||||
|
||||
## Quickstart Example
|
||||
|
||||
<Tip>
|
||||
@@ -146,6 +156,26 @@ result = crew.kickoff(
|
||||
)
|
||||
```
|
||||
|
||||
## Knowledge Configuration
|
||||
|
||||
You can configure how knowledge is retrieved for the crew or agent.
|
||||
|
||||
```python Code
|
||||
from crewai.knowledge.knowledge_config import KnowledgeConfig
|
||||
|
||||
knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)
|
||||
|
||||
agent = Agent(
|
||||
...
|
||||
knowledge_config=knowledge_config
|
||||
)
|
||||
```
|
||||
|
||||
<Tip>
|
||||
`results_limit`: the number of relevant documents to return (default: 3).
`score_threshold`: the minimum score for a document to be considered relevant (default: 0.35).
|
||||
</Tip>
|
||||
|
||||
## More Examples
|
||||
|
||||
Here are examples of how to use different types of knowledge sources:
|
||||
|
||||
@@ -1,71 +0,0 @@
|
||||
---
|
||||
title: Using LlamaIndex Tools
|
||||
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
|
||||
icon: toolbox
|
||||
---
|
||||
|
||||
## Using LlamaIndex Tools
|
||||
|
||||
<Info>
|
||||
CrewAI seamlessly integrates with LlamaIndex’s comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more.
|
||||
</Info>
|
||||
|
||||
Here are several ways to initialize `LlamaIndexTool` from LlamaIndex's built-in tools and query engines.
|
||||
|
||||
```python Code
|
||||
from crewai import Agent
|
||||
from crewai_tools import LlamaIndexTool
|
||||
|
||||
# Example 1: Initialize from FunctionTool
|
||||
from llama_index.core.tools import FunctionTool
|
||||
|
||||
your_python_function = lambda ...: ...
|
||||
og_tool = FunctionTool.from_defaults(
|
||||
your_python_function,
|
||||
name="<name>",
|
||||
description='<description>'
|
||||
)
|
||||
tool = LlamaIndexTool.from_tool(og_tool)
|
||||
|
||||
# Example 2: Initialize from LlamaHub Tools
|
||||
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
|
||||
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
|
||||
wolfram_tools = wolfram_spec.to_tool_list()
|
||||
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]
|
||||
|
||||
# Example 3: Initialize Tool from a LlamaIndex Query Engine
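# NOTE: this assumes an `index` object built beforehand with LlamaIndex
# (e.g. a VectorStoreIndex over your documents); see the setup sketch under
# "Steps to Get Started" below.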
|
||||
query_engine = index.as_query_engine()
|
||||
query_tool = LlamaIndexTool.from_query_engine(
|
||||
query_engine,
|
||||
name="Uber 2019 10K Query Tool",
|
||||
description="Use this tool to lookup the 2019 Uber 10K Annual Report"
|
||||
)
|
||||
|
||||
# Create and assign the tools to an agent
|
||||
agent = Agent(
|
||||
role='Research Analyst',
|
||||
goal='Provide up-to-date market analysis',
|
||||
backstory='An expert analyst with a keen eye for market trends.',
|
||||
tools=[tool, *tools, query_tool]
|
||||
)
|
||||
|
||||
# rest of the code ...
|
||||
```
|
||||
|
||||
## Steps to Get Started
|
||||
|
||||
To effectively use the LlamaIndexTool, follow these steps:
|
||||
|
||||
<Steps>
|
||||
<Step title="Package Installation">
|
||||
Make sure that the `crewai[tools]` package is installed in your Python environment:
|
||||
<CodeGroup>
|
||||
```shell Terminal
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
<Step title="Install and Use LlamaIndex">
|
||||
Follow the [LlamaIndex documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
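As a minimal sketch (assuming a local `data/` folder of documents; adjust to your own loaders and storage), a basic pipeline that produces a query engine you can wrap with `LlamaIndexTool.from_query_engine` looks like:
```python Code
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local documents and build an in-memory vector index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# The resulting query engine can be wrapped with LlamaIndexTool.from_query_engine(...)
query_engine = index.as_query_engine()
```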
|
||||
</Step>
|
||||
</Steps>
|
||||
117 docs/docs.json
@@ -8,25 +8,27 @@
|
||||
"dark": "#C94C3C"
|
||||
},
|
||||
"favicon": "favicon.svg",
|
||||
"contextual": {
|
||||
"options": ["copy", "view", "chatgpt", "claude"]
|
||||
},
|
||||
"navigation": {
|
||||
"tabs": [
|
||||
{
|
||||
"tab": "Get Started",
|
||||
"tab": "Documentation",
|
||||
"groups": [
|
||||
{
|
||||
"group": "Get Started",
|
||||
"pages": [
|
||||
"introduction",
|
||||
"installation",
|
||||
"quickstart",
|
||||
"changelog"
|
||||
"quickstart"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Guides",
|
||||
"pages": [
|
||||
{
|
||||
"group": "Concepts",
|
||||
"group": "Strategy",
|
||||
"pages": [
|
||||
"guides/concepts/evaluating-use-cases"
|
||||
]
|
||||
@@ -79,41 +81,6 @@
|
||||
"concepts/event-listener"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "How to Guides",
|
||||
"pages": [
|
||||
"how-to/create-custom-tools",
|
||||
"how-to/sequential-process",
|
||||
"how-to/hierarchical-process",
|
||||
"how-to/custom-manager-agent",
|
||||
"how-to/llm-connections",
|
||||
"how-to/customizing-agents",
|
||||
"how-to/multimodal-agents",
|
||||
"how-to/coding-agents",
|
||||
"how-to/force-tool-output-as-result",
|
||||
"how-to/human-input-on-execution",
|
||||
"how-to/kickoff-async",
|
||||
"how-to/kickoff-for-each",
|
||||
"how-to/replay-tasks-from-latest-crew-kickoff",
|
||||
"how-to/conditional-tasks",
|
||||
"how-to/langchain-tools",
|
||||
"how-to/llamaindex-tools"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Agent Monitoring & Observability",
|
||||
"pages": [
|
||||
"how-to/agentops-observability",
|
||||
"how-to/arize-phoenix-observability",
|
||||
"how-to/langfuse-observability",
|
||||
"how-to/langtrace-observability",
|
||||
"how-to/mlflow-observability",
|
||||
"how-to/openlit-observability",
|
||||
"how-to/opik-observability",
|
||||
"how-to/portkey-observability",
|
||||
"how-to/weave-integration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Tools",
|
||||
"pages": [
|
||||
@@ -141,6 +108,7 @@
|
||||
"tools/hyperbrowserloadtool",
|
||||
"tools/linkupsearchtool",
|
||||
"tools/llamaindextool",
|
||||
"tools/langchaintool",
|
||||
"tools/serperdevtool",
|
||||
"tools/s3readertool",
|
||||
"tools/s3writertool",
|
||||
@@ -170,6 +138,40 @@
|
||||
"tools/youtubevideosearchtool"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Agent Monitoring & Observability",
|
||||
"pages": [
|
||||
"how-to/agentops-observability",
|
||||
"how-to/arize-phoenix-observability",
|
||||
"how-to/langfuse-observability",
|
||||
"how-to/langtrace-observability",
|
||||
"how-to/mlflow-observability",
|
||||
"how-to/openlit-observability",
|
||||
"how-to/opik-observability",
|
||||
"how-to/portkey-observability",
|
||||
"how-to/weave-integration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Learn",
|
||||
"pages": [
|
||||
"how-to/conditional-tasks",
|
||||
"how-to/coding-agents",
|
||||
"how-to/create-custom-tools",
|
||||
"how-to/custom-llm",
|
||||
"how-to/custom-manager-agent",
|
||||
"how-to/customizing-agents",
|
||||
"how-to/force-tool-output-as-result",
|
||||
"how-to/hierarchical-process",
|
||||
"how-to/human-input-on-execution",
|
||||
"how-to/kickoff-async",
|
||||
"how-to/kickoff-for-each",
|
||||
"how-to/llm-connections",
|
||||
"how-to/multimodal-agents",
|
||||
"how-to/replay-tasks-from-latest-crew-kickoff",
|
||||
"how-to/sequential-process"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Telemetry",
|
||||
"pages": [
|
||||
@@ -188,19 +190,35 @@
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"tab": "Releases",
|
||||
"groups": [
|
||||
{
|
||||
"group": "Releases",
|
||||
"pages": [
|
||||
"changelog"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"global": {
|
||||
"anchors": [
|
||||
{
|
||||
"anchor": "Community",
|
||||
"anchor": "Website",
|
||||
"href": "https://crewai.com",
|
||||
"icon": "globe"
|
||||
},
|
||||
{
|
||||
"anchor": "Forum",
|
||||
"href": "https://community.crewai.com",
|
||||
"icon": "discourse"
|
||||
},
|
||||
{
|
||||
"anchor": "Tutorials",
|
||||
"href": "https://www.youtube.com/@crewAIInc",
|
||||
"icon": "youtube"
|
||||
"anchor": "Get Help",
|
||||
"href": "mailto:support@crewai.com",
|
||||
"icon": "headset"
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -214,6 +232,12 @@
|
||||
"strict": false
|
||||
},
|
||||
"navbar": {
|
||||
"links": [
|
||||
{
|
||||
"label": "Start Free Trial",
|
||||
"href": "https://app.crewai.com"
|
||||
}
|
||||
],
|
||||
"primary": {
|
||||
"type": "github",
|
||||
"href": "https://github.com/crewAIInc/crewAI"
|
||||
@@ -223,7 +247,12 @@
|
||||
"prompt": "Search CrewAI docs"
|
||||
},
|
||||
"seo": {
|
||||
"indexing": "navigable"
|
||||
"indexing": "all"
|
||||
},
|
||||
"errors": {
|
||||
"404": {
|
||||
"redirect": true
|
||||
}
|
||||
},
|
||||
"footer": {
|
||||
"socials": {
|
||||
|
||||
443 docs/how-to/bring-your-own-agent.mdx (Normal file)
@@ -0,0 +1,443 @@
|
||||
---
|
||||
title: Bring your own agent
|
||||
description: Learn how to bring your own agents that work within a Crew.
|
||||
icon: robots
|
||||
---
|
||||
|
||||
Interoperability is a core concept in CrewAI. This guide will show you how to bring your own agents that work within a Crew.
|
||||
|
||||
|
||||
## Adapter Guide for Bringing Your Own Agents (LangGraph Agents, OpenAI Agents, etc.)
|
||||
We require three adapters to make an agent from another framework work within a crew:
|
||||
|
||||
1. `BaseAgentAdapter`
2. `BaseToolAdapter`
3. `BaseConverterAdapter`
|
||||
|
||||
|
||||
## BaseAgentAdapter
|
||||
This abstract class defines the common interface and functionality that all
|
||||
agent adapters must implement. It extends BaseAgent to maintain compatibility
|
||||
with the CrewAI framework while adding adapter-specific requirements.
|
||||
|
||||
Required Methods:
|
||||
|
||||
1. `def configure_tools`
|
||||
2. `def configure_structured_output`
|
||||
|
||||
## Creating your own Adapter
|
||||
To integrate an agent from a different framework (e.g., LangGraph, Autogen, OpenAI Assistants) into CrewAI, you need to create a custom adapter by inheriting from `BaseAgentAdapter`. This adapter acts as a compatibility layer, translating between the CrewAI interfaces and the specific requirements of your external agent.
|
||||
|
||||
Here's how you implement your custom adapter:
|
||||
|
||||
1. **Inherit from `BaseAgentAdapter`**:
|
||||
```python
|
||||
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
|
||||
from crewai.tools import BaseTool
|
||||
from typing import List, Optional, Any, Dict
|
||||
|
||||
class MyCustomAgentAdapter(BaseAgentAdapter):
|
||||
# ... implementation details ...
|
||||
```
|
||||
|
||||
2. **Implement `__init__`**:
|
||||
The constructor should call the parent class constructor `super().__init__(**kwargs)` and perform any initialization specific to your external agent. You can use the optional `agent_config` dictionary passed during CrewAI's `Agent` initialization to configure your adapter and the underlying agent.
|
||||
|
||||
```python
|
||||
def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
|
||||
super().__init__(agent_config=agent_config, **kwargs)
|
||||
# Initialize your external agent here, possibly using agent_config
|
||||
# Example: self.external_agent = initialize_my_agent(agent_config)
|
||||
print(f"Initializing MyCustomAgentAdapter with config: {agent_config}")
|
||||
```
|
||||
|
||||
3. **Implement `configure_tools`**:
|
||||
This abstract method is crucial. It receives a list of CrewAI `BaseTool` instances. Your implementation must convert or adapt these tools into the format expected by your external agent framework. This might involve wrapping them, extracting specific attributes, or registering them with the external agent instance.
|
||||
|
||||
```python
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
if tools:
|
||||
adapted_tools = []
|
||||
for tool in tools:
|
||||
# Adapt CrewAI BaseTool to the format your agent expects
|
||||
# Example: adapted_tool = adapt_to_my_framework(tool)
|
||||
# adapted_tools.append(adapted_tool)
|
||||
pass # Replace with your actual adaptation logic
|
||||
|
||||
# Configure the external agent with the adapted tools
|
||||
# Example: self.external_agent.set_tools(adapted_tools)
|
||||
print(f"Configuring tools for MyCustomAgentAdapter: {adapted_tools}") # Placeholder
|
||||
else:
|
||||
# Handle the case where no tools are provided
|
||||
# Example: self.external_agent.set_tools([])
|
||||
print("No tools provided for MyCustomAgentAdapter.")
|
||||
```
|
||||
|
||||
4. **Implement `configure_structured_output`**:
|
||||
This method is called when the CrewAI `Agent` is configured with structured output requirements (e.g., `output_json` or `output_pydantic`). Your adapter needs to ensure the external agent is set up to comply with these requirements. This might involve setting specific parameters on the external agent or ensuring its underlying model supports the requested format. If the external agent doesn't support structured output in a way compatible with CrewAI's expectations, you might need to handle the conversion or raise an appropriate error.
|
||||
|
||||
```python
|
||||
def configure_structured_output(self, structured_output: Any) -> None:
|
||||
# Configure your external agent to produce output in the specified format
|
||||
# Example: self.external_agent.set_output_format(structured_output)
|
||||
self.adapted_structured_output = True # Signal that structured output is handled
|
||||
print(f"Configuring structured output for MyCustomAgentAdapter: {structured_output}")
|
||||
```
|
||||
|
||||
By implementing these methods, your `MyCustomAgentAdapter` will allow your custom agent implementation to function correctly within a CrewAI crew, interacting with tasks and tools seamlessly. Remember to replace the example comments and print statements with your actual adaptation logic specific to the external agent framework you are integrating.
|
||||
|
||||
## BaseToolAdapter implementation
|
||||
The `BaseToolAdapter` class is responsible for converting CrewAI's native `BaseTool` objects into a format that your specific external agent framework can understand and utilize. Different agent frameworks (like LangGraph, OpenAI Assistants, etc.) have their own unique ways of defining and handling tools, and the `BaseToolAdapter` acts as the translator.
|
||||
|
||||
Here's how you implement your custom tool adapter:
|
||||
|
||||
1. **Inherit from `BaseToolAdapter`**:
|
||||
```python
|
||||
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
|
||||
from crewai.tools import BaseTool
|
||||
from typing import List, Any
|
||||
|
||||
class MyCustomToolAdapter(BaseToolAdapter):
|
||||
# ... implementation details ...
|
||||
```
|
||||
|
||||
2. **Implement `configure_tools`**:
|
||||
This is the core abstract method you must implement. It receives a list of CrewAI `BaseTool` instances provided to the agent. Your task is to iterate through this list, adapt each `BaseTool` into the format expected by your external framework, and store the converted tools in the `self.converted_tools` list (which is initialized in the base class constructor).
|
||||
|
||||
```python
|
||||
def configure_tools(self, tools: List[BaseTool]) -> None:
|
||||
"""Configure and convert CrewAI tools for the specific implementation."""
|
||||
self.converted_tools = [] # Reset in case it's called multiple times
|
||||
for tool in tools:
|
||||
# Sanitize the tool name if required by the target framework
|
||||
sanitized_name = self.sanitize_tool_name(tool.name)
|
||||
|
||||
# --- Your Conversion Logic Goes Here ---
|
||||
# Example: Convert BaseTool to a dictionary format for LangGraph
|
||||
# converted_tool = {
|
||||
# "name": sanitized_name,
|
||||
# "description": tool.description,
|
||||
# "parameters": tool.args_schema.schema() if tool.args_schema else {},
|
||||
# # Add any other framework-specific fields
|
||||
# }
|
||||
|
||||
# Example: Convert BaseTool to an OpenAI function definition
|
||||
# converted_tool = {
|
||||
# "type": "function",
|
||||
# "function": {
|
||||
# "name": sanitized_name,
|
||||
# "description": tool.description,
|
||||
# "parameters": tool.args_schema.schema() if tool.args_schema else {"type": "object", "properties": {}},
|
||||
# }
|
||||
# }
|
||||
|
||||
# --- Replace above examples with your actual adaptation ---
|
||||
converted_tool = self.adapt_tool_to_my_framework(tool, sanitized_name) # Placeholder
|
||||
|
||||
self.converted_tools.append(converted_tool)
|
||||
print(f"Adapted tool '{tool.name}' to '{sanitized_name}' for MyCustomToolAdapter") # Placeholder
|
||||
|
||||
print(f"MyCustomToolAdapter finished configuring tools: {len(self.converted_tools)} adapted.") # Placeholder
|
||||
|
||||
# --- Helper method for adaptation (Example) ---
|
||||
def adapt_tool_to_my_framework(self, tool: BaseTool, sanitized_name: str) -> Any:
|
||||
# Replace this with the actual logic to convert a CrewAI BaseTool
|
||||
# to the format needed by your specific external agent framework.
|
||||
# This will vary greatly depending on the target framework.
|
||||
adapted_representation = {
|
||||
"framework_specific_name": sanitized_name,
|
||||
"framework_specific_description": tool.description,
|
||||
"inputs": tool.args_schema.schema() if tool.args_schema else None,
|
||||
"implementation_reference": tool.run # Or however the framework needs to call it
|
||||
}
|
||||
# Also ensure the tool works both sync and async
|
||||
async def async_tool_wrapper(*args, **kwargs):
|
||||
output = tool.run(*args, **kwargs)
|
||||
if inspect.isawaitable(output):
|
||||
return await output
|
||||
else:
|
||||
return output
|
||||
|
||||
adapted_tool = MyFrameworkTool(
|
||||
name=sanitized_name,
|
||||
description=tool.description,
|
||||
inputs=tool.args_schema.schema() if tool.args_schema else None,
|
||||
implementation_reference=async_tool_wrapper
|
||||
)
|
||||
|
||||
return adapted_representation
|
||||
|
||||
```
|
||||
|
||||
3. **Using the Adapter**:
|
||||
Typically, you would instantiate your `MyCustomToolAdapter` within your `MyCustomAgentAdapter`'s `configure_tools` method and use it to process the tools before configuring your external agent.
|
||||
|
||||
```python
|
||||
# Inside MyCustomAgentAdapter.configure_tools
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
if tools:
|
||||
tool_adapter = MyCustomToolAdapter() # Instantiate your tool adapter
|
||||
tool_adapter.configure_tools(tools) # Convert the tools
|
||||
adapted_tools = tool_adapter.tools() # Get the converted tools
|
||||
|
||||
# Now configure your external agent with the adapted_tools
|
||||
# Example: self.external_agent.set_tools(adapted_tools)
|
||||
print(f"Configuring external agent with adapted tools: {adapted_tools}") # Placeholder
|
||||
else:
|
||||
# Handle no tools case
|
||||
print("No tools provided for MyCustomAgentAdapter.")
|
||||
```
|
||||
|
||||
By creating a `BaseToolAdapter`, you decouple the tool conversion logic from the agent adaptation, making the integration cleaner and more modular. Remember to replace the placeholder examples with the actual conversion logic required by your specific external agent framework.
|
||||
|
||||
## BaseConverter
|
||||
The `BaseConverterAdapter` plays a crucial role when a CrewAI `Task` requires an agent to return its final output in a specific structured format, such as JSON or a Pydantic model. It bridges the gap between CrewAI's structured output requirements and the capabilities of your external agent.
|
||||
|
||||
Its primary responsibilities are:
|
||||
1. **Configuring the Agent for Structured Output:** Based on the `Task`'s requirements (`output_json` or `output_pydantic`), it instructs the associated `BaseAgentAdapter` (and indirectly, the external agent) on what format is expected.
|
||||
2. **Enhancing the System Prompt:** It modifies the agent's system prompt to include clear instructions on *how* to generate the output in the required structure.
|
||||
3. **Post-processing the Result:** It takes the raw output from the agent and attempts to parse, validate, and format it according to the required structure, ultimately returning a string representation (e.g., a JSON string).
|
||||
|
||||
Here's how you implement your custom converter adapter:
|
||||
|
||||
1. **Inherit from `BaseConverterAdapter`**:
|
||||
```python
|
||||
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
|
||||
# Assuming you have your MyCustomAgentAdapter defined
|
||||
# from .my_custom_agent_adapter import MyCustomAgentAdapter
|
||||
from crewai.task import Task
|
||||
from typing import Any
|
||||
|
||||
class MyCustomConverterAdapter(BaseConverterAdapter):
|
||||
# Store the expected output type (e.g., 'json', 'pydantic', 'text')
|
||||
_output_type: str = 'text'
|
||||
_output_schema: Any = None # Store JSON schema or Pydantic model
|
||||
|
||||
# ... implementation details ...
|
||||
```
|
||||
|
||||
2. **Implement `__init__`**:
|
||||
The constructor must accept the corresponding `agent_adapter` instance it will work with.
|
||||
|
||||
```python
|
||||
def __init__(self, agent_adapter: Any): # Use your specific AgentAdapter type hint
|
||||
self.agent_adapter = agent_adapter
|
||||
print(f"Initializing MyCustomConverterAdapter for agent adapter: {type(agent_adapter).__name__}")
|
||||
```
|
||||
|
||||
3. **Implement `configure_structured_output`**:
|
||||
This method receives the CrewAI `Task` object. You need to check the task's `output_json` and `output_pydantic` attributes to determine the required output structure. Store this information (e.g., in `_output_type` and `_output_schema`) and potentially call configuration methods on your `self.agent_adapter` if the external agent needs specific setup for structured output (which might have been partially handled in the agent adapter's `configure_structured_output` already).
|
||||
|
||||
```python
|
||||
def configure_structured_output(self, task: Task) -> None:
|
||||
"""Configure the expected structured output based on the task."""
|
||||
if task.output_pydantic:
|
||||
self._output_type = 'pydantic'
|
||||
self._output_schema = task.output_pydantic
|
||||
print(f"Converter: Configured for Pydantic output: {self._output_schema.__name__}")
|
||||
elif task.output_json:
|
||||
self._output_type = 'json'
|
||||
self._output_schema = task.output_json
|
||||
print(f"Converter: Configured for JSON output with schema: {self._output_schema}")
|
||||
else:
|
||||
self._output_type = 'text'
|
||||
self._output_schema = None
|
||||
print("Converter: Configured for standard text output.")
|
||||
|
||||
# Optionally, inform the agent adapter if needed
|
||||
# self.agent_adapter.set_output_mode(self._output_type, self._output_schema)
|
||||
```
|
||||
|
||||
4. **Implement `enhance_system_prompt`**:
|
||||
This method takes the agent's base system prompt string and should append instructions tailored to the currently configured `_output_type` and `_output_schema`. The goal is to guide the LLM powering the agent to produce output in the correct format.
|
||||
|
||||
```python
|
||||
def enhance_system_prompt(self, base_prompt: str) -> str:
|
||||
"""Enhance the system prompt with structured output instructions."""
|
||||
if self._output_type == 'text':
|
||||
return base_prompt # No enhancement needed for plain text
|
||||
|
||||
instructions = "\n\nYour final answer MUST be formatted as "
|
||||
if self._output_type == 'json':
|
||||
schema_str = json.dumps(self._output_schema, indent=2)
|
||||
instructions += f"a JSON object conforming to the following schema:\n```json\n{schema_str}\n```"
|
||||
elif self._output_type == 'pydantic':
|
||||
schema_str = json.dumps(self._output_schema.model_json_schema(), indent=2)
|
||||
instructions += f"a JSON object conforming to the Pydantic model '{self._output_schema.__name__}' with the following schema:\n```json\n{schema_str}\n```"
|
||||
|
||||
instructions += "\nEnsure your entire response is ONLY the valid JSON object, without any introductory text, explanations, or concluding remarks."
|
||||
|
||||
print(f"Converter: Enhancing prompt for {self._output_type} output.")
|
||||
return base_prompt + instructions
|
||||
```
|
||||
*Note: The exact prompt engineering might need tuning based on the agent/LLM being used.*
|
||||
|
||||
5. **Implement `post_process_result`**:
|
||||
This method receives the raw string output from the agent. If structured output was requested (`json` or `pydantic`), you should attempt to parse the string into the expected format. Handle potential parsing errors (e.g., log them, attempt simple fixes, or raise an exception). Crucially, the method must **always return a string**, even if the intermediate format was a dictionary or Pydantic object (e.g., by serializing it back to a JSON string).
|
||||
|
||||
```python
|
||||
import json
|
||||
from pydantic import ValidationError
|
||||
|
||||
def post_process_result(self, result: str) -> str:
|
||||
"""Post-process the agent's result to ensure it matches the expected format."""
|
||||
print(f"Converter: Post-processing result for {self._output_type} output.")
|
||||
if self._output_type == 'json':
|
||||
try:
|
||||
# Attempt to parse and re-serialize to ensure validity and consistent format
|
||||
parsed_json = json.loads(result)
|
||||
# Optional: Validate against self._output_schema if it's a JSON schema dictionary
|
||||
# from jsonschema import validate
|
||||
# validate(instance=parsed_json, schema=self._output_schema)
|
||||
return json.dumps(parsed_json)
|
||||
except json.JSONDecodeError as e:
|
||||
print(f"Error: Failed to parse JSON output: {e}\nRaw output:\n{result}")
|
||||
# Handle error: return raw, raise exception, or try to fix
|
||||
return result # Example: return raw output on failure
|
||||
# except Exception as e: # Catch validation errors if using jsonschema
|
||||
# print(f"Error: JSON output failed schema validation: {e}\nRaw output:\n{result}")
|
||||
# return result
|
||||
elif self._output_type == 'pydantic':
|
||||
try:
|
||||
# Attempt to parse into the Pydantic model
|
||||
model_instance = self._output_schema.model_validate_json(result)
|
||||
# Return the model serialized back to JSON
|
||||
return model_instance.model_dump_json()
|
||||
except ValidationError as e:
|
||||
print(f"Error: Failed to validate Pydantic output: {e}\nRaw output:\n{result}")
|
||||
# Handle error
|
||||
return result # Example: return raw output on failure
|
||||
except json.JSONDecodeError as e:
|
||||
print(f"Error: Failed to parse JSON for Pydantic model: {e}\nRaw output:\n{result}")
|
||||
return result
|
||||
else: # 'text'
|
||||
return result # No processing needed for plain text
|
||||
```
|
||||
|
||||
By implementing these methods, your `MyCustomConverterAdapter` ensures that structured output requests from CrewAI tasks are correctly handled by your integrated external agent, improving the reliability and usability of your custom agent within the CrewAI framework.
|
||||
|
||||
## Out of the Box Adapters
|
||||
|
||||
We provide out-of-the-box adapters for the following frameworks:
|
||||
1. LangGraph
|
||||
2. OpenAI Agents
|
||||
|
||||
## Kicking off a crew with adapted agents
|
||||
|
||||
```python
|
||||
import json
|
||||
import os
|
||||
from typing import List
|
||||
|
||||
from crewai_tools import SerperDevTool
|
||||
from crewai import Agent, Crew, Task
|
||||
from langchain_openai import ChatOpenAI
|
||||
from pydantic import BaseModel
|
||||
|
||||
from crewai.agents.agent_adapters.langgraph.langgraph_adapter import (
|
||||
LangGraphAgentAdapter,
|
||||
)
|
||||
from crewai.agents.agent_adapters.openai_agents.openai_adapter import OpenAIAgentAdapter
|
||||
|
||||
# CrewAI Agent
|
||||
code_helper_agent = Agent(
|
||||
role="Code Helper",
|
||||
goal="Help users solve coding problems effectively and provide clear explanations.",
|
||||
backstory="You are an experienced programmer with deep knowledge across multiple programming languages and frameworks. You specialize in solving complex coding challenges and explaining solutions clearly.",
|
||||
allow_delegation=False,
|
||||
verbose=True,
|
||||
)
|
||||
# OpenAI Agent Adapter
|
||||
link_finder_agent = OpenAIAgentAdapter(
|
||||
role="Link Finder",
|
||||
goal="Find the most relevant and high-quality resources for coding tasks.",
|
||||
backstory="You are a research specialist with a talent for finding the most helpful resources. You're skilled at using search tools to discover documentation, tutorials, and examples that directly address the user's coding needs.",
|
||||
tools=[SerperDevTool()],
|
||||
allow_delegation=False,
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
# LangGraph Agent Adapter
|
||||
reporter_agent = LangGraphAgentAdapter(
|
||||
role="Reporter",
|
||||
goal="Report the results of the tasks.",
|
||||
backstory="You are a reporter who reports the results of the other tasks",
|
||||
llm=ChatOpenAI(model="gpt-4o"),
|
||||
allow_delegation=True,
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
|
||||
class Code(BaseModel):
|
||||
code: str
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Give an answer to the coding question: {task}",
|
||||
expected_output="A thorough answer to the coding question: {task}",
|
||||
agent=code_helper_agent,
|
||||
output_json=Code,
|
||||
)
|
||||
task2 = Task(
|
||||
description="Find links to resources that can help with coding tasks. Use the serper tool to find resources that can help.",
|
||||
expected_output="A list of links to resources that can help with coding tasks",
|
||||
agent=link_finder_agent,
|
||||
)
|
||||
|
||||
|
||||
class Report(BaseModel):
|
||||
code: str
|
||||
links: List[str]
|
||||
|
||||
|
||||
task3 = Task(
|
||||
description="Report the results of the tasks.",
|
||||
expected_output="A report of the results of the tasks. this is the code produced and then the links to the resources that can help with the coding task.",
|
||||
agent=reporter_agent,
|
||||
output_json=Report,
|
||||
)
|
||||
# Use in CrewAI
|
||||
crew = Crew(
|
||||
agents=[code_helper_agent, link_finder_agent, reporter_agent],
|
||||
tasks=[task, task2, task3],
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
result = crew.kickoff(
|
||||
inputs={"task": "How do you implement an abstract class in python?"}
|
||||
)
|
||||
|
||||
# Print raw result first
|
||||
print("Raw result:", result)
|
||||
|
||||
# Handle result based on its type
|
||||
if hasattr(result, "json_dict") and result.json_dict:
|
||||
json_result = result.json_dict
|
||||
print("\nStructured JSON result:")
|
||||
print(f"{json.dumps(json_result, indent=2)}")
|
||||
|
||||
# Access fields safely
|
||||
if isinstance(json_result, dict):
|
||||
if "code" in json_result:
|
||||
print("\nCode:")
|
||||
print(
|
||||
json_result["code"][:200] + "..."
|
||||
if len(json_result["code"]) > 200
|
||||
else json_result["code"]
|
||||
)
|
||||
|
||||
if "links" in json_result:
|
||||
print("\nLinks:")
|
||||
for link in json_result["links"][:5]: # Print first 5 links
|
||||
print(f"- {link}")
|
||||
if len(json_result["links"]) > 5:
|
||||
print(f"...and {len(json_result['links']) - 5} more links")
|
||||
elif hasattr(result, "pydantic") and result.pydantic:
|
||||
print("\nPydantic model result:")
|
||||
print(result.pydantic.model_dump_json(indent=2))
|
||||
else:
|
||||
# Fallback to raw output
|
||||
print("\nNo structured result available, using raw output:")
|
||||
print(result.raw[:500] + "..." if len(result.raw) > 500 else result.raw)
|
||||
|
||||
```
|
||||
@@ -1,9 +1,13 @@
|
||||
# Custom LLM Implementations
|
||||
---
|
||||
title: Custom LLM Implementation
|
||||
description: Learn how to create custom LLM implementations in CrewAI.
|
||||
icon: code
|
||||
---
|
||||
|
||||
## Custom LLM Implementations
|
||||
|
||||
CrewAI now supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to create your own LLM implementations that don't rely on litellm's authentication mechanism.
|
||||
|
||||
## Using Custom LLM Implementations
|
||||
|
||||
To create a custom LLM implementation, you need to:
|
||||
|
||||
1. Inherit from the `BaseLLM` abstract base class
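
As a rough sketch of step 1 (the import path, constructor arguments, and `call()` signature below are assumptions; check the `BaseLLM` class in your installed CrewAI version for the exact interface):

```python Code
from typing import Any, Dict, List, Optional, Union

import requests  # hypothetical HTTP client for a private endpoint

from crewai import BaseLLM  # assumption: adjust the import path to your version


class MyCustomLLM(BaseLLM):
    """Sketch of a custom LLM backed by a private HTTP endpoint."""

    def __init__(self, endpoint: str, api_key: str, model: str = "my-model"):
        super().__init__(model=model)  # assumption: BaseLLM tracks a model name
        self.endpoint = endpoint
        self.api_key = api_key

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> str:
        # assumption: `call` receives chat messages and must return the
        # completion text as a plain string
        response = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model, "messages": messages},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
```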
|
||||
@@ -1,5 +1,5 @@
|
||||
---
|
||||
title: Create Your Own Manager Agent
|
||||
title: Custom Manager Agent
|
||||
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
|
||||
icon: user-shield
|
||||
---
|
||||
|
||||
117 docs/how-to/elasticsearch-integration.md (Normal file)
@@ -0,0 +1,117 @@
|
||||
# Elasticsearch Integration
|
||||
|
||||
CrewAI supports using Elasticsearch as an alternative to ChromaDB for RAG (Retrieval Augmented Generation) storage. This allows you to leverage Elasticsearch's powerful search capabilities and scalability for your AI agents.
|
||||
|
||||
## Installation
|
||||
|
||||
To use Elasticsearch with CrewAI, you need to install the Elasticsearch Python client:
|
||||
|
||||
```bash
|
||||
pip install elasticsearch
|
||||
```
|
||||
|
||||
## Using Elasticsearch for Memory
|
||||
|
||||
You can configure your crew to use Elasticsearch for memory storage:
|
||||
|
||||
```python
|
||||
from crewai import Agent, Crew, Task
|
||||
|
||||
# Create agents and tasks
|
||||
agent = Agent(
|
||||
role="Researcher",
|
||||
goal="Research a topic",
|
||||
backstory="You are a researcher who loves to find information.",
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Research about AI",
|
||||
expected_output="Information about AI",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
# Create a crew with Elasticsearch memory
|
||||
crew = Crew(
|
||||
agents=[agent],
|
||||
tasks=[task],
|
||||
memory_config={
|
||||
"provider": "elasticsearch",
|
||||
"host": "localhost", # Optional, defaults to localhost
|
||||
"port": 9200, # Optional, defaults to 9200
|
||||
"username": "user", # Optional
|
||||
"password": "pass", # Optional
|
||||
},
|
||||
)
|
||||
|
||||
# Execute the crew
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Using Elasticsearch for Knowledge
|
||||
|
||||
You can also use Elasticsearch for knowledge storage:
|
||||
|
||||
```python
|
||||
from crewai import Agent, Crew, Task
|
||||
from crewai.knowledge import Knowledge
|
||||
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
|
||||
|
||||
# Create knowledge with Elasticsearch storage
|
||||
content = "AI is a field of computer science that focuses on creating machines that can perform tasks that typically require human intelligence."
|
||||
string_source = StringKnowledgeSource(
|
||||
content=content, metadata={"topic": "AI"}
|
||||
)
|
||||
|
||||
knowledge = Knowledge(
|
||||
collection_name="test",
|
||||
sources=[string_source],
|
||||
storage_provider="elasticsearch", # Use Elasticsearch
|
||||
# Optional Elasticsearch configuration
|
||||
host="localhost",
|
||||
port=9200,
|
||||
username="user",
|
||||
password="pass",
|
||||
)
|
||||
|
||||
# Create an agent with the knowledge
|
||||
agent = Agent(
|
||||
role="AI Expert",
|
||||
goal="Explain AI",
|
||||
backstory="You are an AI expert who loves to explain AI concepts.",
|
||||
knowledge=[knowledge],
|
||||
)
|
||||
|
||||
# Create a task
|
||||
task = Task(
|
||||
description="Explain what AI is",
|
||||
expected_output="Explanation of AI",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
# Create a crew
|
||||
crew = Crew(
|
||||
agents=[agent],
|
||||
tasks=[task],
|
||||
)
|
||||
|
||||
# Execute the crew
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
The Elasticsearch integration supports the following configuration options:
|
||||
|
||||
- `host`: Elasticsearch host (default: "localhost")
|
||||
- `port`: Elasticsearch port (default: 9200)
|
||||
- `username`: Elasticsearch username (optional)
|
||||
- `password`: Elasticsearch password (optional)
|
||||
- Additional keyword arguments are passed directly to the Elasticsearch client
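
For example, extra client options can be included alongside the documented keys (a sketch; `verify_certs` and `request_timeout` are standard Elasticsearch client options, assumed here to be passed through unchanged):

```python
crew = Crew(
    agents=[agent],
    tasks=[task],
    memory_config={
        "provider": "elasticsearch",
        "host": "localhost",
        "port": 9200,
        # assumed pass-through kwargs forwarded to the Elasticsearch client
        "verify_certs": False,
        "request_timeout": 30,
    },
)
```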
|
||||
|
||||
## Running Tests
|
||||
|
||||
To run the Elasticsearch tests, you need to set the `RUN_ELASTICSEARCH_TESTS` environment variable to `true`:
|
||||
|
||||
```bash
|
||||
RUN_ELASTICSEARCH_TESTS=true pytest tests/memory/elasticsearch_storage_test.py tests/knowledge/elasticsearch_knowledge_storage_test.py tests/integration/elasticsearch_integration_test.py
|
||||
```
|
||||
@@ -20,10 +20,8 @@ Here's an example of how to replay from a task:
|
||||
To use the replay feature, follow these steps:
|
||||
|
||||
<Steps>
|
||||
<Step title="Open your terminal or command prompt.">
|
||||
</Step>
|
||||
<Step title="Navigate to the directory where your CrewAI project is located.">
|
||||
</Step>
|
||||
<Step title="Open your terminal or command prompt."></Step>
|
||||
<Step title="Navigate to the directory where your CrewAI project is located."></Step>
|
||||
<Step title="Run the following commands:">
|
||||
To view the latest kickoff task_ids use:
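```shell Terminal
# lists the task_ids from the latest crew kickoff
crewai log-tasks-outputs
```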
|
||||
|
||||
|
||||
BIN docs/images/v01140.png (Normal file)
Binary file not shown. (After: 2.4 MiB)
@@ -336,9 +336,22 @@ email_summarizer_task:
|
||||
- research_task
|
||||
```
|
||||
|
||||
## Deploying Your Project
|
||||
## Deploying Your Crew
|
||||
|
||||
The easiest way to deploy your crew is through [CrewAI Enterprise](http://app.crewai.com), where you can deploy your crew in a few clicks.
|
||||
The easiest way to deploy your crew to production is through [CrewAI Enterprise](http://app.crewai.com).
|
||||
|
||||
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
|
||||
|
||||
<iframe
|
||||
width="100%"
|
||||
height="400"
|
||||
src="https://www.youtube.com/embed/3EqSV-CYDZA"
|
||||
title="CrewAI Deployment Guide"
|
||||
frameborder="0"
|
||||
style={{ borderRadius: '10px' }}
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
|
||||
allowfullscreen
|
||||
></iframe>
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
---
|
||||
title: Using LangChain Tools
|
||||
description: Learn how to integrate LangChain tools with CrewAI agents to enhance search-based queries and more.
|
||||
title: LangChain Tool
|
||||
description: The `LangChainTool` is a wrapper for LangChain tools and query engines.
|
||||
icon: link
|
||||
---
|
||||
|
||||
## Using LangChain Tools
|
||||
## `LangChainTool`
|
||||
|
||||
<Info>
|
||||
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with your agents.
|
||||
@@ -37,6 +37,7 @@ dependencies = [
|
||||
"tomli>=2.0.2",
|
||||
"blinker>=1.9.0",
|
||||
"json5>=0.10.0",
|
||||
"elasticsearch>=9.0.0",
|
||||
]
|
||||
|
||||
[project.urls]
|
||||
|
||||
@@ -114,6 +114,14 @@ class Agent(BaseAgent):
|
||||
default=None,
|
||||
description="Embedder configuration for the agent.",
|
||||
)
|
||||
agent_knowledge_context: Optional[str] = Field(
|
||||
default=None,
|
||||
description="Knowledge context for the agent.",
|
||||
)
|
||||
crew_knowledge_context: Optional[str] = Field(
|
||||
default=None,
|
||||
description="Knowledge context for the crew.",
|
||||
)
|
||||
|
||||
@model_validator(mode="after")
|
||||
def post_init_setup(self):
|
||||
@@ -177,7 +185,7 @@ class Agent(BaseAgent):
|
||||
self,
|
||||
task: Task,
|
||||
context: Optional[str] = None,
|
||||
tools: Optional[List[BaseTool]] = None,
|
||||
tools: Optional[List[BaseTool]] = None
|
||||
) -> str:
|
||||
"""Execute a task with the agent.
|
||||
|
||||
@@ -188,6 +196,11 @@ class Agent(BaseAgent):
|
||||
|
||||
Returns:
|
||||
Output of the agent
|
||||
|
||||
Raises:
|
||||
TimeoutError: If execution exceeds the maximum execution time.
|
||||
ValueError: If the max execution time is not a positive integer.
|
||||
RuntimeError: If the agent execution fails for other reasons.
|
||||
"""
|
||||
if self.tools_handler:
|
||||
self.tools_handler.last_used_tool = {} # type: ignore # Incompatible types in assignment (expression has type "dict[Never, Never]", variable has type "ToolCalling")
|
||||
@@ -229,22 +242,30 @@ class Agent(BaseAgent):
|
||||
memory = contextual_memory.build_context_for_task(task, context)
|
||||
if memory.strip() != "":
|
||||
task_prompt += self.i18n.slice("memory").format(memory=memory)
|
||||
|
||||
knowledge_config = (
|
||||
self.knowledge_config.model_dump() if self.knowledge_config else {}
|
||||
)
|
||||
if self.knowledge:
|
||||
agent_knowledge_snippets = self.knowledge.query([task.prompt()])
|
||||
agent_knowledge_snippets = self.knowledge.query(
|
||||
[task.prompt()], **knowledge_config
|
||||
)
|
||||
if agent_knowledge_snippets:
|
||||
agent_knowledge_context = extract_knowledge_context(
|
||||
self.agent_knowledge_context = extract_knowledge_context(
|
||||
agent_knowledge_snippets
|
||||
)
|
||||
if agent_knowledge_context:
|
||||
task_prompt += agent_knowledge_context
|
||||
if self.agent_knowledge_context:
|
||||
task_prompt += self.agent_knowledge_context
|
||||
|
||||
if self.crew:
|
||||
knowledge_snippets = self.crew.query_knowledge([task.prompt()])
|
||||
knowledge_snippets = self.crew.query_knowledge(
|
||||
[task.prompt()], **knowledge_config
|
||||
)
|
||||
if knowledge_snippets:
|
||||
crew_knowledge_context = extract_knowledge_context(knowledge_snippets)
|
||||
if crew_knowledge_context:
|
||||
task_prompt += crew_knowledge_context
|
||||
self.crew_knowledge_context = extract_knowledge_context(
|
||||
knowledge_snippets
|
||||
)
|
||||
if self.crew_knowledge_context:
|
||||
task_prompt += self.crew_knowledge_context
|
||||
|
||||
tools = tools or self.tools or []
|
||||
self.create_agent_executor(tools=tools, task=task)
|
||||
@@ -264,14 +285,26 @@ class Agent(BaseAgent):
|
||||
task=task,
|
||||
),
|
||||
)
|
||||
result = self.agent_executor.invoke(
|
||||
{
|
||||
"input": task_prompt,
|
||||
"tool_names": self.agent_executor.tools_names,
|
||||
"tools": self.agent_executor.tools_description,
|
||||
"ask_for_human_input": task.human_input,
|
||||
}
|
||||
)["output"]
|
||||
|
||||
# Determine execution method based on timeout setting
|
||||
if self.max_execution_time is not None:
|
||||
if not isinstance(self.max_execution_time, int) or self.max_execution_time <= 0:
|
||||
raise ValueError("Max Execution time must be a positive integer greater than zero")
|
||||
result = self._execute_with_timeout(task_prompt, task, self.max_execution_time)
|
||||
else:
|
||||
result = self._execute_without_timeout(task_prompt, task)
|
||||
|
||||
except TimeoutError as e:
|
||||
# Propagate TimeoutError without retry
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=AgentExecutionErrorEvent(
|
||||
agent=self,
|
||||
task=task,
|
||||
error=str(e),
|
||||
),
|
||||
)
|
||||
raise e
|
||||
except Exception as e:
|
||||
if e.__class__.__module__.startswith("litellm"):
|
||||
# Do not retry on litellm errors
|
||||
@@ -312,6 +345,66 @@ class Agent(BaseAgent):
|
||||
)
|
||||
return result
|
||||
|
||||
def _execute_with_timeout(
|
||||
self,
|
||||
task_prompt: str,
|
||||
task: Task,
|
||||
timeout: int
|
||||
) -> str:
|
||||
"""Execute a task with a timeout.
|
||||
|
||||
Args:
|
||||
task_prompt: The prompt to send to the agent.
|
||||
task: The task being executed.
|
||||
timeout: Maximum execution time in seconds.
|
||||
|
||||
Returns:
|
||||
The output of the agent.
|
||||
|
||||
Raises:
|
||||
TimeoutError: If execution exceeds the timeout.
|
||||
RuntimeError: If execution fails for other reasons.
|
||||
"""
|
||||
import concurrent.futures
|
||||
with concurrent.futures.ThreadPoolExecutor() as executor:
|
||||
future = executor.submit(
|
||||
self._execute_without_timeout,
|
||||
task_prompt=task_prompt,
|
||||
task=task
|
||||
)
|
||||
|
||||
try:
|
||||
return future.result(timeout=timeout)
|
||||
except concurrent.futures.TimeoutError:
|
||||
future.cancel()
|
||||
raise TimeoutError(f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task.")
|
||||
except Exception as e:
|
||||
future.cancel()
|
||||
raise RuntimeError(f"Task execution failed: {str(e)}")
|
||||
|
||||
def _execute_without_timeout(
|
||||
self,
|
||||
task_prompt: str,
|
||||
task: Task
|
||||
) -> str:
|
||||
"""Execute a task without a timeout.
|
||||
|
||||
Args:
|
||||
task_prompt: The prompt to send to the agent.
|
||||
task: The task being executed.
|
||||
|
||||
Returns:
|
||||
The output of the agent.
|
||||
"""
|
||||
return self.agent_executor.invoke(
|
||||
{
|
||||
"input": task_prompt,
|
||||
"tool_names": self.agent_executor.tools_names,
|
||||
"tools": self.agent_executor.tools_description,
|
||||
"ask_for_human_input": task.human_input,
|
||||
}
|
||||
)["output"]
|
||||
|
||||
def create_agent_executor(
|
||||
self, tools: Optional[List[BaseTool]] = None, task=None
|
||||
) -> None:
|
||||
|
||||
0 src/crewai/agents/agent_adapters/__init__.py (Normal file)
42 src/crewai/agents/agent_adapters/base_agent_adapter.py (Normal file)
@@ -0,0 +1,42 @@
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from pydantic import PrivateAttr
|
||||
|
||||
from crewai.agent import BaseAgent
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
|
||||
class BaseAgentAdapter(BaseAgent, ABC):
|
||||
"""Base class for all agent adapters in CrewAI.
|
||||
|
||||
This abstract class defines the common interface and functionality that all
|
||||
agent adapters must implement. It extends BaseAgent to maintain compatibility
|
||||
with the CrewAI framework while adding adapter-specific requirements.
|
||||
"""
|
||||
|
||||
adapted_structured_output: bool = False
|
||||
_agent_config: Optional[Dict[str, Any]] = PrivateAttr(default=None)
|
||||
|
||||
model_config = {"arbitrary_types_allowed": True}
|
||||
|
||||
def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
|
||||
super().__init__(adapted_agent=True, **kwargs)
|
||||
self._agent_config = agent_config
|
||||
|
||||
@abstractmethod
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure and adapt tools for the specific agent implementation.
|
||||
|
||||
Args:
|
||||
tools: Optional list of BaseTool instances to be configured
|
||||
"""
|
||||
pass
|
||||
|
||||
def configure_structured_output(self, structured_output: Any) -> None:
|
||||
"""Configure the structured output for the specific agent implementation.
|
||||
|
||||
Args:
|
||||
structured_output: The structured output to be configured
|
||||
"""
|
||||
pass
|
||||
29 src/crewai/agents/agent_adapters/base_converter_adapter.py (Normal file)
@@ -0,0 +1,29 @@
|
||||
from abc import ABC, abstractmethod
|
||||
|
||||
|
||||
class BaseConverterAdapter(ABC):
|
||||
"""Base class for all converter adapters in CrewAI.
|
||||
|
||||
This abstract class defines the common interface and functionality that all
|
||||
converter adapters must implement for converting structured output.
|
||||
"""
|
||||
|
||||
def __init__(self, agent_adapter):
|
||||
self.agent_adapter = agent_adapter
|
||||
|
||||
@abstractmethod
|
||||
def configure_structured_output(self, task) -> None:
|
||||
"""Configure agents to return structured output.
|
||||
Must support json and pydantic output.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def enhance_system_prompt(self, base_prompt: str) -> str:
|
||||
"""Enhance the system prompt with structured output instructions."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def post_process_result(self, result: str) -> str:
|
||||
"""Post-process the result to ensure it matches the expected format: string."""
|
||||
pass
|
||||
37 src/crewai/agents/agent_adapters/base_tool_adapter.py (Normal file)
@@ -0,0 +1,37 @@
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Any, List, Optional
|
||||
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
|
||||
|
||||
class BaseToolAdapter(ABC):
|
||||
"""Base class for all tool adapters in CrewAI.
|
||||
|
||||
This abstract class defines the common interface that all tool adapters
|
||||
must implement. It provides the structure for adapting CrewAI tools to
|
||||
different frameworks and platforms.
|
||||
"""
|
||||
|
||||
original_tools: List[BaseTool]
|
||||
converted_tools: List[Any]
|
||||
|
||||
def __init__(self, tools: Optional[List[BaseTool]] = None):
|
||||
self.original_tools = tools or []
|
||||
self.converted_tools = []
|
||||
|
||||
@abstractmethod
|
||||
def configure_tools(self, tools: List[BaseTool]) -> None:
|
||||
"""Configure and convert tools for the specific implementation.
|
||||
|
||||
Args:
|
||||
tools: List of BaseTool instances to be configured and converted
|
||||
"""
|
||||
pass
|
||||
|
||||
def tools(self) -> List[Any]:
|
||||
"""Return all converted tools."""
|
||||
return self.converted_tools
|
||||
|
||||
def sanitize_tool_name(self, tool_name: str) -> str:
|
||||
"""Sanitize tool name for API compatibility."""
|
||||
return tool_name.replace(" ", "_")
|
||||
226 src/crewai/agents/agent_adapters/langgraph/langgraph_adapter.py (Normal file)
@@ -0,0 +1,226 @@
|
||||
from typing import Any, AsyncIterable, Dict, List, Optional
|
||||
|
||||
from pydantic import Field, PrivateAttr
|
||||
|
||||
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
|
||||
from crewai.agents.agent_adapters.langgraph.langgraph_tool_adapter import (
|
||||
LangGraphToolAdapter,
|
||||
)
|
||||
from crewai.agents.agent_adapters.langgraph.structured_output_converter import (
|
||||
LangGraphConverterAdapter,
|
||||
)
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from crewai.tools.agent_tools.agent_tools import AgentTools
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
from crewai.utilities import Logger
|
||||
from crewai.utilities.converter import Converter
|
||||
from crewai.utilities.events import crewai_event_bus
|
||||
from crewai.utilities.events.agent_events import (
|
||||
AgentExecutionCompletedEvent,
|
||||
AgentExecutionErrorEvent,
|
||||
AgentExecutionStartedEvent,
|
||||
)
|
||||
|
||||
try:
|
||||
from langchain_core.messages import ToolMessage
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
from langgraph.prebuilt import create_react_agent
|
||||
|
||||
LANGGRAPH_AVAILABLE = True
|
||||
except ImportError:
|
||||
LANGGRAPH_AVAILABLE = False
|
||||
|
||||
|
||||
class LangGraphAgentAdapter(BaseAgentAdapter):
|
||||
"""Adapter for LangGraph agents to work with CrewAI."""
|
||||
|
||||
model_config = {"arbitrary_types_allowed": True}
|
||||
|
||||
_logger: Logger = PrivateAttr(default_factory=lambda: Logger())
|
||||
_tool_adapter: LangGraphToolAdapter = PrivateAttr()
|
||||
_graph: Any = PrivateAttr(default=None)
|
||||
_memory: Any = PrivateAttr(default=None)
|
||||
_max_iterations: int = PrivateAttr(default=10)
|
||||
function_calling_llm: Any = Field(default=None)
|
||||
step_callback: Any = Field(default=None)
|
||||
|
||||
model: str = Field(default="gpt-4o")
|
||||
verbose: bool = Field(default=False)
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
role: str,
|
||||
goal: str,
|
||||
backstory: str,
|
||||
tools: Optional[List[BaseTool]] = None,
|
||||
llm: Any = None,
|
||||
max_iterations: int = 10,
|
||||
agent_config: Optional[Dict[str, Any]] = None,
|
||||
**kwargs,
|
||||
):
|
||||
"""Initialize the LangGraph agent adapter."""
|
||||
if not LANGGRAPH_AVAILABLE:
|
||||
raise ImportError(
|
||||
"LangGraph Agent Dependencies are not installed. Please install it using `uv add langchain-core langgraph`"
|
||||
)
|
||||
super().__init__(
|
||||
role=role,
|
||||
goal=goal,
|
||||
backstory=backstory,
|
||||
tools=tools,
|
||||
llm=llm or self.model,
|
||||
agent_config=agent_config,
|
||||
**kwargs,
|
||||
)
|
||||
self._tool_adapter = LangGraphToolAdapter(tools=tools)
|
||||
self._converter_adapter = LangGraphConverterAdapter(self)
|
||||
self._max_iterations = max_iterations
|
||||
self._setup_graph()
|
||||
|
||||
def _setup_graph(self) -> None:
|
||||
"""Set up the LangGraph workflow graph."""
|
||||
try:
|
||||
self._memory = MemorySaver()
|
||||
|
||||
converted_tools: List[Any] = self._tool_adapter.tools()
|
||||
if self._agent_config:
|
||||
self._graph = create_react_agent(
|
||||
model=self.llm,
|
||||
tools=converted_tools,
|
||||
checkpointer=self._memory,
|
||||
debug=self.verbose,
|
||||
**self._agent_config,
|
||||
)
|
||||
else:
|
||||
self._graph = create_react_agent(
|
||||
model=self.llm,
|
||||
tools=converted_tools or [],
|
||||
checkpointer=self._memory,
|
||||
debug=self.verbose,
|
||||
)
|
||||
|
||||
except ImportError as e:
|
||||
self._logger.log(
|
||||
"error", f"Failed to import LangGraph dependencies: {str(e)}"
|
||||
)
|
||||
raise
|
||||
except Exception as e:
|
||||
self._logger.log("error", f"Error setting up LangGraph agent: {str(e)}")
|
||||
raise
|
||||
|
||||
def _build_system_prompt(self) -> str:
|
||||
"""Build a system prompt for the LangGraph agent."""
|
||||
base_prompt = f"""
|
||||
You are {self.role}.
|
||||
|
||||
Your goal is: {self.goal}
|
||||
|
||||
Your backstory: {self.backstory}
|
||||
|
||||
When working on tasks, think step-by-step and use the available tools when necessary.
|
||||
"""
|
||||
return self._converter_adapter.enhance_system_prompt(base_prompt)
|
||||
|
||||
def execute_task(
|
||||
self,
|
||||
task: Any,
|
||||
context: Optional[str] = None,
|
||||
tools: Optional[List[BaseTool]] = None,
|
||||
) -> str:
|
||||
"""Execute a task using the LangGraph workflow."""
|
||||
self.create_agent_executor(tools)
|
||||
|
||||
self.configure_structured_output(task)
|
||||
|
||||
try:
|
||||
task_prompt = task.prompt() if hasattr(task, "prompt") else str(task)
|
||||
|
||||
if context:
|
||||
task_prompt = self.i18n.slice("task_with_context").format(
|
||||
task=task_prompt, context=context
|
||||
)
|
||||
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=AgentExecutionStartedEvent(
|
||||
agent=self,
|
||||
tools=self.tools,
|
||||
task_prompt=task_prompt,
|
||||
task=task,
|
||||
),
|
||||
)
|
||||
|
||||
session_id = f"task_{id(task)}"
|
||||
|
||||
config = {"configurable": {"thread_id": session_id}}
|
||||
|
||||
result = self._graph.invoke(
|
||||
{
|
||||
"messages": [
|
||||
("system", self._build_system_prompt()),
|
||||
("user", task_prompt),
|
||||
]
|
||||
},
|
||||
config,
|
||||
)
|
||||
|
||||
messages = result.get("messages", [])
|
||||
last_message = messages[-1] if messages else None
|
||||
|
||||
final_answer = ""
|
||||
if isinstance(last_message, dict):
|
||||
final_answer = last_message.get("content", "")
|
||||
elif hasattr(last_message, "content"):
|
||||
final_answer = getattr(last_message, "content", "")
|
||||
|
||||
final_answer = (
|
||||
self._converter_adapter.post_process_result(final_answer)
|
||||
or "Task execution completed but no clear answer was provided."
|
||||
)
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=AgentExecutionCompletedEvent(
|
||||
agent=self, task=task, output=final_answer
|
||||
),
|
||||
)
|
||||
|
||||
return final_answer
|
||||
|
||||
except Exception as e:
|
||||
self._logger.log("error", f"Error executing LangGraph task: {str(e)}")
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=AgentExecutionErrorEvent(
|
||||
agent=self,
|
||||
task=task,
|
||||
error=str(e),
|
||||
),
|
||||
)
|
||||
raise
|
||||
|
||||
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure the LangGraph agent for execution."""
|
||||
self.configure_tools(tools)
|
||||
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure tools for the LangGraph agent."""
|
||||
if tools:
|
||||
all_tools = list(self.tools or []) + list(tools or [])
|
||||
self._tool_adapter.configure_tools(all_tools)
|
||||
available_tools = self._tool_adapter.tools()
|
||||
self._graph.tools = available_tools
|
||||
|
||||
def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
|
||||
"""Implement delegation tools support for LangGraph."""
|
||||
agent_tools = AgentTools(agents=agents)
|
||||
return agent_tools.tools()
|
||||
|
||||
def get_output_converter(
|
||||
self, llm: Any, text: str, model: Any, instructions: str
|
||||
) -> Any:
|
||||
"""Convert output format if needed."""
|
||||
return Converter(llm=llm, text=text, model=model, instructions=instructions)
|
||||
|
||||
def configure_structured_output(self, task) -> None:
|
||||
"""Configure the structured output for LangGraph."""
|
||||
self._converter_adapter.configure_structured_output(task)
|
||||
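Based on the constructor and `execute_task` signature added above, usage presumably looks like the sketch below; the `Task` import path and its keyword arguments are assumptions, since the wiring is not shown in this diff:

```python
from crewai import Task  # assumed import path
from crewai.agents.agent_adapters.langgraph.langgraph_adapter import LangGraphAgentAdapter

# Requires the optional dependencies: `uv add langchain-core langgraph`.
researcher = LangGraphAgentAdapter(
    role="Researcher",
    goal="Summarize recent AI news",
    backstory="An analyst who works through LangGraph's prebuilt ReAct loop.",
    model="gpt-4o",
    max_iterations=5,
)

task = Task(
    description="List three recent AI model releases",
    expected_output="A short bullet list",
    agent=researcher,
)
print(researcher.execute_task(task))
```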
@@ -0,0 +1,61 @@
import inspect
from typing import Any, List, Optional

from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools.base_tool import BaseTool


class LangGraphToolAdapter(BaseToolAdapter):
    """Adapts CrewAI tools to LangGraph agent tool compatible format"""

    def __init__(self, tools: Optional[List[BaseTool]] = None):
        self.original_tools = tools or []
        self.converted_tools = []

    def configure_tools(self, tools: List[BaseTool]) -> None:
        """
        Configure and convert CrewAI tools to LangGraph-compatible format.
        LangGraph expects tools in langchain_core.tools format.
        """
        from langchain_core.tools import BaseTool, StructuredTool

        converted_tools = []
        if self.original_tools:
            all_tools = tools + self.original_tools
        else:
            all_tools = tools
        for tool in all_tools:
            if isinstance(tool, BaseTool):
                converted_tools.append(tool)
                continue

            sanitized_name = self.sanitize_tool_name(tool.name)

            async def tool_wrapper(*args, tool=tool, **kwargs):
                output = None
                if len(args) > 0 and isinstance(args[0], str):
                    output = tool.run(args[0])
                elif "input" in kwargs:
                    output = tool.run(kwargs["input"])
                else:
                    output = tool.run(**kwargs)

                if inspect.isawaitable(output):
                    result = await output
                else:
                    result = output
                return result

            converted_tool = StructuredTool(
                name=sanitized_name,
                description=tool.description,
                func=tool_wrapper,
                args_schema=tool.args_schema,
            )

            converted_tools.append(converted_tool)

        self.converted_tools = converted_tools

    def tools(self) -> List[Any]:
        return self.converted_tools or []
@@ -0,0 +1,80 @@
import json

from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.converter import generate_model_description


class LangGraphConverterAdapter(BaseConverterAdapter):
    """Adapter for handling structured output conversion in LangGraph agents"""

    def __init__(self, agent_adapter):
        """Initialize the converter adapter with a reference to the agent adapter"""
        self.agent_adapter = agent_adapter
        self._output_format = None
        self._schema = None
        self._system_prompt_appendix = None

    def configure_structured_output(self, task) -> None:
        """Configure the structured output for LangGraph."""
        if not (task.output_json or task.output_pydantic):
            self._output_format = None
            self._schema = None
            self._system_prompt_appendix = None
            return

        if task.output_json:
            self._output_format = "json"
            self._schema = generate_model_description(task.output_json)
        elif task.output_pydantic:
            self._output_format = "pydantic"
            self._schema = generate_model_description(task.output_pydantic)

        self._system_prompt_appendix = self._generate_system_prompt_appendix()

    def _generate_system_prompt_appendix(self) -> str:
        """Generate an appendix for the system prompt to enforce structured output"""
        if not self._output_format or not self._schema:
            return ""

        return f"""
Important: Your final answer MUST be provided in the following structured format:

{self._schema}

DO NOT include any markdown code blocks, backticks, or other formatting around your response.
The output should be raw JSON that exactly matches the specified schema.
"""

    def enhance_system_prompt(self, original_prompt: str) -> str:
        """Add structured output instructions to the system prompt if needed"""
        if not self._system_prompt_appendix:
            return original_prompt

        return f"{original_prompt}\n{self._system_prompt_appendix}"

    def post_process_result(self, result: str) -> str:
        """Post-process the result to ensure it matches the expected format"""
        if not self._output_format:
            return result

        # Try to extract valid JSON if it's wrapped in code blocks or other text
        if self._output_format in ["json", "pydantic"]:
            try:
                # First, try to parse as is
                json.loads(result)
                return result
            except json.JSONDecodeError:
                # Try to extract JSON from the text
                import re

                json_match = re.search(r"(\{.*\})", result, re.DOTALL)
                if json_match:
                    try:
                        extracted = json_match.group(1)
                        # Validate it's proper JSON
                        json.loads(extracted)
                        return extracted
                    except:
                        pass

        return result
178
src/crewai/agents/agent_adapters/openai_agents/openai_adapter.py
Normal file
@@ -0,0 +1,178 @@
from typing import Any, List, Optional

from pydantic import Field, PrivateAttr

from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.openai_agents.structured_output_converter import (
    OpenAIConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Logger
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.agent_events import (
    AgentExecutionCompletedEvent,
    AgentExecutionErrorEvent,
    AgentExecutionStartedEvent,
)

try:
    from agents import Agent as OpenAIAgent  # type: ignore
    from agents import Runner, enable_verbose_stdout_logging  # type: ignore

    from .openai_agent_tool_adapter import OpenAIAgentToolAdapter

    OPENAI_AVAILABLE = True
except ImportError:
    OPENAI_AVAILABLE = False


class OpenAIAgentAdapter(BaseAgentAdapter):
    """Adapter for OpenAI Assistants"""

    model_config = {"arbitrary_types_allowed": True}

    _openai_agent: "OpenAIAgent" = PrivateAttr()
    _logger: Logger = PrivateAttr(default_factory=lambda: Logger())
    _active_thread: Optional[str] = PrivateAttr(default=None)
    function_calling_llm: Any = Field(default=None)
    step_callback: Any = Field(default=None)
    _tool_adapter: "OpenAIAgentToolAdapter" = PrivateAttr()
    _converter_adapter: OpenAIConverterAdapter = PrivateAttr()

    def __init__(
        self,
        model: str = "gpt-4o-mini",
        tools: Optional[List[BaseTool]] = None,
        agent_config: Optional[dict] = None,
        **kwargs,
    ):
        if not OPENAI_AVAILABLE:
            raise ImportError(
                "OpenAI Agent Dependencies are not installed. Please install it using `uv add openai-agents`"
            )
        else:
            role = kwargs.pop("role", None)
            goal = kwargs.pop("goal", None)
            backstory = kwargs.pop("backstory", None)
            super().__init__(
                role=role,
                goal=goal,
                backstory=backstory,
                tools=tools,
                agent_config=agent_config,
                **kwargs,
            )
            self._tool_adapter = OpenAIAgentToolAdapter(tools=tools)
            self.llm = model
            self._converter_adapter = OpenAIConverterAdapter(self)

    def _build_system_prompt(self) -> str:
        """Build a system prompt for the OpenAI agent."""
        base_prompt = f"""
            You are {self.role}.

            Your goal is: {self.goal}

            Your backstory: {self.backstory}

            When working on tasks, think step-by-step and use the available tools when necessary.
        """
        return self._converter_adapter.enhance_system_prompt(base_prompt)

    def execute_task(
        self,
        task: Any,
        context: Optional[str] = None,
        tools: Optional[List[BaseTool]] = None,
    ) -> str:
        """Execute a task using the OpenAI Assistant"""
        self._converter_adapter.configure_structured_output(task)
        self.create_agent_executor(tools)

        if self.verbose:
            enable_verbose_stdout_logging()

        try:
            task_prompt = task.prompt()
            if context:
                task_prompt = self.i18n.slice("task_with_context").format(
                    task=task_prompt, context=context
                )
            crewai_event_bus.emit(
                self,
                event=AgentExecutionStartedEvent(
                    agent=self,
                    tools=self.tools,
                    task_prompt=task_prompt,
                    task=task,
                ),
            )
            result = self.agent_executor.run_sync(self._openai_agent, task_prompt)
            final_answer = self.handle_execution_result(result)
            crewai_event_bus.emit(
                self,
                event=AgentExecutionCompletedEvent(
                    agent=self, task=task, output=final_answer
                ),
            )
            return final_answer

        except Exception as e:
            self._logger.log("error", f"Error executing OpenAI task: {str(e)}")
            crewai_event_bus.emit(
                self,
                event=AgentExecutionErrorEvent(
                    agent=self,
                    task=task,
                    error=str(e),
                ),
            )
            raise

    def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
        """
        Configure the OpenAI agent for execution.
        While OpenAI handles execution differently through Runner,
        we can use this method to set up tools and configurations.
        """
        all_tools = list(self.tools or []) + list(tools or [])

        instructions = self._build_system_prompt()
        self._openai_agent = OpenAIAgent(
            name=self.role,
            instructions=instructions,
            model=self.llm,
            **self._agent_config or {},
        )

        if all_tools:
            self.configure_tools(all_tools)

        self.agent_executor = Runner

    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        """Configure tools for the OpenAI Assistant"""
        if tools:
            self._tool_adapter.configure_tools(tools)
            if self._tool_adapter.converted_tools:
                self._openai_agent.tools = self._tool_adapter.converted_tools

    def handle_execution_result(self, result: Any) -> str:
        """Process OpenAI Assistant execution result converting any structured output to a string"""
        return self._converter_adapter.post_process_result(result.final_output)

    def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
        """Implement delegation tools support"""
        agent_tools = AgentTools(agents=agents)
        tools = agent_tools.tools()
        return tools

    def configure_structured_output(self, task) -> None:
        """Configure the structured output for the specific agent implementation.

        Args:
            structured_output: The structured output to be configured
        """
        self._converter_adapter.configure_structured_output(task)
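The OpenAI adapter follows the same pattern as the LangGraph one; assuming the `openai-agents` extra is installed, a rough usage sketch (the task object and its construction are not shown in this diff and are assumed):

```python
from crewai.agents.agent_adapters.openai_agents.openai_adapter import OpenAIAgentAdapter

# Requires the optional dependency: `uv add openai-agents`.
agent = OpenAIAgentAdapter(
    role="Support Analyst",
    goal="Triage incoming tickets",
    backstory="Works through the OpenAI Agents SDK Runner.",
    model="gpt-4o-mini",
)

# `task` would be any CrewAI Task; execute_task() returns the final answer as a string.
# answer = agent.execute_task(task)
```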
@@ -0,0 +1,91 @@
import inspect
from typing import Any, List, Optional

from agents import FunctionTool, Tool

from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool


class OpenAIAgentToolAdapter(BaseToolAdapter):
    """Adapter for OpenAI Assistant tools"""

    def __init__(self, tools: Optional[List[BaseTool]] = None):
        self.original_tools = tools or []

    def configure_tools(self, tools: List[BaseTool]) -> None:
        """Configure tools for the OpenAI Assistant"""
        if self.original_tools:
            all_tools = tools + self.original_tools
        else:
            all_tools = tools
        if all_tools:
            self.converted_tools = self._convert_tools_to_openai_format(all_tools)

    def _convert_tools_to_openai_format(
        self, tools: Optional[List[BaseTool]]
    ) -> List[Tool]:
        """Convert CrewAI tools to OpenAI Assistant tool format"""
        if not tools:
            return []

        def sanitize_tool_name(name: str) -> str:
            """Convert tool name to match OpenAI's required pattern"""
            import re

            sanitized = re.sub(r"[^a-zA-Z0-9_-]", "_", name).lower()
            return sanitized

        def create_tool_wrapper(tool: BaseTool):
            """Create a wrapper function that handles the OpenAI function tool interface"""

            async def wrapper(context_wrapper: Any, arguments: Any) -> Any:
                # Get the parameter name from the schema
                param_name = list(
                    tool.args_schema.model_json_schema()["properties"].keys()
                )[0]

                # Handle different argument types
                if isinstance(arguments, dict):
                    args_dict = arguments
                elif isinstance(arguments, str):
                    try:
                        import json

                        args_dict = json.loads(arguments)
                    except json.JSONDecodeError:
                        args_dict = {param_name: arguments}
                else:
                    args_dict = {param_name: str(arguments)}

                # Run the tool with the processed arguments
                output = tool._run(**args_dict)

                # Await if the tool returned a coroutine
                if inspect.isawaitable(output):
                    result = await output
                else:
                    result = output

                # Ensure the result is JSON serializable
                if isinstance(result, (dict, list, str, int, float, bool, type(None))):
                    return result
                return str(result)

            return wrapper

        openai_tools = []
        for tool in tools:
            schema = tool.args_schema.model_json_schema()

            schema.update({"additionalProperties": False, "type": "object"})

            openai_tool = FunctionTool(
                name=sanitize_tool_name(tool.name),
                description=tool.description,
                params_json_schema=schema,
                on_invoke_tool=create_tool_wrapper(tool),
            )
            openai_tools.append(openai_tool)

        return openai_tools
@@ -0,0 +1,122 @@
import json
import re

from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.converter import generate_model_description
from crewai.utilities.i18n import I18N


class OpenAIConverterAdapter(BaseConverterAdapter):
    """
    Adapter for handling structured output conversion in OpenAI agents.

    This adapter enhances the OpenAI agent to handle structured output formats
    and post-processes the results when needed.

    Attributes:
        _output_format: The expected output format (json, pydantic, or None)
        _schema: The schema description for the expected output
        _output_model: The Pydantic model for the output
    """

    def __init__(self, agent_adapter):
        """Initialize the converter adapter with a reference to the agent adapter"""
        self.agent_adapter = agent_adapter
        self._output_format = None
        self._schema = None
        self._output_model = None

    def configure_structured_output(self, task) -> None:
        """
        Configure the structured output for OpenAI agent based on task requirements.

        Args:
            task: The task containing output format requirements
        """
        # Reset configuration
        self._output_format = None
        self._schema = None
        self._output_model = None

        # If no structured output is required, return early
        if not (task.output_json or task.output_pydantic):
            return

        # Configure based on task output format
        if task.output_json:
            self._output_format = "json"
            self._schema = generate_model_description(task.output_json)
            self.agent_adapter._openai_agent.output_type = task.output_json
            self._output_model = task.output_json
        elif task.output_pydantic:
            self._output_format = "pydantic"
            self._schema = generate_model_description(task.output_pydantic)
            self.agent_adapter._openai_agent.output_type = task.output_pydantic
            self._output_model = task.output_pydantic

    def enhance_system_prompt(self, base_prompt: str) -> str:
        """
        Enhance the base system prompt with structured output requirements if needed.

        Args:
            base_prompt: The original system prompt

        Returns:
            Enhanced system prompt with output format instructions if needed
        """
        if not self._output_format:
            return base_prompt

        output_schema = (
            I18N()
            .slice("formatted_task_instructions")
            .format(output_format=self._schema)
        )

        return f"{base_prompt}\n\n{output_schema}"

    def post_process_result(self, result: str) -> str:
        """
        Post-process the result to ensure it matches the expected format.

        This method attempts to extract valid JSON from the result if necessary.

        Args:
            result: The raw result from the agent

        Returns:
            Processed result conforming to the expected output format
        """
        if not self._output_format:
            return result
        # Try to extract valid JSON if it's wrapped in code blocks or other text
        if isinstance(result, str) and self._output_format in ["json", "pydantic"]:
            # First, try to parse as is
            try:
                json.loads(result)
                return result
            except json.JSONDecodeError:
                # Try to extract JSON from markdown code blocks
                code_block_pattern = r"```(?:json)?\s*([\s\S]*?)```"
                code_blocks = re.findall(code_block_pattern, result)

                for block in code_blocks:
                    try:
                        json.loads(block.strip())
                        return block.strip()
                    except json.JSONDecodeError:
                        continue

                # Try to extract any JSON-like structure
                json_pattern = r"(\{[\s\S]*\})"
                json_matches = re.findall(json_pattern, result, re.DOTALL)

                for match in json_matches:
                    try:
                        json.loads(match)
                        return match
                    except json.JSONDecodeError:
                        continue

        # If all extraction attempts fail, return the original
        return str(result)
@@ -19,6 +19,7 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.security.security_config import SecurityConfig
from crewai.tools.base_tool import BaseTool, Tool
@@ -62,8 +63,6 @@ class BaseAgent(ABC, BaseModel):
            Abstract method to execute a task.
        create_agent_executor(tools=None) -> None:
            Abstract method to create an agent executor.
        _parse_tools(tools: List[BaseTool]) -> List[Any]:
            Abstract method to parse tools.
        get_delegation_tools(agents: List["BaseAgent"]):
            Abstract method to set the agents task tools for handling delegation and question asking to other agents in crew.
        get_output_converter(llm, model, instructions):
@@ -154,6 +153,13 @@ class BaseAgent(ABC, BaseModel):
    callbacks: List[Callable] = Field(
        default=[], description="Callbacks to be used for the agent"
    )
    adapted_agent: bool = Field(
        default=False, description="Whether the agent is adapted"
    )
    knowledge_config: Optional[KnowledgeConfig] = Field(
        default=None,
        description="Knowledge configuration for the agent such as limits and threshold",
    )

    @model_validator(mode="before")
    @classmethod
@@ -170,15 +176,15 @@ class BaseAgent(ABC, BaseModel):
        tool meets these criteria, it is processed and added to the list of
        tools. Otherwise, a ValueError is raised.
        """
        if not tools:
            return []

        processed_tools = []
        required_attrs = ["name", "func", "description"]
        for tool in tools:
            if isinstance(tool, BaseTool):
                processed_tools.append(tool)
            elif (
                hasattr(tool, "name")
                and hasattr(tool, "func")
                and hasattr(tool, "description")
            ):
            elif all(hasattr(tool, attr) for attr in required_attrs):
                # Tool has the required attributes, create a Tool instance
                processed_tools.append(Tool.from_langchain(tool))
            else:
@@ -260,13 +266,6 @@ class BaseAgent(ABC, BaseModel):
        """Set the task tools that init BaseAgenTools class."""
        pass

    @abstractmethod
    def get_output_converter(
        self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str
    ) -> Converter:
        """Get the converter class for the agent to create json/pydantic outputs."""
        pass

    def copy(self: T) -> T:  # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
        """Create a deep copy of the Agent."""
        exclude = {
@@ -117,7 +117,9 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
        published_handle = publish_response.json()["handle"]
        console.print(
            f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
            f"Successfully published `{published_handle}` ({project_version}).\n\n"
            + "⚠️ Security checks are running in the background. Your tool will be available once these are complete.\n"
            + f"You can monitor the status or access your tool here:\nhttps://app.crewai.com/crewai_plus/tools/{published_handle}",
            style="bold green",
        )
@@ -153,8 +155,12 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
        login_response_json = login_response.json()

        settings = Settings()
        settings.tool_repository_username = login_response_json["credential"]["username"]
        settings.tool_repository_password = login_response_json["credential"]["password"]
        settings.tool_repository_username = login_response_json["credential"][
            "username"
        ]
        settings.tool_repository_password = login_response_json["credential"][
            "password"
        ]
        settings.dump()

        console.print(
@@ -179,7 +185,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
            capture_output=False,
            env=self._build_env_with_credentials(repository_handle),
            text=True,
            check=True
            check=True,
        )

        if add_package_result.stderr:
@@ -204,7 +210,11 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
        settings = Settings()

        env = os.environ.copy()
        env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
        env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
        env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(
            settings.tool_repository_username or ""
        )
        env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(
            settings.tool_repository_password or ""
        )

        return env
@@ -304,9 +304,7 @@ class Crew(BaseModel):
        """Initialize private memory attributes."""
        self._external_memory = (
            # External memory doesn't support a default value since it was designed to be managed entirely externally
            self.external_memory.set_crew(self)
            if self.external_memory
            else None
            self.external_memory.set_crew(self) if self.external_memory else None
        )

        self._long_term_memory = self.long_term_memory
@@ -1136,9 +1134,13 @@
        result = self._execute_tasks(self.tasks, start_index, True)
        return result

    def query_knowledge(self, query: List[str]) -> Union[List[Dict[str, Any]], None]:
    def query_knowledge(
        self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
    ) -> Union[List[Dict[str, Any]], None]:
        if self.knowledge:
            return self.knowledge.query(query)
            return self.knowledge.query(
                query, results_limit=results_limit, score_threshold=score_threshold
            )
        return None

    def fetch_inputs(self) -> Set[str]:
@@ -1220,9 +1222,13 @@
        copied_data = self.model_dump(exclude=exclude)
        copied_data = {k: v for k, v in copied_data.items() if v is not None}
        if self.short_term_memory:
            copied_data["short_term_memory"] = self.short_term_memory.model_copy(deep=True)
            copied_data["short_term_memory"] = self.short_term_memory.model_copy(
                deep=True
            )
        if self.long_term_memory:
            copied_data["long_term_memory"] = self.long_term_memory.model_copy(deep=True)
            copied_data["long_term_memory"] = self.long_term_memory.model_copy(
                deep=True
            )
        if self.entity_memory:
            copied_data["entity_memory"] = self.entity_memory.model_copy(deep=True)
        if self.external_memory:
@@ -1230,7 +1236,6 @@
        if self.user_memory:
            copied_data["user_memory"] = self.user_memory.model_copy(deep=True)


        copied_data.pop("agents", None)
        copied_data.pop("tasks", None)
@@ -1403,7 +1408,10 @@
            "short": (getattr(self, "_short_term_memory", None), "short term"),
            "entity": (getattr(self, "_entity_memory", None), "entity"),
            "knowledge": (getattr(self, "knowledge", None), "knowledge"),
            "kickoff_outputs": (getattr(self, "_task_output_handler", None), "task output"),
            "kickoff_outputs": (
                getattr(self, "_task_output_handler", None),
                "task output",
            ),
            "external": (getattr(self, "_external_memory", None), "external"),
        }
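The widened `query_knowledge` signature above suggests callers can now tune retrieval per query. A minimal sketch under that assumption (assuming `crew` was built with a knowledge source attached):

```python
hits = crew.query_knowledge(
    ["vector databases"],
    results_limit=5,       # return up to 5 chunks instead of the default 3
    score_threshold=0.5,   # drop weakly matching chunks
)
```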
@@ -1,11 +1,18 @@
import os
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List, Optional, cast

from pydantic import BaseModel, ConfigDict, Field

from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage

try:
    from crewai.knowledge.storage.elasticsearch_knowledge_storage import (
        ElasticsearchKnowledgeStorage,
    )
except ImportError:
    ElasticsearchKnowledgeStorage = None

os.environ["TOKENIZERS_PARALLELISM"] = "false"  # removes logging from fastembed
@@ -30,20 +37,34 @@ class Knowledge(BaseModel):
        sources: List[BaseKnowledgeSource],
        embedder: Optional[Dict[str, Any]] = None,
        storage: Optional[KnowledgeStorage] = None,
        storage_provider: str = "chromadb",
        **data,
    ):
        super().__init__(**data)
        if storage:
            self.storage = storage
        else:
            self.storage = KnowledgeStorage(
                embedder=embedder, collection_name=collection_name
            )
        if storage_provider == "elasticsearch":
            try:
                self.storage = cast(KnowledgeStorage, self._create_elasticsearch_storage(
                    embedder, collection_name
                ))
            except ImportError:
                raise ImportError(
                    "Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
                )
        else:
            self.storage = KnowledgeStorage(
                embedder=embedder, collection_name=collection_name
            )
        self.sources = sources
        self.storage.initialize_knowledge_storage()
        if self.storage is not None:
            self.storage.initialize_knowledge_storage()
        self._add_sources()

    def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]:
    def query(
        self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
    ) -> List[Dict[str, Any]]:
        """
        Query across all knowledge sources to find the most relevant information.
        Returns the top_k most relevant chunks.
@@ -56,7 +77,8 @@ class Knowledge(BaseModel):
        results = self.storage.search(
            query,
            limit,
            limit=results_limit,
            score_threshold=score_threshold,
        )
        return results
@@ -68,6 +90,16 @@ class Knowledge(BaseModel):
        except Exception as e:
            raise e

    def _create_elasticsearch_storage(self, embedder, collection_name):
        """Create an Elasticsearch storage instance."""
        if ElasticsearchKnowledgeStorage is None:
            raise ImportError(
                "Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
            )
        return ElasticsearchKnowledgeStorage(
            embedder_config=embedder, collection_name=collection_name
        )

    def reset(self) -> None:
        if self.storage:
            self.storage.reset()
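With the new `storage_provider` argument on `Knowledge`, selecting the Elasticsearch backend would presumably look like the sketch below; it assumes `pip install elasticsearch`, a reachable local cluster, and that `StringKnowledgeSource` is the source type in use (not shown in this diff):

```python
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource  # assumed source type

knowledge = Knowledge(
    collection_name="support_docs",
    sources=[StringKnowledgeSource(content="Refunds are processed within 5 business days.")],
    storage_provider="elasticsearch",  # raises ImportError if elasticsearch is not installed
)

results = knowledge.query(["refund policy"], results_limit=2, score_threshold=0.35)
```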
16
src/crewai/knowledge/knowledge_config.py
Normal file
@@ -0,0 +1,16 @@
from pydantic import BaseModel, Field


class KnowledgeConfig(BaseModel):
    """Configuration for knowledge retrieval.

    Args:
        results_limit (int): The number of relevant documents to return.
        score_threshold (float): The minimum score for a document to be considered relevant.
    """

    results_limit: int = Field(default=3, description="The number of results to return")
    score_threshold: float = Field(
        default=0.35,
        description="The minimum score for a result to be considered relevant",
    )
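Together with the new `knowledge_config` field added to `BaseAgent` above, the config can presumably be passed straight to an agent; a sketch under that assumption:

```python
from crewai import Agent
from crewai.knowledge.knowledge_config import KnowledgeConfig

agent = Agent(
    role="Docs Assistant",
    goal="Answer questions from the knowledge base",
    backstory="Grounded in internal documentation.",
    # Retrieve up to 5 chunks and ignore matches scoring below 0.5.
    knowledge_config=KnowledgeConfig(results_limit=5, score_threshold=0.5),
)
```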
268
src/crewai/knowledge/storage/elasticsearch_knowledge_storage.py
Normal file
@@ -0,0 +1,268 @@
import contextlib
import hashlib
import io
import logging
import os
from typing import Any, Dict, List, Optional, Union, cast

from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
from crewai.utilities import EmbeddingConfigurator
from crewai.utilities.logger import Logger
from crewai.utilities.paths import db_storage_path


@contextlib.contextmanager
def suppress_logging(logger_name="elasticsearch", level=logging.ERROR):
    logger = logging.getLogger(logger_name)
    original_level = logger.getEffectiveLevel()
    logger.setLevel(level)
    with (
        contextlib.redirect_stdout(io.StringIO()),
        contextlib.redirect_stderr(io.StringIO()),
        contextlib.suppress(UserWarning),
    ):
        yield
    logger.setLevel(original_level)


class ElasticsearchKnowledgeStorage(BaseKnowledgeStorage):
    """
    Extends BaseKnowledgeStorage to use Elasticsearch for storing embeddings
    and improving search efficiency.
    """

    app: Any = None
    collection_name: Optional[str] = "knowledge"

    def __init__(
        self,
        embedder_config: Optional[Dict[str, Any]] = None,
        collection_name: Optional[str] = None,
        host: str = "localhost",
        port: int = 9200,
        username: Optional[str] = None,
        password: Optional[str] = None,
        **kwargs: Any
    ):
        self.collection_name = collection_name
        self._set_embedder_config(embedder_config)

        self.host = host
        self.port = port
        self.username = username
        self.password = password
        self.index_name = f"crewai_knowledge_{collection_name if collection_name else 'default'}".lower()
        self.additional_config = kwargs

    def search(
        self,
        query: List[str],
        limit: int = 3,
        filter: Optional[dict] = None,
        score_threshold: float = 0.35,
    ) -> List[Dict[str, Any]]:
        if not self.app:
            self.initialize_knowledge_storage()

        try:
            embedding = self._get_embedding_for_text(query[0])

            search_query: Dict[str, Any] = {
                "size": limit,
                "query": {
                    "script_score": {
                        "query": {"match_all": {}},
                        "script": {
                            "source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
                            "params": {"query_vector": embedding}
                        }
                    }
                }
            }

            if filter:
                query_obj = search_query.get("query", {})
                if isinstance(query_obj, dict):
                    script_score_obj = query_obj.get("script_score", {})
                    if isinstance(script_score_obj, dict):
                        query_part = script_score_obj.get("query", {})
                        if isinstance(query_part, dict):
                            for key, value in filter.items():
                                new_query = {
                                    "bool": {
                                        "must": [
                                            query_part,
                                            {"match": {f"metadata.{key}": value}}
                                        ]
                                    }
                                }
                                if isinstance(script_score_obj, dict):
                                    script_score_obj["query"] = new_query

            with suppress_logging():
                if self.app is not None and hasattr(self.app, "search") and callable(getattr(self.app, "search")):
                    response = self.app.search(
                        index=self.index_name,
                        body=search_query
                    )

                    results = []
                    for hit in response["hits"]["hits"]:
                        adjusted_score = (hit["_score"] - 1.0)

                        if adjusted_score >= score_threshold:
                            results.append({
                                "id": hit["_id"],
                                "metadata": hit["_source"]["metadata"],
                                "context": hit["_source"]["text"],
                                "score": adjusted_score,
                            })

                    return results
                else:
                    Logger(verbose=True).log("error", "Elasticsearch client is not initialized", "red")
                    return []
        except Exception as e:
            Logger(verbose=True).log("error", f"Search error: {e}", "red")
            raise Exception(f"Error during knowledge search: {str(e)}")

    def initialize_knowledge_storage(self):
        try:
            from elasticsearch import Elasticsearch

            es_auth = {}
            if self.username and self.password:
                es_auth = {"basic_auth": (self.username, self.password)}

            self.app = Elasticsearch(
                [f"http://{self.host}:{self.port}"],
                **es_auth,
                **self.additional_config
            )

            if not self.app.indices.exists(index=self.index_name):
                self.app.indices.create(
                    index=self.index_name,
                    body={
                        "mappings": {
                            "properties": {
                                "text": {"type": "text"},
                                "embedding": {
                                    "type": "dense_vector",
                                    "dims": 1536,  # Default for OpenAI embeddings
                                    "index": True,
                                    "similarity": "cosine"
                                },
                                "metadata": {"type": "object"}
                            }
                        }
                    }
                )

        except ImportError:
            raise ImportError(
                "Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
            )
        except Exception as e:
            Logger(verbose=True).log(
                "error",
                f"Error initializing Elasticsearch: {str(e)}",
                "red"
            )
            raise Exception(f"Error initializing Elasticsearch: {str(e)}")

    def reset(self) -> None:
        try:
            if self.app is not None:
                if self.app.indices.exists(index=self.index_name):
                    self.app.indices.delete(index=self.index_name)

                self.initialize_knowledge_storage()
        except Exception as e:
            raise Exception(
                f"An error occurred while resetting the knowledge storage: {e}"
            )

    def save(
        self,
        documents: List[str],
        metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None,
    ) -> None:
        if not self.app:
            self.initialize_knowledge_storage()

        try:
            unique_docs = {}

            for idx, doc in enumerate(documents):
                doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
                doc_metadata = None
                if metadata is not None:
                    if isinstance(metadata, list):
                        doc_metadata = metadata[idx]
                    else:
                        doc_metadata = metadata
                unique_docs[doc_id] = (doc, doc_metadata)

            for doc_id, (doc, meta) in unique_docs.items():
                embedding = self._get_embedding_for_text(doc)

                doc_body = {
                    "text": doc,
                    "embedding": embedding,
                    "metadata": meta or {},
                }

                if self.app is not None and hasattr(self.app, "index") and callable(getattr(self.app, "index")):
                    index_func = getattr(self.app, "index")
                    index_func(
                        index=self.index_name,
                        id=doc_id,
                        document=doc_body,
                        refresh=True  # Make the document immediately available for search
                    )
                else:
                    Logger(verbose=True).log("error", "Elasticsearch client is not initialized", "red")

        except Exception as e:
            Logger(verbose=True).log("error", f"Save error: {e}", "red")
            raise Exception(f"Error during knowledge save: {str(e)}")

    def _get_embedding_for_text(self, text: str) -> List[float]:
        """Get embedding for text using the configured embedder."""
        if self.embedder_config is None:
            raise ValueError("Embedder configuration is not set")

        embedder = self.embedder_config
        if hasattr(embedder, "embed_documents") and callable(getattr(embedder, "embed_documents")):
            embed_func = getattr(embedder, "embed_documents")
            return embed_func([text])[0]
        elif hasattr(embedder, "embed") and callable(getattr(embedder, "embed")):
            embed_func = getattr(embedder, "embed")
            return embed_func(text)
        else:
            raise ValueError("Invalid embedding function configuration")

    def _create_default_embedding_function(self):
        from chromadb.utils.embedding_functions.openai_embedding_function import (
            OpenAIEmbeddingFunction,
        )

        return OpenAIEmbeddingFunction(
            api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
        )

    def _set_embedder_config(
        self, embedder: Optional[Dict[str, Any]] = None
    ) -> None:
        """Set the embedding configuration for the knowledge storage.

        Args:
            embedder (Optional[Dict[str, Any]]): Configuration dictionary for the embedder.
                If None or empty, defaults to the default embedding function.
        """
        self.embedder_config = (
            EmbeddingConfigurator().configure_embedder(embedder)
            if embedder
            else self._create_default_embedding_function()
        )
@@ -4,7 +4,7 @@ import io
import logging
import os
import shutil
from typing import Any, Dict, List, Optional, Union, cast
from typing import Any, Dict, List, Optional, Union

import chromadb
import chromadb.errors
@@ -22,7 +22,9 @@ class EntityMemory(Memory):
        else:
            memory_provider = None

        if memory_provider == "mem0":
        if storage:
            pass
        elif memory_provider == "mem0":
            try:
                from crewai.memory.storage.mem0_storage import Mem0Storage
            except ImportError:
@@ -30,17 +32,26 @@ class EntityMemory(Memory):
                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
                )
            storage = Mem0Storage(type="entities", crew=crew)
        else:
            storage = (
                storage
                if storage
                else RAGStorage(
        elif memory_provider == "elasticsearch":
            try:
                storage = self._create_elasticsearch_storage(
                    type="entities",
                    allow_reset=True,
                    embedder_config=embedder_config,
                    crew=crew,
                    path=path,
                )
            except ImportError:
                raise ImportError(
                    "Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
                )
        else:
            storage = RAGStorage(
                type="entities",
                allow_reset=True,
                embedder_config=embedder_config,
                crew=crew,
                path=path,
            )

        super().__init__(storage=storage)
@@ -59,6 +70,11 @@ class EntityMemory(Memory):
            data = f"{item.name}({item.type}): {item.description}"
        super().save(data, item.metadata)

    def _create_elasticsearch_storage(self, **kwargs):
        """Create an Elasticsearch storage instance."""
        from crewai.memory.storage.elasticsearch_storage import ElasticsearchStorage
        return ElasticsearchStorage(**kwargs)

    def reset(self) -> None:
        try:
            self.storage.reset()
@@ -24,7 +24,9 @@ class ShortTermMemory(Memory):
        else:
            memory_provider = None

        if memory_provider == "mem0":
        if storage:
            pass
        elif memory_provider == "mem0":
            try:
                from crewai.memory.storage.mem0_storage import Mem0Storage
            except ImportError:
@@ -32,16 +34,24 @@ class ShortTermMemory(Memory):
                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
                )
            storage = Mem0Storage(type="short_term", crew=crew)
        else:
            storage = (
                storage
                if storage
                else RAGStorage(
        elif memory_provider == "elasticsearch":
            try:
                storage = self._create_elasticsearch_storage(
                    type="short_term",
                    embedder_config=embedder_config,
                    crew=crew,
                    path=path,
                )
            except ImportError:
                raise ImportError(
                    "Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
                )
        else:
            storage = RAGStorage(
                type="short_term",
                embedder_config=embedder_config,
                crew=crew,
                path=path,
            )
        super().__init__(storage=storage)
        self._memory_provider = memory_provider
@@ -68,6 +78,11 @@ class ShortTermMemory(Memory):
            query=query, limit=limit, score_threshold=score_threshold
        )  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters

    def _create_elasticsearch_storage(self, **kwargs):
        """Create an Elasticsearch storage instance."""
        from crewai.memory.storage.elasticsearch_storage import ElasticsearchStorage
        return ElasticsearchStorage(**kwargs)

    def reset(self) -> None:
        try:
            self.storage.reset()
275
src/crewai/memory/storage/elasticsearch_storage.py
Normal file
275
src/crewai/memory/storage/elasticsearch_storage.py
Normal file
@@ -0,0 +1,275 @@
|
||||
import contextlib
|
||||
import io
|
||||
import logging
|
||||
import os
|
||||
import uuid
|
||||
from typing import Any, Dict, List, Optional, cast
|
||||
|
||||
from crewai.memory.storage.base_rag_storage import BaseRAGStorage
|
||||
from crewai.utilities import EmbeddingConfigurator
|
||||
from crewai.utilities.constants import MAX_FILE_NAME_LENGTH
|
||||
from crewai.utilities.logger import Logger
|
||||
from crewai.utilities.paths import db_storage_path
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def suppress_logging(logger_name="elasticsearch", level=logging.ERROR):
|
||||
logger = logging.getLogger(logger_name)
|
||||
original_level = logger.getEffectiveLevel()
|
||||
logger.setLevel(level)
|
||||
with (
|
||||
contextlib.redirect_stdout(io.StringIO()),
|
||||
contextlib.redirect_stderr(io.StringIO()),
|
||||
contextlib.suppress(UserWarning),
|
||||
):
|
||||
yield
|
||||
logger.setLevel(original_level)
|
||||
|
||||
|
||||
class ElasticsearchStorage(BaseRAGStorage):
|
||||
"""
|
||||
Extends BaseRAGStorage to use Elasticsearch for storing embeddings
|
||||
and improving search efficiency.
|
||||
"""
|
||||
|
||||
app: Any = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
type: str,
|
||||
allow_reset: bool = True,
|
||||
embedder_config: Any = None,
|
||||
crew: Any = None,
|
||||
path: Optional[str] = None,
|
||||
host: str = "localhost",
|
||||
port: int = 9200,
|
||||
username: Optional[str] = None,
|
||||
password: Optional[str] = None,
|
||||
**kwargs: Any
|
||||
):
|
||||
super().__init__(type, allow_reset, embedder_config, crew)
|
||||
agents = crew.agents if crew else []
|
||||
agents = [self._sanitize_role(agent.role) for agent in agents]
|
||||
agents = "_".join(agents)
|
||||
self.agents = agents
|
||||
self.storage_file_name = self._build_storage_file_name(type, agents)
|
||||
|
||||
self.type = type
|
||||
self.allow_reset = allow_reset
|
||||
self.path = path
|
||||
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.username = username
|
||||
self.password = password
|
||||
self.index_name = f"crewai_{type}".lower()
|
||||
self.additional_config = kwargs
|
||||
|
||||
self._initialize_app()
|
||||
|
||||
def _sanitize_role(self, role: str) -> str:
|
||||
"""
|
||||
Sanitizes agent roles to ensure valid directory and index names.
|
||||
"""
|
||||
return role.replace("\n", "").replace(" ", "_").replace("/", "_")
|
||||
|
||||
def _build_storage_file_name(self, type: str, file_name: str) -> str:
|
||||
"""
|
||||
Ensures file name does not exceed max allowed by OS
|
||||
"""
|
||||
base_path = f"{db_storage_path()}/{type}"
|
||||
|
||||
if len(file_name) > MAX_FILE_NAME_LENGTH:
|
||||
logging.warning(
|
||||
f"Trimming file name from {len(file_name)} to {MAX_FILE_NAME_LENGTH} characters."
|
||||
)
|
||||
file_name = file_name[:MAX_FILE_NAME_LENGTH]
|
||||
|
||||
return f"{base_path}/{file_name}"
|
||||
|
||||
def _set_embedder_config(self):
|
||||
configurator = EmbeddingConfigurator()
|
||||
self.embedder_config = configurator.configure_embedder(self.embedder_config)
|
||||
|
||||
def _initialize_app(self):
|
||||
try:
|
||||
from elasticsearch import Elasticsearch
|
||||
|
||||
self._set_embedder_config()
|
||||
|
||||
es_auth = {}
|
||||
if self.username and self.password:
|
||||
es_auth = {"basic_auth": (self.username, self.password)}
|
||||
|
||||
self.app = Elasticsearch(
|
||||
[f"http://{self.host}:{self.port}"],
|
||||
**es_auth,
|
||||
**self.additional_config
|
||||
)
|
||||
|
||||
if not self.app.indices.exists(index=self.index_name):
|
||||
self.app.indices.create(
|
||||
index=self.index_name,
|
||||
body={
|
||||
"mappings": {
|
||||
"properties": {
|
||||
"text": {"type": "text"},
|
||||
"embedding": {
|
||||
"type": "dense_vector",
|
||||
"dims": 1536, # Default for OpenAI embeddings
|
||||
"index": True,
|
||||
"similarity": "cosine"
|
||||
},
|
||||
"metadata": {"type": "object"}
|
||||
}
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
except ImportError:
|
||||
raise ImportError(
|
||||
"Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
|
||||
)
|
||||
except Exception as e:
|
||||
Logger(verbose=True).log(
|
||||
"error",
|
||||
f"Error initializing Elasticsearch: {str(e)}",
|
||||
"red"
|
||||
)
|
||||
raise Exception(f"Error initializing Elasticsearch: {str(e)}")
|
||||
|
||||
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
|
||||
if not hasattr(self, "app"):
|
||||
self._initialize_app()
|
||||
|
||||
try:
|
||||
self._generate_embedding(value, metadata)
|
||||
except Exception as e:
|
||||
logging.error(f"Error during {self.type} save: {str(e)}")
|
||||
|
||||
def search(
|
||||
self,
|
||||
query: str,
|
||||
limit: int = 3,
|
||||
filter: Optional[dict] = None,
|
||||
score_threshold: float = 0.35,
|
||||
) -> List[Any]:
|
||||
if not hasattr(self, "app") or self.app is None:
|
||||
self._initialize_app()
|
||||
|
||||
try:
|
||||
embedding = self._get_embedding_for_text(query)
|
||||
|
||||
search_query: Dict[str, Any] = {
|
||||
"size": limit,
|
||||
"query": {
|
||||
"script_score": {
|
||||
"query": {"match_all": {}},
|
||||
"script": {
|
||||
"source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
|
||||
"params": {"query_vector": embedding}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if filter:
|
||||
query_obj = search_query.get("query", {})
|
||||
if isinstance(query_obj, dict):
|
||||
script_score_obj = query_obj.get("script_score", {})
|
||||
if isinstance(script_score_obj, dict):
|
||||
query_part = script_score_obj.get("query", {})
|
||||
if isinstance(query_part, dict):
|
||||
for key, value in filter.items():
|
||||
new_query = {
|
||||
"bool": {
|
||||
"must": [
|
||||
query_part,
|
||||
{"match": {f"metadata.{key}": value}}
|
||||
]
|
||||
}
|
||||
}
|
||||
if isinstance(script_score_obj, dict):
|
||||
script_score_obj["query"] = new_query
|
||||
|
||||
with suppress_logging():
|
||||
if self.app is not None and hasattr(self.app, "search") and callable(getattr(self.app, "search")):
|
||||
search_func = getattr(self.app, "search")
|
||||
response = search_func(
|
||||
index=self.index_name,
|
||||
body=search_query
|
||||
)
|
||||
|
||||
results = []
|
||||
for hit in response["hits"]["hits"]:
|
||||
adjusted_score = (hit["_score"] - 1.0)
|
||||
|
||||
if adjusted_score >= score_threshold:
|
||||
results.append({
|
||||
"id": hit["_id"],
|
||||
"metadata": hit["_source"]["metadata"],
|
||||
"context": hit["_source"]["text"],
|
||||
"score": adjusted_score,
|
||||
})
|
||||
|
||||
return results
|
||||
else:
|
||||
logging.error("Elasticsearch client is not initialized")
|
||||
return []
|
||||
except Exception as e:
|
||||
logging.error(f"Error during {self.type} search: {str(e)}")
|
||||
return []
|
||||
|
||||
def _get_embedding_for_text(self, text: str) -> List[float]:
|
||||
"""Get embedding for text using the configured embedder."""
|
||||
if self.embedder_config is None:
|
||||
raise ValueError("Embedder configuration is not set")
|
||||
|
||||
embedder = self.embedder_config
|
||||
if hasattr(embedder, "embed_documents") and callable(getattr(embedder, "embed_documents")):
|
||||
embed_func = getattr(embedder, "embed_documents")
|
||||
return embed_func([text])[0]
|
||||
elif hasattr(embedder, "embed") and callable(getattr(embedder, "embed")):
|
||||
embed_func = getattr(embedder, "embed")
|
||||
return embed_func(text)
|
||||
else:
|
||||
raise ValueError("Invalid embedding function configuration")
|
||||
|
||||
def _generate_embedding(self, text: str, metadata: Optional[Dict[str, Any]] = None) -> Any:
|
||||
"""Generate embedding for text and save to Elasticsearch.
|
||||
|
||||
This method overrides the BaseRAGStorage method to use Elasticsearch.
|
||||
"""
|
||||
if not hasattr(self, "app") or self.app is None:
|
||||
self._initialize_app()
|
||||
|
||||
embedding = self._get_embedding_for_text(text)
|
||||
|
||||
doc = {
|
||||
"text": text,
|
||||
"embedding": embedding,
|
||||
"metadata": metadata or {},
|
||||
}
|
||||
|
||||
if self.app is not None and hasattr(self.app, "index") and callable(getattr(self.app, "index")):
|
||||
index_func = getattr(self.app, "index")
|
||||
result = index_func(
|
||||
index=self.index_name,
|
||||
id=str(uuid.uuid4()),
|
||||
document=doc,
|
||||
refresh=True # Make the document immediately available for search
|
||||
)
|
||||
return result
|
||||
return None
|
||||
|
||||
def reset(self) -> None:
|
||||
try:
|
||||
if self.app is not None:
|
||||
if self.app.indices.exists(index=self.index_name):
|
||||
self.app.indices.delete(index=self.index_name)
|
||||
|
||||
self._initialize_app()
|
||||
except Exception as e:
|
||||
raise Exception(
|
||||
f"An error occurred while resetting the {self.type} memory: {e}"
|
||||
)
|
||||
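For reference, the `script_score` query the storage class builds above can be reproduced directly against an Elasticsearch index. The following is a minimal sketch only: the local endpoint, the `crewai_short_term` index name, and the zero-filled 1536-dimensional query vector are assumptions for illustration, not names from the crewAI codebase.

```python
# Minimal sketch, assuming a local Elasticsearch instance and an index that
# already uses the dense_vector mapping shown above. Not the crewAI implementation.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local endpoint
index_name = "crewai_short_term"             # hypothetical index name

query_vector = [0.0] * 1536  # replace with a real 1536-dim embedding

search_query = {
    "size": 3,
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                # cosineSimilarity returns values in [-1, 1]; the +1.0 keeps
                # the script score non-negative, as Elasticsearch requires.
                "source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
                "params": {"query_vector": query_vector},
            },
        }
    },
}

response = es.search(index=index_name, body=search_query)
for hit in response["hits"]["hits"]:
    # Undo the +1.0 offset, mirroring what the storage class does before
    # comparing against score_threshold.
    print(hit["_id"], hit["_score"] - 1.0, hit["_source"].get("text"))
```

The `+ 1.0` offset exists because Elasticsearch rejects negative script scores, which is why the storage class subtracts 1.0 from `hit["_score"]` before applying `score_threshold`.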
src/crewai/memory/storage/storage_factory.py (new file, 81 lines)
@@ -0,0 +1,81 @@
from typing import Any, Dict, Optional, Type, cast

from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.utilities.logger import Logger

try:
    from crewai.memory.storage.elasticsearch_storage import ElasticsearchStorage
except ImportError:
    ElasticsearchStorage = None

try:
    from crewai.memory.storage.mem0_storage import Mem0Storage
except ImportError:
    Mem0Storage = None


class StorageFactory:
    """Factory for creating storage instances based on provider type."""

    @classmethod
    def create_storage(
        cls,
        provider: str,
        type: str,
        allow_reset: bool = True,
        embedder_config: Optional[Any] = None,
        crew: Any = None,
        path: Optional[str] = None,
        **kwargs,
    ) -> BaseRAGStorage:
        """Create a storage instance based on the provider type.

        Args:
            provider: Type of storage provider ("chromadb", "elasticsearch", "mem0").
            type: Type of memory storage (e.g., "short_term", "entity").
            allow_reset: Whether to allow resetting the storage.
            embedder_config: Configuration for the embedder.
            crew: Crew instance.
            path: Path to the storage.
            **kwargs: Additional arguments.

        Returns:
            Storage instance.
        """
        if provider == "elasticsearch":
            if ElasticsearchStorage is None:
                Logger(verbose=True).log(
                    "error",
                    "Elasticsearch is not installed. Please install it with `pip install elasticsearch`.",
                    "red",
                )
                raise ImportError(
                    "Elasticsearch is not installed. Please install it with `pip install elasticsearch`."
                )
            return ElasticsearchStorage(
                type=type,
                allow_reset=allow_reset,
                embedder_config=embedder_config,
                crew=crew,
                path=path,
                **kwargs,
            )
        elif provider == "mem0":
            if Mem0Storage is None:
                Logger(verbose=True).log(
                    "error",
                    "Mem0 is not installed. Please install it with `pip install mem0ai`.",
                    "red",
                )
                raise ImportError(
                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
                )
            return cast(BaseRAGStorage, Mem0Storage(type=type, crew=crew))
        return RAGStorage(
            type=type,
            allow_reset=allow_reset,
            embedder_config=embedder_config,
            crew=crew,
            path=path,
        )
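A short usage sketch of the factory above. The argument values are illustrative; any provider other than "elasticsearch" or "mem0" falls through to the default `RAGStorage`.

```python
# Sketch only: assumes the module path introduced in this diff; replace the
# placeholder embedder/crew values with objects from your own project.
from crewai.memory.storage.storage_factory import StorageFactory

# Elasticsearch-backed short-term memory (raises ImportError if the
# `elasticsearch` package is not installed).
storage = StorageFactory.create_storage(
    provider="elasticsearch",
    type="short_term",
    embedder_config=None,  # replace with your embedder configuration
    crew=None,             # or a Crew instance
    path=None,
)

# Any other provider value falls back to the default RAGStorage.
default_storage = StorageFactory.create_storage(provider="chromadb", type="entity")
```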
@@ -216,7 +216,7 @@ def convert_with_instructions(
 
 def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
     instructions = "Please convert the following text into valid JSON."
-    if llm.supports_function_calling():
+    if llm and not isinstance(llm, str) and llm.supports_function_calling():
         model_schema = PydanticSchemaParser(model=model).get_schema()
         instructions += (
             f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"
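The single-line change above makes the function-calling check tolerant of `llm` being `None` or a bare model-name string, neither of which has a `supports_function_calling` method. A small illustrative sketch (the `FakeLLM` class is hypothetical):

```python
# Illustrative only: a bare string stands in for an LLM configured by name.
class FakeLLM:
    def supports_function_calling(self) -> bool:
        return True

for llm in [FakeLLM(), "gpt-4o-mini", None]:
    # The old check `llm.supports_function_calling()` raises AttributeError
    # for the string case and for None. The new check short-circuits safely:
    if llm and not isinstance(llm, str) and llm.supports_function_calling():
        print(type(llm).__name__, "-> include model schema in instructions")
    else:
        print(repr(llm), "-> plain JSON conversion instructions")
```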
@@ -10,6 +10,8 @@ from crewai import Agent, Crew, Task
 from crewai.agents.cache import CacheHandler
 from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor
 from crewai.agents.parser import CrewAgentParser, OutputParserException
+from crewai.knowledge.knowledge import Knowledge
+from crewai.knowledge.knowledge_config import KnowledgeConfig
 from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
 from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
 from crewai.llm import LLM
@@ -259,7 +261,9 @@ def test_cache_hitting():
     def handle_tool_end(source, event):
         received_events.append(event)
 
-    with (patch.object(CacheHandler, "read") as read,):
+    with (
+        patch.object(CacheHandler, "read") as read,
+    ):
         read.return_value = "0"
         task = Task(
             description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
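The test change above only reflows a parenthesized group of context managers across multiple lines; the behavior is identical. A minimal illustration, assuming Python 3.10+ (where parenthesized `with` items are valid syntax):

```python
from unittest.mock import patch

# Single-line and multi-line parenthesized forms are equivalent:
with (patch("os.getcwd") as cwd,):
    cwd.return_value = "/tmp"

with (
    patch("os.getcwd") as cwd,
):
    cwd.return_value = "/tmp"
```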
@@ -1611,6 +1615,78 @@ def test_agent_with_knowledge_sources():
    assert "red" in result.raw.lower()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold():
    content = "Brandon's favorite color is red and he likes Mexican food."
    string_source = StringKnowledgeSource(content=content)
    knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)
    with patch(
        "crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
    ) as MockKnowledge:
        mock_knowledge_instance = MockKnowledge.return_value
        mock_knowledge_instance.sources = [string_source]
        mock_knowledge_instance.query.return_value = [{"content": content}]
        with patch.object(Knowledge, "query") as mock_knowledge_query:
            agent = Agent(
                role="Information Agent",
                goal="Provide information based on knowledge sources",
                backstory="You have access to specific knowledge sources.",
                llm=LLM(model="gpt-4o-mini"),
                knowledge_sources=[string_source],
                knowledge_config=knowledge_config,
            )
            task = Task(
                description="What is Brandon's favorite color?",
                expected_output="Brandon's favorite color.",
                agent=agent,
            )
            crew = Crew(agents=[agent], tasks=[task])
            crew.kickoff()

            assert agent.knowledge is not None
            mock_knowledge_query.assert_called_once_with(
                [task.prompt()],
                **knowledge_config.model_dump(),
            )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default():
    content = "Brandon's favorite color is red and he likes Mexican food."
    string_source = StringKnowledgeSource(content=content)
    knowledge_config = KnowledgeConfig()
    with patch(
        "crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
    ) as MockKnowledge:
        mock_knowledge_instance = MockKnowledge.return_value
        mock_knowledge_instance.sources = [string_source]
        mock_knowledge_instance.query.return_value = [{"content": content}]
        with patch.object(Knowledge, "query") as mock_knowledge_query:
            string_source = StringKnowledgeSource(content=content)
            knowledge_config = KnowledgeConfig()
            agent = Agent(
                role="Information Agent",
                goal="Provide information based on knowledge sources",
                backstory="You have access to specific knowledge sources.",
                llm=LLM(model="gpt-4o-mini"),
                knowledge_sources=[string_source],
                knowledge_config=knowledge_config,
            )
            task = Task(
                description="What is Brandon's favorite color?",
                expected_output="Brandon's favorite color.",
                agent=agent,
            )
            crew = Crew(agents=[agent], tasks=[task])
            crew.kickoff()

            assert agent.knowledge is not None
            mock_knowledge_query.assert_called_once_with(
                [task.prompt()],
                **knowledge_config.model_dump(),
            )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_extensive_role():
    content = "Brandon's favorite color is red and he likes Mexican food."

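The two new tests above drive `KnowledgeConfig` with explicit and default retrieval settings. Outside of a test, the same configuration looks roughly like the sketch below; it reuses the model and strings from the tests and is not additional project code.

```python
# Sketch based on the test setup above.
from crewai import Agent, Crew, LLM, Task
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(
    content="Brandon's favorite color is red and he likes Mexican food."
)

agent = Agent(
    role="Information Agent",
    goal="Provide information based on knowledge sources",
    backstory="You have access to specific knowledge sources.",
    llm=LLM(model="gpt-4o-mini"),
    knowledge_sources=[source],
    # results_limit / score_threshold control how many knowledge chunks are
    # retrieved and how similar they must be before being added to the prompt.
    knowledge_config=KnowledgeConfig(results_limit=10, score_threshold=0.5),
)

task = Task(
    description="What is Brandon's favorite color?",
    expected_output="Brandon's favorite color.",
    agent=agent,
)

print(Crew(agents=[agent], tasks=[task]).kickoff())
```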
tests/agents/agent_adapters/test_base_agent_adapter.py (new file, 113 lines)
@@ -0,0 +1,113 @@
from typing import Any, Dict, List, Optional

import pytest
from pydantic import BaseModel

from crewai.agent import BaseAgent
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.tools import BaseTool
from crewai.utilities.token_counter_callback import TokenProcess


# Concrete implementation for testing
class ConcreteAgentAdapter(BaseAgentAdapter):
    def configure_tools(
        self, tools: Optional[List[BaseTool]] = None, **kwargs: Any
    ) -> None:
        # Simple implementation for testing
        self.tools = tools or []

    def execute_task(
        self,
        task: Any,
        context: Optional[str] = None,
        tools: Optional[List[Any]] = None,
    ) -> str:
        # Dummy implementation needed due to BaseAgent inheritance
        return "Task executed"

    def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> Any:
        # Dummy implementation
        return None

    def get_delegation_tools(
        self, tools: List[BaseTool], tool_map: Optional[Dict[str, BaseTool]]
    ) -> List[BaseTool]:
        # Dummy implementation
        return []

    def _parse_output(self, agent_output: Any, token_process: TokenProcess):
        # Dummy implementation
        pass

    def get_output_converter(self, tools: Optional[List[BaseTool]] = None) -> Any:
        # Dummy implementation
        return None


def test_base_agent_adapter_initialization():
    """Test initialization of the concrete agent adapter."""
    adapter = ConcreteAgentAdapter(
        role="test role", goal="test goal", backstory="test backstory"
    )
    assert isinstance(adapter, BaseAgent)
    assert isinstance(adapter, BaseAgentAdapter)
    assert adapter.role == "test role"
    assert adapter._agent_config is None
    assert adapter.adapted_structured_output is False


def test_base_agent_adapter_initialization_with_config():
    """Test initialization with agent_config."""
    config = {"model": "gpt-4"}
    adapter = ConcreteAgentAdapter(
        agent_config=config,
        role="test role",
        goal="test goal",
        backstory="test backstory",
    )
    assert adapter._agent_config == config


def test_configure_tools_method_exists():
    """Test that configure_tools method exists and can be called."""
    adapter = ConcreteAgentAdapter(
        role="test role", goal="test goal", backstory="test backstory"
    )
    # Create dummy tools if needed, or pass None
    tools = []
    adapter.configure_tools(tools)
    assert hasattr(adapter, "tools")
    assert adapter.tools == tools


def test_configure_structured_output_method_exists():
    """Test that configure_structured_output method exists and can be called."""
    adapter = ConcreteAgentAdapter(
        role="test role", goal="test goal", backstory="test backstory"
    )

    # Define a dummy structure or pass None/Any
    class DummyOutput(BaseModel):
        data: str

    structured_output = DummyOutput
    adapter.configure_structured_output(structured_output)
    # Add assertions here if configure_structured_output modifies state
    # For now, just ensuring it runs without error is sufficient
    pass


def test_base_agent_adapter_inherits_base_agent():
    """Test that BaseAgentAdapter inherits from BaseAgent."""
    assert issubclass(BaseAgentAdapter, BaseAgent)


class ConcreteAgentAdapterWithoutRequiredMethods(BaseAgentAdapter):
    pass


def test_base_agent_adapter_fails_without_required_methods():
    """Test that BaseAgentAdapter fails without required methods."""
    with pytest.raises(TypeError):
        ConcreteAgentAdapterWithoutRequiredMethods()  # type: ignore
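The last test relies on `BaseAgentAdapter` declaring abstract methods, so a subclass that leaves them unimplemented cannot be instantiated. A generic illustration of that mechanism, independent of crewAI:

```python
from abc import ABC, abstractmethod


class Adapter(ABC):
    @abstractmethod
    def configure_tools(self, tools):
        ...


class Incomplete(Adapter):
    pass  # configure_tools is never implemented


try:
    Incomplete()
except TypeError as exc:
    # e.g. "Can't instantiate abstract class Incomplete with abstract method configure_tools"
    print(exc)
```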
tests/agents/agent_adapters/test_base_tool_adapter.py (new file, 94 lines)
@@ -0,0 +1,94 @@
from typing import Any, List
from unittest.mock import Mock

import pytest

from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools.base_tool import BaseTool


class ConcreteToolAdapter(BaseToolAdapter):
    def configure_tools(self, tools: List[BaseTool]) -> None:
        self.converted_tools = [f"converted_{tool.name}" for tool in tools]


@pytest.fixture
def mock_tool_1():
    tool = Mock(spec=BaseTool)
    tool.name = "Mock Tool 1"
    return tool


@pytest.fixture
def mock_tool_2():
    tool = Mock(spec=BaseTool)
    tool.name = "MockTool2"
    return tool


@pytest.fixture
def tools_list(mock_tool_1, mock_tool_2):
    return [mock_tool_1, mock_tool_2]


def test_initialization_with_tools(tools_list):
    adapter = ConcreteToolAdapter(tools=tools_list)
    assert adapter.original_tools == tools_list
    assert adapter.converted_tools == []  # Conversion happens in configure_tools


def test_initialization_without_tools():
    adapter = ConcreteToolAdapter()
    assert adapter.original_tools == []
    assert adapter.converted_tools == []


def test_configure_tools(tools_list):
    adapter = ConcreteToolAdapter()
    adapter.configure_tools(tools_list)
    assert adapter.converted_tools == ["converted_Mock Tool 1", "converted_MockTool2"]
    assert adapter.original_tools == []  # original_tools is only set in init

    adapter_with_init_tools = ConcreteToolAdapter(tools=tools_list)
    adapter_with_init_tools.configure_tools(tools_list)
    assert adapter_with_init_tools.converted_tools == [
        "converted_Mock Tool 1",
        "converted_MockTool2",
    ]
    assert adapter_with_init_tools.original_tools == tools_list


def test_tools_method(tools_list):
    adapter = ConcreteToolAdapter()
    adapter.configure_tools(tools_list)
    assert adapter.tools() == ["converted_Mock Tool 1", "converted_MockTool2"]


def test_tools_method_empty():
    adapter = ConcreteToolAdapter()
    assert adapter.tools() == []


def test_sanitize_tool_name_with_spaces():
    adapter = ConcreteToolAdapter()
    assert adapter.sanitize_tool_name("Tool With Spaces") == "Tool_With_Spaces"


def test_sanitize_tool_name_without_spaces():
    adapter = ConcreteToolAdapter()
    assert adapter.sanitize_tool_name("ToolWithoutSpaces") == "ToolWithoutSpaces"


def test_sanitize_tool_name_empty():
    adapter = ConcreteToolAdapter()
    assert adapter.sanitize_tool_name("") == ""


class ConcreteToolAdapterWithoutRequiredMethods(BaseToolAdapter):
    pass


def test_tool_adapted_fails_without_required_methods():
    """Test that BaseToolAdapter fails without required methods."""
    with pytest.raises(TypeError):
        ConcreteToolAdapterWithoutRequiredMethods()  # type: ignore
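A small sketch of the same adapter pattern with a concrete tool instead of mocks. `EchoTool` and `NameOnlyAdapter` are illustrative names only; the minimal `BaseTool` subclass follows crewAI's usual custom-tool convention (typed `name`/`description` fields plus a `_run` method).

```python
# Illustrative sketch: a trivial crewAI tool passed through the adapter
# pattern the tests above exercise with mocks.
from typing import List

from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool


class EchoTool(BaseTool):
    name: str = "Echo Tool"
    description: str = "Returns its input unchanged."

    def _run(self, text: str) -> str:
        return text


class NameOnlyAdapter(BaseToolAdapter):
    def configure_tools(self, tools: List[BaseTool]) -> None:
        # Convert each tool into a framework-friendly identifier.
        self.converted_tools = [self.sanitize_tool_name(t.name) for t in tools]


adapter = NameOnlyAdapter()
adapter.configure_tools([EchoTool()])
print(adapter.tools())  # ['Echo_Tool']
```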
@@ -18,9 +18,6 @@ class MockAgent(BaseAgent):
 
     def create_agent_executor(self, tools=None) -> None: ...
 
-    def _parse_tools(self, tools: List[BaseTool]) -> List[BaseTool]:
-        return []
-
     def get_delegation_tools(self, agents: List["BaseAgent"]): ...
 
     def get_output_converter(
@@ -0,0 +1,330 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"input": ["Brandon''s favorite color is red and he likes Mexican food."],
|
||||
"model": "text-embedding-3-small", "encoding_format": "base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '137'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1SaWw+yPtfmz59Pced/yrwR2bV9zhAQ2UkRFHEymYAiOxHZtEDfvN99ovdkNicm
|
||||
YiNpu1bXdf1W//Nff/7802V1fp/++feff17VOP3z377PHumU/vPvP//9X3/+/Pnzn7/P/29k3mb5
|
||||
41G9i9/w34/V+5Ev//z7D/9/nvzfQf/+889JZZQeb+UOCJHjaQqtRQfv1mXUaf0OTfTuHjx+CvAU
|
||||
CWC/KEik3pNeZScAwr5Zzkgne4Gqd6jX2+oDW3BxGx8nMkfrxaq0GM2PNaV5G6QZM0P1joRJl32W
|
||||
BVuXtTPPo02jZhRX8gjWdj7MgDz2D+wRexhoUHsaTN9P0RfLw5itFrmbCgzCHVFOdx/wQte1qJvK
|
||||
FxH556YeT0pqoJ0RTNhqPwiskhTe0T7qIpzrwS4arGS24D4uc6y90d4VpMq+wy7hntS8mG297p2B
|
||||
QNrwZ5pV1p4RZ6vPEHEPDtuWVA0L/w451CylgPUZBhFvXaQWWqXwwKobdNmavrcCyvDk4aRDezby
|
||||
2c5Hym33obtXgLKlioYGKUfZovrOBtkSu6cOQbHw6POcJoyE+vmMxtegUst+HNh2ZImhpFTNsJ4V
|
||||
73p1XlWO8NsSscGvH7asThqCqtoe6anKt+6SsI0K91Ef0UtGS5dIB8DD3QV2vtwaybB8lksCj5K7
|
||||
pTjWccSrU5igG+hfVOuSRl+4sPGhtJ4Qtfr4w4g0EQcaRfCgnrKxXVFD8hnSZ6jTk5PN9ax4JxU5
|
||||
T/5MsnTwskU6ARNOHnCJYHpsoPbgFfDzsRYaLe9dPbfLoiDTls800h+6vjruXYDSGiGs3fJFXw5Z
|
||||
2APLfh+ok3y2YDvrfQvLm3/x5fPervlV0QUUF5c33aFAzQRua0JwHeCKb5/FGITQVzsgc1KIg6vA
|
||||
Mmo+Xg5CiY/80LSf2UqSFwdOKDHoMVyBvk6jq0DzTRTs1uUHLFui9ciAyYGq8fHOhDqzzoCpgkE9
|
||||
Ta/q9aW/VnRO/IR6zaVjs8DvQ9Q/byvOji+YzUV7ztGHji4+b4t7LZqHuwO9+NRR9aZ0YFFWKQF8
|
||||
oXu+5D2rgfee2RlmG3/CO+mFXbq0SgJ35cGm+JxKjEnnY6vsjlOI9+aSsPUqdSNMj3FBD3qdMHHA
|
||||
UgCXpjj46xaeI6GBnAZtrfZJOLMKLKg6VrDCz5jaF6K6/LjWGpwt36FHS430JTwlHtrvYIL3q03d
|
||||
JUkWDWVaEtKLvglq9sj4HiRDHtFo3yjuyGe2D/nPY8ER4xwmtq/AQtvoptF7rdiAX9jdgRtn7+Bn
|
||||
tJEZJc2GgD4yNZz3YBjWu3ziAaFrhXenYwImk8wNpOrj/M13t+Y7wz/D3/57Lz0b2K1kHixsVceP
|
||||
ztUy8VYyH90e/AbvXU+sGZ8aDnKhc6LWu9nVPJp3CYqztfBbYCn6DDf9CG16+VBf9VswK6seI7h7
|
||||
nqirt5POrDOr4LC/ExoMssgmx6EzlCfv+s2fnS5e3vIM9VeX0YDfaUxs04EHhXb/4KPzJoC1ac2j
|
||||
jeltac5xgc5gd++B3PYI22Bmw1g9KgFZx+cdGynQMv7ilz7yDuFKkP351LMfTiYsJQCpLhIpWguN
|
||||
76FsiVd6cLRJX4LU5qEFOUQdt3OBGCfJHX7PD/x7H+9L1xThHJQYF+HiUrWYe6DvogzvTlKvM6OP
|
||||
EijuT5i01wcF7Pg6QvgK4pZ6vlzo82dzCmBAD4zuU7MfFvypHbRLREZku91ma5+PBfjGjz8e7mbN
|
||||
sFByKDkcInrs5Iv+oVDhoTppJTYI17LVHD4qvBBo4NB9lmyRDuYMt0bqkQWbJZteYsaBUpZNvFf7
|
||||
Su+cVQ5gr6YI+7u3A0Sev83KJowLXyzuwGV7N5jR68Zc6iWPoB71O63gwiqEtU9sR3wq2yG0nPZE
|
||||
ja7fuKuzmQ10PnUctriPysROHCtogQ3D+JPs3O1a1AGULo+SauHJ0XlpX8ZI350yeqCtmjFn4T0w
|
||||
SHyDz/xqg+2xnU3UZWJP7VURXHaDlgBHg8P00NhvfTy2rgqPRM1pvKIjmPwodNDU8neq2ydTZ2O7
|
||||
C1F5qFKKX8PFHRZvXGFV81eaBMou4wcOjzLwbk/qpxe1/uyMoEAmCc4Yl4GQrZzstsgq+Qe9D4cZ
|
||||
LFkbJCg6NjG9Wy8pWzj/3cBY7VSaymuf0eK2jnBXmRE+bZkJlqdp+NAf6gFbh9saPQ/d7Q6siTtg
|
||||
HG1ubDtLnA8/1fuK/TpfXVbO7gijGR7pQ3e2EVPA2qJ170v+FosWWEPf6iBRgI6dku7Z0ianM2Jl
|
||||
scWq4R/Z9qkd2l98YU3isD7z7UuBtw4V9Hbef+pZu2wEEE/3DCdFdXDF6j7NMOmftg99uXDFME1T
|
||||
MCX6SA/OvAGTcm4tlJ1uFoHzsdQZvxHu8BQ+eurtkiXrcuobMLzW2Tf+TCZka+8or+FpYqypDhBU
|
||||
5eahY3a54ftLBzULDTuF0f3t4X2N7+4QGrsUifHG8We0fdXzbCoOQHjzJlwbKFk/umYDMQsqmunt
|
||||
0RUDj2ko+/QP6ofSW19xoDVQUHHuw289YJf3MiM+vwQ+f5Xf9Xw3OwIT3eNwcu59sDruWUBjOQ/f
|
||||
/FPd+V0FDZKlWPrG71Nf1UJvYOyh2R+rUxQxL88D0KnUoeq8sfTV3+cBME6Pid5uPnDXZBOk6Fyf
|
||||
ZRxe02wQd6LdweOknzHu4zYj42WV0P5QddSbjzv3qydMEMutia3sHgOmk0CDnZ5gjO1HEwm9kUsg
|
||||
7R47cu6j7bBWwVlDfsy9sNHXls5jXV+R6sYt3hdSyJZQdAo4dvGeWtHJGeYUpzEUkpzhg3UT6l5c
|
||||
+hVAw4rxJeUfmejpQYAe7+6CdUM4ZGPt2wW4RvRBTtK2rCdD9yE8lUGK3fdJjNa++ljoKDsbsnkN
|
||||
F53pZyeF72U806h3OnfVpwqiOx8+fbEH2F1lv65QNhUrPp6FYGChsUuQ44QG+XyWpu72qOHhmi4W
|
||||
tqSLrI8lfOXwu/4EWcF+WPwK3OHNF2y8++UP90gq6LuCgdXyqkXisl5N+DS3D38cDjOb2LrpQYap
|
||||
R+1XvGckOz8lKB7gGVtNyA9MLeYO1uSe/I1/sm/kM4Q0FPAh4LfR3JwXH+7ulYbNfa6yZdqPBZyK
|
||||
k0b4Q5zrM4hUAscBB/7mUXRgtsU2hJvLcaDhV691b0uDCJ2G2lcS2YqEX/1Q93NMFi0wIuZTJYQf
|
||||
73yhd1Tm7Lee8Fffn2G0YyR5OTMEiiz81cNzNXkWKOjJwLio9IHXmm6Gh06JCVceLPbyo9BCttTY
|
||||
9EGHhz562zGGhnn1iOQLA1jj65BCw5NaelNnkTGzLGcknMDNp+c0ASztKgWN+mZPeLtvXbYTdx0q
|
||||
J5Zh4/OqgNhc5xWl92tMHca9I4L7xEPcqzDopdvfwGLnxxGQ15nhOMWSPj60xoROc8uwNRhwWC9Z
|
||||
pwFXwzufhMctYKdJilFrhyN1JbQOS+ExCV5KMaA4JUUtkkYkcHekId1J23KYPlc8Q6F+nbHpiAYT
|
||||
/H0cgqbY7nF2/SzDeidTAcXus6exvKsznvVVBWzcaVitLu+azP2OQ3nQ3vFR35rDSuswAN/zi6bc
|
||||
0XZnTnvwEOXrlSyXThyW4v7wIS5eDwJvAj98/VGOXpVWUvVkDBnLrZhAr4MTTQ63NXv1JTThZz8D
|
||||
ejZOlc7sgFNgs9SCr4xwBMOK3AQw8bDHTnhQ3fmcOxVQRVvyW+cdZrNw23RQTsuQXvdvEnXCTezg
|
||||
V/9jXyyFYU7eUwC+649x0A/Z2F6bEH52UovvrviKmGQ0AZQjJPn0W8+WROs0eNPGJ/7uV0ZO4baH
|
||||
7MnLOEorLZtruvGgbGw0uie3sl5W89Er6fFc0KvCDpHYEHuGj6zoaWC/FzDSOMqha60ZvcX3ldEw
|
||||
TRN48Z83ImSSP7ztfeijDzftyLwXm2hpa0GFtVFb1Lrtp3q2XC+AN5+38a1Gu2HN1spCfpC5fu1F
|
||||
Klubq05A9zIdvFPfE2P35xIgpQptvAOew8b+Up1RoEUJWVq31L/+xwKdOjk+SIcxG7/+BT6FmMM7
|
||||
3f/UyzbaqlAe8RG7toOG1RtTC87BpyXKc5gzej51K5DTOiTrz3+MrtlC7Xzf0MygOViFneEgTWMG
|
||||
EU/PdqDQxyr4xruvnP2rK1YCSGDPVkbk1dOjNS83IXynuUV2TjYPrBJYgooP4b/1XAbjwB0IspIX
|
||||
wvjrJxfjcW+g/RjI3/zeEsG1ABUqTNAeopppx5CgQx2bRA53Z33xo9SCX/9CQ6fh9NkQtg5U+qnx
|
||||
pcVadfbYvVV4lOwtdV3dHOayuvLwr5442DAah3uoAilsYuqm2Y0xv0g4aDTWilWeXob+5yerbRHS
|
||||
3OMXtvRjl8DoNu588P0/oVlOZyTprMWHT5jUc1xsW8QB80z3NYb6Kk2tBQzz4vmgrZuaeGNooe/+
|
||||
4Hs7hvo69zsICkuT8eGCz7qwSS0Ic1F442P08CIWJ0n+mx/OT3iumXB8KNAk4Zna25HLuqYJKyV/
|
||||
uRXWb29uIHxnqYg7jDmOv/p5xYHTgs/HWbB52PYubV2nAr1xCjEOoxIsFSYQds1B9cEwCvqH65Cn
|
||||
tLkAqblbeMZW2ZxBe3R3hKvzVV/VxvLh6SjVON9fpYg8thcffuOF7le+yJazUUL4He+DMDjWq9AK
|
||||
Ofz6bep/kBmtgfoJwM+P//QSAcPowcpUF2oNZ6zz2+nSwmePI+p847G/eKIJvzzG59zsOKzFsYOQ
|
||||
K7dbrOW2Fq3skRH48rueujh/RYufcwJMQ2nxV0n9AJ7TLgI8vh8G9oZr/eUxZwOs9v74t77MQWMb
|
||||
cFrhAf/4AlnDdwJrkifY9rpZnwu1smBwMix6KRxVF+JFVmFgCasv3MpXtBYPJ4Bf/uR/84P1nlZ7
|
||||
sDrZIba//n972BYB7ByXYrtGZT13ecYB4zNp1N33VT3xaOBgkjCf2l//skRPysHPTmmpcxzLaN4e
|
||||
hgA+5DjA9qrE7rLDyAdAgwrWpdsH9N94Adc0EPGVxRMYXwKRBN1PMVXdwIrGSpzv6PIIbGorh4e+
|
||||
3MP1DrlPevOVIi7ZOp24EH6su4rdp+nqc7o1eoBs74AzdXPK1s/q3OGzmAm9G45c08tbXuGrkUey
|
||||
kblRX81LoMIg2kRkMa4gm8N+nuFq+hrWw/RabzNPWxEHjDPeqWKRfeshgXypxv6WB9eMNRFvIWJ0
|
||||
2pff2GB94EpBT7uoaGRvJDAE6NSDn5/An6TUZ/JZUmgZqkj9n59Mt0YHvTeqMW5QFTFOnnv0KE3g
|
||||
S9JFdsku6EOIL3dINet4ZswwPsbPj1Kz6gX2V/898tsWm5F20lnwVjsISTQS5uAh+/kLkBi+hfeL
|
||||
a4P1Jdkqsh8f8ounaO2yJYeeax+wh5ZTti2ESoDILDVqA61gLdjLCrw/1AjnXaNFQmqqFpJkofzq
|
||||
rXMtPLJDAwf7qpNNGJVs1Squh0shSr5eTWRgnZYoP/5Hf/V9eegnBzk93JJlSSx3+3EH86/+TJxs
|
||||
0sePw2k/v4F97TW7rDrZBuTX8UT98HgBf+P9wlcXapy2a0Q9PQhhnM0FtnxhYKstqykiUS8RepR0
|
||||
ff6ej+jnv9WrEGWdwB9D+OODNnu9o+X1MM8wf9kVVu3rJluaUuNQuz1wVP2spfv14xU0p8ih2lfv
|
||||
j/1pGuH7cYmw7WIXTOZ+10FWVlsif/VmZa3JqHz1IE6Q2mdTzXcK+PmPmzpfmdjbaovW07mmh1sf
|
||||
slwB+wKi+pLjQ35p3JHgYYXvQr3R5+x8shliwwBBWkzU6U6gZkpshYB1Jv7LE8lbEO+wjo869SWv
|
||||
05dVkRQYO+aLajei12xrEg+ixEM07tddLcZJkENjy084v6V3MJfVk4dKTxu6S/U4E19GKoDlnYXY
|
||||
GODodibRq59/95tOWQZi9887hMcW+BsBnrKuv1Qxep42nE8bpGX0OSUdsvqT9NX7N3196dMK3jGT
|
||||
8Y8v0vvF4+DcKcDnC/rK2NVYC9gmUYXNuW3qsTnLPuT4QvnL36gZWjkgVnHF8VDw0ZJ06Rn89OXh
|
||||
y5/nYotm8KwH2R+Ss66z80hWeLZvnb9aj0O09pJigVbYW2TzfG/qL//t4fvVfDCWudFlfc8F4Fuv
|
||||
/RmUj+FvfP/48zGTXH3eo0aA3SFofHFNunruPlao6PcbwCY7qpFgrQGBJ0f18cXakujzOy/uZPTp
|
||||
MVVeTOQbKf/xJh8Yu1GfnFtC/uphR65EMCpVpaLwg2O6D8I3G7n7cIfh5xhT9cu/tk2pwb/n704o
|
||||
5ZrlUOb++m9POIvscxpzD/C6N/scM69s9kdlhPFbcrGfjL07J+uphfkwQiIAS3HZrAQxOoXPHtuq
|
||||
ttdZ6ZxS2NCtgs3XfaOz1H+EEE/jiUZZsNX71wOkcHFaAWP+1X39b5LA5elCbHCtNyyf10xgfUQT
|
||||
tsL2FgkuYHf441Xu+3TNVuUSeSDMjS3+zg9M9ypa4Tk71PjohoW7dNtUhd4hWKm1nwFgr7zI4U9f
|
||||
eN/6Roqu5cBV0yvy+vLHJXx/PAhvpMV6L6VsNHZlBcUYOfiIPM9d3pakKi+xWagfd+eBedIgKXUV
|
||||
E+zue60WY1LeUX1fH4Trmiqa72OmAtDGM7WGjZqtvixX8DQQ++cHojZLzhWyLmSlh52oAvLVI1Ai
|
||||
RPXnznZqwQTOGZJBOn737wAYy9UZXomI/K6Sg4wqh2sMPvsV+Eyyxay5pHsJSmEb00MPqD5yZnqH
|
||||
h13n+gVYAp0gq+GQ8nzr2KrkOXsqr7KFocZdsEfXSm+8/XGEsekH2NIClQnNoUrhV/9hz8nL7Dt/
|
||||
Dfn87OIo0P1sCuaUh6Of8Pi51FM04scpgYdOinE4LZa7ts1gQOsyrth5C3B4r9E5h29gyT48qaK+
|
||||
wh0KfvoIm6P7AsupWCowjKNMjU451bM1cw388lYfda4W8T//8tO3qL40NZ3C4QzltkMkXa5DvV7e
|
||||
xgjD7WVHrbCVo/Vz5Xl4dYMdPnz546/fBNVrfiLsNDZsLTTY/+XBzlu418365DyABK32pQDPOuOC
|
||||
HUGPdRWwffOBvn75E7D3fuRvP2TOft+V3/ltPOOdvrinZw7H10elKh7e+qwr8vyXJ+jLuxx+fAea
|
||||
B9X20bc/Jy7r0wT1W+KoSUI4rHC3DWBQrTr2PmQEM51LE3KPgFDHPmn1ll7VCpW9EhDu64f7Je16
|
||||
CPJDSv/y09/+QXIa8U14V+53fh3gdq5B/Uxn2boeNPJ3v3+8aL7HjgbsVvH9zVqGNVPNWkWFC3yM
|
||||
maeDWZ3SBB57y8YPN1RdsT4ad3iuY5kweZ3YlHnajMjj8PDrxlwZKVgWKN/8JLydsmGcH64FvvyG
|
||||
2l7yqIXJPDXw16+5uJ445C03E7Sl1egPYVHojNJGQy3sK7LVYkVfvL0yg/FWX+h+cT9gDo6z+ePZ
|
||||
1PsQD/A7fbgrWtH09FLWjT7LmRKD24frCT9fB33clpICnVug4PjC37J5rG0Cw3MXUHVBU0buZKqg
|
||||
9pAbIn/95dZpUAKNIvz2x6gwEJl/8Art04akzwyB1VkOFij8JvKRKZbRXDMmwFLv9vj2kMdo2Vyu
|
||||
IXwKZw5rEoEDO5eage7lOyVojsmwGn53/ulzwmXRC3Q/fbbi7o3d8fqu57MUWCiWGxMb57cyrPHE
|
||||
J3Cjdi+q7kUjEz+N3/x4Az7qzjZjiHox9Iz3RL/nq1v20tGAuXUx8SGWnJpwsttA8VE1VK0uh0E0
|
||||
9WsFke0fiJArn+jbLwgB7qwdVvFwcLfpLr//rV93NfV0QZ96CF3zEGMHZCoQxtkq4MtZL1g7GLo7
|
||||
f/UUcJos80EkGy7jnrWKmizkCbt4G7AU94uHCpBSAvNyD2axGTiYB82dgAFag6iatQY3jZZh/Cgs
|
||||
9u0H8gicu+2X56g1PxmbHF6r4ogvQjOBbhaDFZqmusEnutHB+uuH3o2Vo4ar9+58wKqCEsOzsMtX
|
||||
jM03q+pQrfYMHxVniZaW0zuknfPNr75m7CxGBrSLM4+vn2pwl3Qrj8BiRYKdahrc6ZhgCwqaUmNL
|
||||
/zRg+PYf//qLxxBw0fzrXzqiUdCQShyjXz+tpM6QUe/L9+kkXToI1nuGL1qc6uu9VgVk1G6HHRO/
|
||||
f/mz/vgNdpX247L+eWrR1WogVpOyAV9+3EKBozw2/L0BmKy+O0iiTsJJHL0yui0YD6XLsyTAdh71
|
||||
Mj9mH/7zuxXwX//68+d//G4YtN0jf30vBkz5Mv3H/7kq8B/if4xt+nr9vYZAxrTI//n3/76B8M9n
|
||||
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 931fcf607c16eb34-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:53 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=CncSMPCav.9EJL3emmM0sTqugx5GN6_Oy8JPFBssVho-1744933673-1.0.1.1-Q1XMvHbQdrfEWkkCYeeNHwFdZ1NpjAGJ_0jOUYIk_APelFe7nCanjW_xlOj12b3JQql9.iWQDiHvCeYJDTWkdxnNiMQOEiFMYHX5YZXUuJs;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:53 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=unfPTYCpF5COtm5PuZDuaJqlhefP0iibfjsXHc9lKq0-1744933673515-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '75'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-8687b6cbdb-4qpmr
|
||||
x-envoy-upstream-service-time:
|
||||
- '46'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999986'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_b8c884a7fe2bd4732903ecbdc632576d
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
You have access to specific knowledge sources.\nYour personal goal is: Provide
|
||||
information based on knowledge sources\nTo give my best complete final answer
|
||||
to the task respond using the exact following format:\n\nThought: I now can
|
||||
give a great answer\nFinal Answer: Your final answer must be the great and the
|
||||
most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
|
||||
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '926'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jJNNi9swEIbv+RWDLr0ki/NBvm5NYUsplFK29NAuZiKNnWlkjVaSk02X
|
||||
/PdiJxtn2y30YrCeecfvvCM/9QAUG7UEpTeYdOXtYPXp7vbLw7evm4ePmObvt/jrsVgVn9/5dbhj
|
||||
1W8Usv5JOj2rbrRU3lJicSesA2GiputwNpksxuPpbNyCSgzZRlb6NJjIoGLHg1E2mgyy2WA4P6s3
|
||||
wpqiWsL3HgDAU/tsfDpDj2oJWf/5pKIYsSS1vBQBqCC2OVEYI8eELql+B7W4RK61/gGc7EGjg5J3
|
||||
BAhlYxvQxT0FgB/ulh1aeNu+L2EV0BlxbyIUuJPAiUCLlQAcwUkCX68ta3sAI7quyCUywA72bMge
|
||||
AHfIFteWYOtkb8mUBFHqoCneXPsLVNQRm4xcbe0VQOckYZNxm8z9mRwvWVgpfZB1/EOqCnYcN3kg
|
||||
jOKauWMSr1p67AHct5nXL2JUPkjlU55kS+3nhtP5qZ/qVt3R0fQMkyS0V6rFpP9Kv9xQQrbxamtK
|
||||
o96Q6aTdirE2LFegdzX1325e632anF35P+07oDX5RCb3gQzrlxN3ZYGaP+FfZZeUW8MqUtixpjwx
|
||||
hWYThgqs7el+qniIiaq8YFdS8IFPl7TweTZejOajUbbIVO/Y+w0AAP//AwA4a1/QsgMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 931fcf649bdbed40-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:54 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=8vySwO0xqpm0u93_1_rXQwTeEIWa2ei3_CD5sAdoo3o-1744933674-1.0.1.1-iqZDpH5poOUp4Rcnhfrb0N2Z0c2662RBiPEcx_gefNW.m3tBA3qyFa8tmFv7PitH8u9vyYK7jxUwy4lPiSi830QWNbTMgCMTbrJ7iaUV7hY;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:54 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=IXAyT8eWpFERM53ngcYNmaqhocfGbOHWSoe7SFNdoGI-1744933674288-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '489'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999801'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_151f9d0b786f2022f249ee20ea108b43
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,330 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"input": ["Brandon''s favorite color is red and he likes Mexican food."],
|
||||
"model": "text-embedding-3-small", "encoding_format": "base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '137'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1SaWw+yPtfmz59Pced/yrwR2bV9zhAQ2UkRFHEymYAiOxHZtEDfvN99ovdkNicm
|
||||
YiNpu1bXdf1W//Nff/7802V1fp/++feff17VOP3z377PHumU/vPvP//9X3/+/Pnzn7/P/29k3mb5
|
||||
41G9i9/w34/V+5Ev//z7D/9/nvzfQf/+889JZZQeb+UOCJHjaQqtRQfv1mXUaf0OTfTuHjx+CvAU
|
||||
CWC/KEik3pNeZScAwr5Zzkgne4Gqd6jX2+oDW3BxGx8nMkfrxaq0GM2PNaV5G6QZM0P1joRJl32W
|
||||
BVuXtTPPo02jZhRX8gjWdj7MgDz2D+wRexhoUHsaTN9P0RfLw5itFrmbCgzCHVFOdx/wQte1qJvK
|
||||
FxH556YeT0pqoJ0RTNhqPwiskhTe0T7qIpzrwS4arGS24D4uc6y90d4VpMq+wy7hntS8mG297p2B
|
||||
QNrwZ5pV1p4RZ6vPEHEPDtuWVA0L/w451CylgPUZBhFvXaQWWqXwwKobdNmavrcCyvDk4aRDezby
|
||||
2c5Hym33obtXgLKlioYGKUfZovrOBtkSu6cOQbHw6POcJoyE+vmMxtegUst+HNh2ZImhpFTNsJ4V
|
||||
73p1XlWO8NsSscGvH7asThqCqtoe6anKt+6SsI0K91Ef0UtGS5dIB8DD3QV2vtwaybB8lksCj5K7
|
||||
pTjWccSrU5igG+hfVOuSRl+4sPGhtJ4Qtfr4w4g0EQcaRfCgnrKxXVFD8hnSZ6jTk5PN9ax4JxU5
|
||||
T/5MsnTwskU6ARNOHnCJYHpsoPbgFfDzsRYaLe9dPbfLoiDTls800h+6vjruXYDSGiGs3fJFXw5Z
|
||||
2APLfh+ok3y2YDvrfQvLm3/x5fPervlV0QUUF5c33aFAzQRua0JwHeCKb5/FGITQVzsgc1KIg6vA
|
||||
Mmo+Xg5CiY/80LSf2UqSFwdOKDHoMVyBvk6jq0DzTRTs1uUHLFui9ciAyYGq8fHOhDqzzoCpgkE9
|
||||
Ta/q9aW/VnRO/IR6zaVjs8DvQ9Q/byvOji+YzUV7ztGHji4+b4t7LZqHuwO9+NRR9aZ0YFFWKQF8
|
||||
oXu+5D2rgfee2RlmG3/CO+mFXbq0SgJ35cGm+JxKjEnnY6vsjlOI9+aSsPUqdSNMj3FBD3qdMHHA
|
||||
UgCXpjj46xaeI6GBnAZtrfZJOLMKLKg6VrDCz5jaF6K6/LjWGpwt36FHS430JTwlHtrvYIL3q03d
|
||||
JUkWDWVaEtKLvglq9sj4HiRDHtFo3yjuyGe2D/nPY8ER4xwmtq/AQtvoptF7rdiAX9jdgRtn7+Bn
|
||||
tJEZJc2GgD4yNZz3YBjWu3ziAaFrhXenYwImk8wNpOrj/M13t+Y7wz/D3/57Lz0b2K1kHixsVceP
|
||||
ztUy8VYyH90e/AbvXU+sGZ8aDnKhc6LWu9nVPJp3CYqztfBbYCn6DDf9CG16+VBf9VswK6seI7h7
|
||||
nqirt5POrDOr4LC/ExoMssgmx6EzlCfv+s2fnS5e3vIM9VeX0YDfaUxs04EHhXb/4KPzJoC1ac2j
|
||||
jeltac5xgc5gd++B3PYI22Bmw1g9KgFZx+cdGynQMv7ilz7yDuFKkP351LMfTiYsJQCpLhIpWguN
|
||||
76FsiVd6cLRJX4LU5qEFOUQdt3OBGCfJHX7PD/x7H+9L1xThHJQYF+HiUrWYe6DvogzvTlKvM6OP
|
||||
EijuT5i01wcF7Pg6QvgK4pZ6vlzo82dzCmBAD4zuU7MfFvypHbRLREZku91ma5+PBfjGjz8e7mbN
|
||||
sFByKDkcInrs5Iv+oVDhoTppJTYI17LVHD4qvBBo4NB9lmyRDuYMt0bqkQWbJZteYsaBUpZNvFf7
|
||||
Su+cVQ5gr6YI+7u3A0Sev83KJowLXyzuwGV7N5jR68Zc6iWPoB71O63gwiqEtU9sR3wq2yG0nPZE
|
||||
ja7fuKuzmQ10PnUctriPysROHCtogQ3D+JPs3O1a1AGULo+SauHJ0XlpX8ZI350yeqCtmjFn4T0w
|
||||
SHyDz/xqg+2xnU3UZWJP7VURXHaDlgBHg8P00NhvfTy2rgqPRM1pvKIjmPwodNDU8neq2ydTZ2O7
|
||||
C1F5qFKKX8PFHRZvXGFV81eaBMou4wcOjzLwbk/qpxe1/uyMoEAmCc4Yl4GQrZzstsgq+Qe9D4cZ
|
||||
LFkbJCg6NjG9Wy8pWzj/3cBY7VSaymuf0eK2jnBXmRE+bZkJlqdp+NAf6gFbh9saPQ/d7Q6siTtg
|
||||
HG1ubDtLnA8/1fuK/TpfXVbO7gijGR7pQ3e2EVPA2qJ170v+FosWWEPf6iBRgI6dku7Z0ianM2Jl
|
||||
scWq4R/Z9qkd2l98YU3isD7z7UuBtw4V9Hbef+pZu2wEEE/3DCdFdXDF6j7NMOmftg99uXDFME1T
|
||||
MCX6SA/OvAGTcm4tlJ1uFoHzsdQZvxHu8BQ+eurtkiXrcuobMLzW2Tf+TCZka+8or+FpYqypDhBU
|
||||
5eahY3a54ftLBzULDTuF0f3t4X2N7+4QGrsUifHG8We0fdXzbCoOQHjzJlwbKFk/umYDMQsqmunt
|
||||
0RUDj2ko+/QP6ofSW19xoDVQUHHuw289YJf3MiM+vwQ+f5Xf9Xw3OwIT3eNwcu59sDruWUBjOQ/f
|
||||
/FPd+V0FDZKlWPrG71Nf1UJvYOyh2R+rUxQxL88D0KnUoeq8sfTV3+cBME6Pid5uPnDXZBOk6Fyf
|
||||
ZRxe02wQd6LdweOknzHu4zYj42WV0P5QddSbjzv3qydMEMutia3sHgOmk0CDnZ5gjO1HEwm9kUsg
|
||||
7R47cu6j7bBWwVlDfsy9sNHXls5jXV+R6sYt3hdSyJZQdAo4dvGeWtHJGeYUpzEUkpzhg3UT6l5c
|
||||
+hVAw4rxJeUfmejpQYAe7+6CdUM4ZGPt2wW4RvRBTtK2rCdD9yE8lUGK3fdJjNa++ljoKDsbsnkN
|
||||
F53pZyeF72U806h3OnfVpwqiOx8+fbEH2F1lv65QNhUrPp6FYGChsUuQ44QG+XyWpu72qOHhmi4W
|
||||
tqSLrI8lfOXwu/4EWcF+WPwK3OHNF2y8++UP90gq6LuCgdXyqkXisl5N+DS3D38cDjOb2LrpQYap
|
||||
R+1XvGckOz8lKB7gGVtNyA9MLeYO1uSe/I1/sm/kM4Q0FPAh4LfR3JwXH+7ulYbNfa6yZdqPBZyK
|
||||
k0b4Q5zrM4hUAscBB/7mUXRgtsU2hJvLcaDhV691b0uDCJ2G2lcS2YqEX/1Q93NMFi0wIuZTJYQf
|
||||
73yhd1Tm7Lee8Fffn2G0YyR5OTMEiiz81cNzNXkWKOjJwLio9IHXmm6Gh06JCVceLPbyo9BCttTY
|
||||
9EGHhz562zGGhnn1iOQLA1jj65BCw5NaelNnkTGzLGcknMDNp+c0ASztKgWN+mZPeLtvXbYTdx0q
|
||||
J5Zh4/OqgNhc5xWl92tMHca9I4L7xEPcqzDopdvfwGLnxxGQ15nhOMWSPj60xoROc8uwNRhwWC9Z
|
||||
pwFXwzufhMctYKdJilFrhyN1JbQOS+ExCV5KMaA4JUUtkkYkcHekId1J23KYPlc8Q6F+nbHpiAYT
|
||||
/H0cgqbY7nF2/SzDeidTAcXus6exvKsznvVVBWzcaVitLu+azP2OQ3nQ3vFR35rDSuswAN/zi6bc
|
||||
0XZnTnvwEOXrlSyXThyW4v7wIS5eDwJvAj98/VGOXpVWUvVkDBnLrZhAr4MTTQ63NXv1JTThZz8D
|
||||
ejZOlc7sgFNgs9SCr4xwBMOK3AQw8bDHTnhQ3fmcOxVQRVvyW+cdZrNw23RQTsuQXvdvEnXCTezg
|
||||
V/9jXyyFYU7eUwC+649x0A/Z2F6bEH52UovvrviKmGQ0AZQjJPn0W8+WROs0eNPGJ/7uV0ZO4baH
|
||||
7MnLOEorLZtruvGgbGw0uie3sl5W89Er6fFc0KvCDpHYEHuGj6zoaWC/FzDSOMqha60ZvcX3ldEw
|
||||
TRN48Z83ImSSP7ztfeijDzftyLwXm2hpa0GFtVFb1Lrtp3q2XC+AN5+38a1Gu2HN1spCfpC5fu1F
|
||||
Klubq05A9zIdvFPfE2P35xIgpQptvAOew8b+Up1RoEUJWVq31L/+xwKdOjk+SIcxG7/+BT6FmMM7
|
||||
3f/UyzbaqlAe8RG7toOG1RtTC87BpyXKc5gzej51K5DTOiTrz3+MrtlC7Xzf0MygOViFneEgTWMG
|
||||
EU/PdqDQxyr4xruvnP2rK1YCSGDPVkbk1dOjNS83IXynuUV2TjYPrBJYgooP4b/1XAbjwB0IspIX
|
||||
wvjrJxfjcW+g/RjI3/zeEsG1ABUqTNAeopppx5CgQx2bRA53Z33xo9SCX/9CQ6fh9NkQtg5U+qnx
|
||||
pcVadfbYvVV4lOwtdV3dHOayuvLwr5442DAah3uoAilsYuqm2Y0xv0g4aDTWilWeXob+5yerbRHS
|
||||
3OMXtvRjl8DoNu588P0/oVlOZyTprMWHT5jUc1xsW8QB80z3NYb6Kk2tBQzz4vmgrZuaeGNooe/+
|
||||
4Hs7hvo69zsICkuT8eGCz7qwSS0Ic1F442P08CIWJ0n+mx/OT3iumXB8KNAk4Zna25HLuqYJKyV/
|
||||
uRXWb29uIHxnqYg7jDmOv/p5xYHTgs/HWbB52PYubV2nAr1xCjEOoxIsFSYQds1B9cEwCvqH65Cn
|
||||
tLkAqblbeMZW2ZxBe3R3hKvzVV/VxvLh6SjVON9fpYg8thcffuOF7le+yJazUUL4He+DMDjWq9AK
|
||||
Ofz6bep/kBmtgfoJwM+P//QSAcPowcpUF2oNZ6zz2+nSwmePI+p847G/eKIJvzzG59zsOKzFsYOQ
|
||||
K7dbrOW2Fq3skRH48rueujh/RYufcwJMQ2nxV0n9AJ7TLgI8vh8G9oZr/eUxZwOs9v74t77MQWMb
|
||||
cFrhAf/4AlnDdwJrkifY9rpZnwu1smBwMix6KRxVF+JFVmFgCasv3MpXtBYPJ4Bf/uR/84P1nlZ7
|
||||
sDrZIba//n972BYB7ByXYrtGZT13ecYB4zNp1N33VT3xaOBgkjCf2l//skRPysHPTmmpcxzLaN4e
|
||||
hgA+5DjA9qrE7rLDyAdAgwrWpdsH9N94Adc0EPGVxRMYXwKRBN1PMVXdwIrGSpzv6PIIbGorh4e+
|
||||
3MP1DrlPevOVIi7ZOp24EH6su4rdp+nqc7o1eoBs74AzdXPK1s/q3OGzmAm9G45c08tbXuGrkUey
|
||||
kblRX81LoMIg2kRkMa4gm8N+nuFq+hrWw/RabzNPWxEHjDPeqWKRfeshgXypxv6WB9eMNRFvIWJ0
|
||||
2pff2GB94EpBT7uoaGRvJDAE6NSDn5/An6TUZ/JZUmgZqkj9n59Mt0YHvTeqMW5QFTFOnnv0KE3g
|
||||
S9JFdsku6EOIL3dINet4ZswwPsbPj1Kz6gX2V/898tsWm5F20lnwVjsISTQS5uAh+/kLkBi+hfeL
|
||||
a4P1Jdkqsh8f8ounaO2yJYeeax+wh5ZTti2ESoDILDVqA61gLdjLCrw/1AjnXaNFQmqqFpJkofzq
|
||||
rXMtPLJDAwf7qpNNGJVs1Squh0shSr5eTWRgnZYoP/5Hf/V9eegnBzk93JJlSSx3+3EH86/+TJxs
|
||||
0sePw2k/v4F97TW7rDrZBuTX8UT98HgBf+P9wlcXapy2a0Q9PQhhnM0FtnxhYKstqykiUS8RepR0
|
||||
ff6ej+jnv9WrEGWdwB9D+OODNnu9o+X1MM8wf9kVVu3rJluaUuNQuz1wVP2spfv14xU0p8ih2lfv
|
||||
j/1pGuH7cYmw7WIXTOZ+10FWVlsif/VmZa3JqHz1IE6Q2mdTzXcK+PmPmzpfmdjbaovW07mmh1sf
|
||||
slwB+wKi+pLjQ35p3JHgYYXvQr3R5+x8shliwwBBWkzU6U6gZkpshYB1Jv7LE8lbEO+wjo869SWv
|
||||
05dVkRQYO+aLajei12xrEg+ixEM07tddLcZJkENjy084v6V3MJfVk4dKTxu6S/U4E19GKoDlnYXY
|
||||
GODodibRq59/95tOWQZi9887hMcW+BsBnrKuv1Qxep42nE8bpGX0OSUdsvqT9NX7N3196dMK3jGT
|
||||
8Y8v0vvF4+DcKcDnC/rK2NVYC9gmUYXNuW3qsTnLPuT4QvnL36gZWjkgVnHF8VDw0ZJ06Rn89OXh
|
||||
y5/nYotm8KwH2R+Ss66z80hWeLZvnb9aj0O09pJigVbYW2TzfG/qL//t4fvVfDCWudFlfc8F4Fuv
|
||||
/RmUj+FvfP/48zGTXH3eo0aA3SFofHFNunruPlao6PcbwCY7qpFgrQGBJ0f18cXakujzOy/uZPTp
|
||||
MVVeTOQbKf/xJh8Yu1GfnFtC/uphR65EMCpVpaLwg2O6D8I3G7n7cIfh5xhT9cu/tk2pwb/n704o
|
||||
5ZrlUOb++m9POIvscxpzD/C6N/scM69s9kdlhPFbcrGfjL07J+uphfkwQiIAS3HZrAQxOoXPHtuq
|
||||
ttdZ6ZxS2NCtgs3XfaOz1H+EEE/jiUZZsNX71wOkcHFaAWP+1X39b5LA5elCbHCtNyyf10xgfUQT
|
||||
tsL2FgkuYHf441Xu+3TNVuUSeSDMjS3+zg9M9ypa4Tk71PjohoW7dNtUhd4hWKm1nwFgr7zI4U9f
|
||||
eN/6Roqu5cBV0yvy+vLHJXx/PAhvpMV6L6VsNHZlBcUYOfiIPM9d3pakKi+xWagfd+eBedIgKXUV
|
||||
E+zue60WY1LeUX1fH4Trmiqa72OmAtDGM7WGjZqtvixX8DQQ++cHojZLzhWyLmSlh52oAvLVI1Ai
|
||||
RPXnznZqwQTOGZJBOn737wAYy9UZXomI/K6Sg4wqh2sMPvsV+Eyyxay5pHsJSmEb00MPqD5yZnqH
|
||||
h13n+gVYAp0gq+GQ8nzr2KrkOXsqr7KFocZdsEfXSm+8/XGEsekH2NIClQnNoUrhV/9hz8nL7Dt/
|
||||
Dfn87OIo0P1sCuaUh6Of8Pi51FM04scpgYdOinE4LZa7ts1gQOsyrth5C3B4r9E5h29gyT48qaK+
|
||||
wh0KfvoIm6P7AsupWCowjKNMjU451bM1cw388lYfda4W8T//8tO3qL40NZ3C4QzltkMkXa5DvV7e
|
||||
xgjD7WVHrbCVo/Vz5Xl4dYMdPnz546/fBNVrfiLsNDZsLTTY/+XBzlu418365DyABK32pQDPOuOC
|
||||
HUGPdRWwffOBvn75E7D3fuRvP2TOft+V3/ltPOOdvrinZw7H10elKh7e+qwr8vyXJ+jLuxx+fAea
|
||||
B9X20bc/Jy7r0wT1W+KoSUI4rHC3DWBQrTr2PmQEM51LE3KPgFDHPmn1ll7VCpW9EhDu64f7Je16
|
||||
CPJDSv/y09/+QXIa8U14V+53fh3gdq5B/Uxn2boeNPJ3v3+8aL7HjgbsVvH9zVqGNVPNWkWFC3yM
|
||||
maeDWZ3SBB57y8YPN1RdsT4ad3iuY5kweZ3YlHnajMjj8PDrxlwZKVgWKN/8JLydsmGcH64FvvyG
|
||||
2l7yqIXJPDXw16+5uJ445C03E7Sl1egPYVHojNJGQy3sK7LVYkVfvL0yg/FWX+h+cT9gDo6z+ePZ
|
||||
1PsQD/A7fbgrWtH09FLWjT7LmRKD24frCT9fB33clpICnVug4PjC37J5rG0Cw3MXUHVBU0buZKqg
|
||||
9pAbIn/95dZpUAKNIvz2x6gwEJl/8Art04akzwyB1VkOFij8JvKRKZbRXDMmwFLv9vj2kMdo2Vyu
|
||||
IXwKZw5rEoEDO5eage7lOyVojsmwGn53/ulzwmXRC3Q/fbbi7o3d8fqu57MUWCiWGxMb57cyrPHE
|
||||
J3Cjdi+q7kUjEz+N3/x4Az7qzjZjiHox9Iz3RL/nq1v20tGAuXUx8SGWnJpwsttA8VE1VK0uh0E0
|
||||
9WsFke0fiJArn+jbLwgB7qwdVvFwcLfpLr//rV93NfV0QZ96CF3zEGMHZCoQxtkq4MtZL1g7GLo7
|
||||
f/UUcJos80EkGy7jnrWKmizkCbt4G7AU94uHCpBSAvNyD2axGTiYB82dgAFag6iatQY3jZZh/Cgs
|
||||
9u0H8gicu+2X56g1PxmbHF6r4ogvQjOBbhaDFZqmusEnutHB+uuH3o2Vo4ar9+58wKqCEsOzsMtX
|
||||
jM03q+pQrfYMHxVniZaW0zuknfPNr75m7CxGBrSLM4+vn2pwl3Qrj8BiRYKdahrc6ZhgCwqaUmNL
|
||||
/zRg+PYf//qLxxBw0fzrXzqiUdCQShyjXz+tpM6QUe/L9+kkXToI1nuGL1qc6uu9VgVk1G6HHRO/
|
||||
f/mz/vgNdpX247L+eWrR1WogVpOyAV9+3EKBozw2/L0BmKy+O0iiTsJJHL0yui0YD6XLsyTAdh71
|
||||
Mj9mH/7zuxXwX//68+d//G4YtN0jf30vBkz5Mv3H/7kq8B/if4xt+nr9vYZAxrTI//n3/76B8M9n
|
||||
6NrP9D+nrsnf4z///rMV/t41+GfqpvT1/z7/1/dV//Wv/wUAAP//AwBcfFVx4CAAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 931fceef786ded38-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:35 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=fj4RMXSXRDQjE2CFC6CGC3dVcJ8cl2Cbu8alijwMHA8-1744933655-1.0.1.1-M3c3AI4XQa.0GJoanNACuOm2aEL4xjqHR1grxIP3olFvq3e0eFHwQTvCF20YwR_OLiMJUH87eNUwgziawMccsxjR9OVZyDr5._5Wts6CrqA;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:35 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=MSkpJsQZtdyIGvrl2mIwy0a_We8H6CIrS7etFgRBl2Y-1744933655703-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '140'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-84959bbcd5-rzqvq
|
||||
x-envoy-upstream-service-time:
|
||||
- '110'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999986'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_dd3ef61c4765b46ed7db80ddfe261f41
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Information Agent.
|
||||
You have access to specific knowledge sources.\nYour personal goal is: Provide
|
||||
information based on knowledge sources\nTo give my best complete final answer
|
||||
to the task respond using the exact following format:\n\nThought: I now can
|
||||
give a great answer\nFinal Answer: Your final answer must be the great and the
|
||||
most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
What is Brandon''s favorite color?\n\nThis is the expected criteria for your
|
||||
final answer: Brandon''s favorite color.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '926'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLBbtswDL37KwhdeokLx0mbOLd2WIAC63YZdthWGIpMO9xkUZDkpEWR
|
||||
fx/kpLG7dUAvBszHR733yOcEQFAlViDUVgbVWp3efv66LvKPRd7Q/f57vtaf7m6ePjze774sv23E
|
||||
JDJ48wtVeGFdKm6txkBsjrByKAPGqdPFfF7MZtdXVz3QcoU60hob0jmnLRlK8yyfp9kinS5P7C2T
|
||||
Qi9W8CMBAHjuv1GnqfBRrCCbvFRa9F42KFbnJgDhWMeKkN6TD9IEMRlAxSag6aXfgeE9KGmgoR2C
|
||||
hCbKBmn8Hh3AT7MmIzXc9P8ruHXSVGwuPNRyx44CgmLNDsjDRnd4OX7GYd15Ga2aTusRII3hIGNU
|
||||
vcGHE3I4W9LcWMcb/xdV1GTIb0uH0rOJ8n1gK3r0kAA89NF1r9IQ1nFrQxn4N/bPTa+Xx3li2NgI
|
||||
LU5g4CD1qL5cTN6YV1YYJGk/Cl8oqbZYDdRhU7KriEdAMnL9r5q3Zh+dk2neM34AlEIbsCqtw4rU
|
||||
a8dDm8N40P9rO6fcCxYe3Y4UloHQxU1UWMtOH89M+CcfsC1rMg066+h4a7Uts1mRL/M8KzKRHJI/
|
||||
AAAA//8DALRhJdF5AwAA
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 931fcef51a67f947-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 23:47:36 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=7agwu5JV1OJvEFNhvfvqdgWf.HoMyIni9D85soRl3WE-1744933656-1.0.1.1-dKUwAZnjjuuiswFKWGsxpwHNBJUpjhYlZvfZpyNQIejxEJrXMCppgPvtQ9wa4SKezLmKqftvn_H.bAx_AEFJD2EWm5V6R_uK8.odneErR6A;
|
||||
path=/; expires=Fri, 18-Apr-25 00:17:36 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=LdTrzwZYrB6ZyQLY7NdaaHVpDVFvIjYm3arSpNy87wU-1744933656504-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '540'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999802'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_8837be6510731522fd5ac4b75c11d486
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
tests/cassettes/test_task_with_max_execution_time.yaml (new file, 643 lines)
@@ -0,0 +1,643 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
|
||||
an expert researcher, specialized in technology, software engineering, AI and
|
||||
startups. You work as a freelancer and are now working on doing research and
|
||||
analysis for a new customer.\nYour personal goal is: Make the best research
|
||||
and analysis on content about AI and AI agents. Use the tool provided to you.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: what amazing tool\nTool Arguments: {}\nTool
|
||||
Description: My tool\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [what amazing tool], just the name, exactly as it''s written.\nAction
|
||||
Input: the input to the action, just a simple JSON object, enclosed in curly
|
||||
braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
a list of 5 interesting ideas to explore for an article, what makes them unique
|
||||
and interesting.\n\nThis is the expected criteria for your final answer: Bullet
|
||||
point list of 5 interesting ideas.\nyou MUST return the actual complete content
|
||||
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
|
||||
use the tools available and give your best Final Answer, your job depends on
|
||||
it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1666'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.68.2
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.68.2
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-BNNPLDDeGQYegE6neZK6ogJmDOMYs\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1744911223,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I need to generate interesting ideas
|
||||
for an article related to AI and AI agents. \\n\\nAction: what amazing tool
|
||||
\ \\nAction Input: {} \",\n \"refusal\": null,\n \"annotations\":
|
||||
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 326,\n \"completion_tokens\":
|
||||
28,\n \"total_tokens\": 354,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_0392822090\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 931dab4c79581b2e-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 17:33:44 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '736'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999620'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_0767caa75d1851f392ea34a68763b1bc
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: !!binary |
|
||||
CqzaAQokCiIKDHNlcnZpY2UubmFtZRISChBjcmV3QUktdGVsZW1ldHJ5EoLaAQoSChBjcmV3YWku
|
||||
dGVsZW1ldHJ5EpYIChAm+VDVVrKTGFLYrGmqyFN/EgiVIKrLvczC6SoMQ3JldyBDcmVhdGVkMAE5
|
||||
uBnOOYMrNxhBWHDXOYMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEobCg5weXRob25f
|
||||
dmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2Ix
|
||||
NDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokMzI4MzZhOTctMGIyYi00MTAwLTgxZDYtYmIwZWJjY2I2
|
||||
ZDYyShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRj
|
||||
cmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBj
|
||||
cmV3X2ZpbmdlcnByaW50EiYKJGQ5YmEwZDE3LTE5YjMtNGQ5Yi04ZTNmLThiMjNhOGM0YzgyM0o7
|
||||
ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My4yMzg4
|
||||
MzhKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdk
|
||||
NDI0MGEyOTRkIiwgImlkIjogImMzMWMwNzRmLTg5Y2YtNGRkNi04ZTY2LWM3NDM1OTc0NmNkMSIs
|
||||
ICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1h
|
||||
eF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8t
|
||||
bWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlv
|
||||
bj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K+wEK
|
||||
CmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFi
|
||||
ODYiLCAiaWQiOiAiYTNiODU1ODAtZTVjNC00YmU2LWI0ZmItMDU1NDU2Y2RkZWJkIiwgImFzeW5j
|
||||
X2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6
|
||||
ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRk
|
||||
IiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASgAQKED9XhldOUhgSlcVbrT/HuRwSCCF7
|
||||
8YboEaZnKgxUYXNrIENyZWF0ZWQwATlYmeU5gys3GEHwx+c5gys3GEouCghjcmV3X2tleRIiCiA1
|
||||
ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDMyODM2YTk3LTBi
|
||||
MmItNDEwMC04MWQ2LWJiMGViY2NiNmQ2MkouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThk
|
||||
ZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGEzYjg1NTgwLWU1YzQtNGJlNi1iNGZiLTA1
|
||||
NTQ1NmNkZGViZEo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJGQ5YmEwZDE3LTE5YjMtNGQ5Yi04ZTNm
|
||||
LThiMjNhOGM0YzgyM0o6ChB0YXNrX2ZpbmdlcnByaW50EiYKJDc0OTEwMDcxLTM1MWQtNDljNy1h
|
||||
YmZlLTJmMWZhNWUzZTEyYko7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0w
|
||||
NC0xN1QxNDozMzo0My4yMzg4MDFKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokYzMyNmIyMWUtYWJh
|
||||
YS00N2ZjLWE0NGQtMzk2NTQ4ZjczZTRiegIYAYUBAAEAABKYCAoQH3AbYnQyty/P5gHiJzrjLxII
|
||||
PR8P3tNCS74qDENyZXcgQ3JlYXRlZDABOYgCyj6DKzcYQSD71D6DKzcYShsKDmNyZXdhaV92ZXJz
|
||||
aW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24SCQoHMy4xMS4xMkouCghjcmV3X2tleRIi
|
||||
CiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDBkY2JhOGY4
|
||||
LTdhYzQtNDdmMC04ZWIxLWE0ZTJkODFjOGZkYkoeCgxjcmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hp
|
||||
Y2FsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jl
|
||||
d19udW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDJiYzYzYWE1LWU5
|
||||
YzYtNDAxNi1hMTdiLWFmZDM5ZWJlYmEwNko7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQS
|
||||
HAoaMjAyNS0wNC0xN1QxNDozMzo0My4zMjI3OTlKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5
|
||||
IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogImYyY2Y4YWRlLTdj
|
||||
YmYtNDkwMS1hODMwLWE3ZDkxODA4NjI3NCIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6
|
||||
IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGlu
|
||||
Z19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/Ijog
|
||||
ZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6
|
||||
IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdl
|
||||
ZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAiaWQiOiAiMWY3MTljNTktOTJjMC00NGU1
|
||||
LWFmMjUtNzIwYjE1NWE1Njg1IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lu
|
||||
cHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdl
|
||||
YjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQAB
|
||||
AAASgAQKEIVZhLW3PhEOxntJYQn8IYkSCDNELyrmM+eYKgxUYXNrIENyZWF0ZWQwATlYr+0+gys3
|
||||
GEGIJO4+gys3GEouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2Zh
|
||||
M0oxCgdjcmV3X2lkEiYKJDBkY2JhOGY4LTdhYzQtNDdmMC04ZWIxLWE0ZTJkODFjOGZkYkouCgh0
|
||||
YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYK
|
||||
JDFmNzE5YzU5LTkyYzAtNDRlNS1hZjI1LTcyMGIxNTVhNTY4NUo6ChBjcmV3X2ZpbmdlcnByaW50
|
||||
EiYKJDJiYzYzYWE1LWU5YzYtNDAxNi1hMTdiLWFmZDM5ZWJlYmEwNko6ChB0YXNrX2ZpbmdlcnBy
|
||||
aW50EiYKJGJmMDU5YjBiLWFlYmYtNGIzMS04YTc4LTA2ZTlmMjcyZDQ2MEo7Cht0YXNrX2Zpbmdl
|
||||
cnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My4zMjI3NTVKOwoRYWdlbnRf
|
||||
ZmluZ2VycHJpbnQSJgokY2IzZWQ2ZGQtNzEzNC00YTc3LThiMjctZWIwZGRkZGZlMjdiegIYAYUB
|
||||
AAEAABKdAQoQ6S1CnqyOxKwRN/Vq7X81HRIIso7ugOmXnjEqClRvb2wgVXNhZ2UwATnAIk4/gys3
|
||||
GEE4YlU/gys3GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjExNC4wSigKCXRvb2xfbmFtZRIbChlE
|
||||
ZWxlZ2F0ZSB3b3JrIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASlggKENaM
|
||||
zYnzHCL03v+Ihe5ZidsSCIXobzKG03Z6KgxDcmV3IENyZWF0ZWQwATlQ8uM/gys3GEFIfOk/gys3
|
||||
GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEu
|
||||
MTJKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jl
|
||||
d19pZBImCiQ5ZmYzZmU1ZC01YmYyLTRlNzEtODU0ZS05OTAyZGYxYzIxYTRKHAoMY3Jld19wcm9j
|
||||
ZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rh
|
||||
c2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQS
|
||||
JgokOTkwOTQ1YmUtOTczMS00ZTQxLTg5ZDAtYzEzMjljNGQ3NjQ5SjsKG2NyZXdfZmluZ2VycHJp
|
||||
bnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjM0MTIyM0rNAgoLY3Jld19hZ2Vu
|
||||
dHMSvQIKugJbeyJrZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQi
|
||||
OiAiMGM3NDhlNDgtOGNmNC00ZDJhLWFlYWMtZDY5OTUzMDkwZThmIiwgInJvbGUiOiAiU2NvcmVy
|
||||
IiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDI1LCAibWF4X3JwbSI6IG51bGwsICJm
|
||||
dW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRp
|
||||
b25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4
|
||||
X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr7AQoKY3Jld190YXNrcxLsAQrp
|
||||
AVt7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NiIsICJpZCI6ICI2NzBj
|
||||
MjdhOS1kYTc3LTRhNTQtYmYxYS1mM2M0YjVlNTcwNDkiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZh
|
||||
bHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlNjb3JlciIsICJhZ2Vu
|
||||
dF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAidG9vbHNfbmFtZXMi
|
||||
OiBbXX1degIYAYUBAAEAABKABAoQ0PunZB/jswUd8i8ahTz20BIIFtRhrWbDsGIqDFRhc2sgQ3Jl
|
||||
YXRlZDABOejM8z+DKzcYQZAu9D+DKzcYSi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2Rj
|
||||
Mzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokOWZmM2ZlNWQtNWJmMi00ZTcxLTg1NGUtOTkw
|
||||
MmRmMWMyMWE0Si4KCHRhc2tfa2V5EiIKIDI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2
|
||||
SjEKB3Rhc2tfaWQSJgokNjcwYzI3YTktZGE3Ny00YTU0LWJmMWEtZjNjNGI1ZTU3MDQ5SjoKEGNy
|
||||
ZXdfZmluZ2VycHJpbnQSJgokOTkwOTQ1YmUtOTczMS00ZTQxLTg5ZDAtYzEzMjljNGQ3NjQ5SjoK
|
||||
EHRhc2tfZmluZ2VycHJpbnQSJgokMjczZmM3YjYtZGJmYS00MzYyLWEwZTEtNzhhNjJjNDY0OTlh
|
||||
SjsKG3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjM0
|
||||
MTE5NUo7ChFhZ2VudF9maW5nZXJwcmludBImCiQwNTc3MWI4Ny04ZGIzLTRhZGYtOWJhZC0yNzcw
|
||||
ZjgzYTZiYTN6AhgBhQEAAQAAEpgIChA5RacttE9X5amPuTPHLuYDEgiZ/RIthMTU5yoMQ3JldyBD
|
||||
cmVhdGVkMAE5EC+lQIMrNxhBsJGsQIMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEob
|
||||
Cg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5
|
||||
N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokMTc5MzYxNTItNDJmMi00YmY3LWIzOTEt
|
||||
ZTU0MDU1ZTY2NGU4Sh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVyYXJjaGljYWxKEQoLY3Jld19tZW1v
|
||||
cnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2Vu
|
||||
dHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNjI0ZTI1NjEtMDFkOC00YmNkLWJhMjEtZDcx
|
||||
ZTQ0MTZkNGJiSjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0
|
||||
OjMzOjQzLjM1Mzg3M0rNAgoLY3Jld19hZ2VudHMSvQIKugJbeyJrZXkiOiAiOTJlN2ViMTkxNjY0
|
||||
YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiYTEyMTFiNmYtMzFhMy00MzY4LWI5YzItZDNh
|
||||
NjA1NTZlOTk2IiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRl
|
||||
ciI6IDI1LCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxt
|
||||
IjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2Nv
|
||||
ZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVz
|
||||
IjogW119XUr7AQoKY3Jld190YXNrcxLsAQrpAVt7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3
|
||||
MGVkNDA2ZTQ0YWI4NiIsICJpZCI6ICI3ZjJjNGIxMi00MjNhLTRmMTctOTZiMS0zZmEyNmUzMDM3
|
||||
MmMiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJh
|
||||
Z2VudF9yb2xlIjogIlNjb3JlciIsICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVk
|
||||
N2Q0MjQwYTI5NGQiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKABAoQG2i2Mf2AK81I
|
||||
gFDuyTZ4hxIIGTCH8bEW9REqDFRhc2sgQ3JlYXRlZDABOajZvECDKzcYQbArvUCDKzcYSi4KCGNy
|
||||
ZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgok
|
||||
MTc5MzYxNTItNDJmMi00YmY3LWIzOTEtZTU0MDU1ZTY2NGU4Si4KCHRhc2tfa2V5EiIKIDI3ZWYz
|
||||
OGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2SjEKB3Rhc2tfaWQSJgokN2YyYzRiMTItNDIzYS00
|
||||
ZjE3LTk2YjEtM2ZhMjZlMzAzNzJjSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNjI0ZTI1NjEtMDFk
|
||||
OC00YmNkLWJhMjEtZDcxZTQ0MTZkNGJiSjoKEHRhc2tfZmluZ2VycHJpbnQSJgokZjViNmU5YjUt
|
||||
ZjcxMi00YzM3LTkyYjAtMWFjNzQ2ZTYzYWJjSjsKG3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9h
|
||||
dBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjM1Mzg0M0o7ChFhZ2VudF9maW5nZXJwcmludBImCiQ1
|
||||
ZjhkM2YwYS1lODZhLTRiMmUtYWFmMC1jOWMyMDg2N2M1ODR6AhgBhQEAAQAAEpwBChBCoZs2F2Pk
|
||||
GqD1dlo+B0jIEgh/8w+r7HLXDioKVG9vbCBVc2FnZTABOfhQHUGDKzcYQeChJUGDKzcYShsKDmNy
|
||||
ZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjBKJwoJdG9vbF9uYW1lEhoKGEFzayBxdWVzdGlvbiB0byBj
|
||||
b3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEpYIChAtKHG1/6WFVXbuKiUU/tulEggA
|
||||
5PFku/BBtSoMQ3JldyBDcmVhdGVkMAE5qOuzQYMrNxhBYNO5QYMrNxhKGwoOY3Jld2FpX3ZlcnNp
|
||||
b24SCQoHMC4xMTQuMEobCg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIK
|
||||
IDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokNWQzODQyMmQt
|
||||
NTk2MC00NGQ0LWFmZjctYWM5MDFiMjU1NzM5ShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFs
|
||||
ShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19u
|
||||
dW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJGZhNWYxODc1LWRiYTQt
|
||||
NDc2MS04ZDk4LTliNzlmNDg2ZTNlY0o7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
|
||||
MjAyNS0wNC0xN1QxNDozMzo0My4zNzE2MzZKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5Ijog
|
||||
IjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogIjgwZDg1ZDY3LTg3M2Qt
|
||||
NDFhNi05MzY1LWJkODVjYTc5MGI2MyIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZh
|
||||
bHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19s
|
||||
bG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFs
|
||||
c2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIs
|
||||
ICJ0b29sc19uYW1lcyI6IFtdfV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4
|
||||
Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAiaWQiOiAiZWYwNmQ3NTEtZWY2Yy00YjJiLWI2
|
||||
MTQtMmVhMmU1NGM0MjVlIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0
|
||||
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5
|
||||
MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAAS
|
||||
gAQKEJUYH+zYJdlQxAU/SEz07wwSCHIyR0YIxKoQKgxUYXNrIENyZWF0ZWQwATnYg8NBgys3GEFQ
|
||||
7cNBgys3GEouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0ox
|
||||
CgdjcmV3X2lkEiYKJDVkMzg0MjJkLTU5NjAtNDRkNC1hZmY3LWFjOTAxYjI1NTczOUouCgh0YXNr
|
||||
X2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGVm
|
||||
MDZkNzUxLWVmNmMtNGIyYi1iNjE0LTJlYTJlNTRjNDI1ZUo6ChBjcmV3X2ZpbmdlcnByaW50EiYK
|
||||
JGZhNWYxODc1LWRiYTQtNDc2MS04ZDk4LTliNzlmNDg2ZTNlY0o6ChB0YXNrX2ZpbmdlcnByaW50
|
||||
EiYKJDA1ZTU2ZTIzLWI5YjgtNDIwMy05MWYwLTY2ZmE5MDgzNzYzNUo7Cht0YXNrX2ZpbmdlcnBy
|
||||
aW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My4zNzE2MDJKOwoRYWdlbnRfZmlu
|
||||
Z2VycHJpbnQSJgokZGZiMDEzYTItNzg0MC00NDFhLTg1YzMtMzI0OWQ1OGJhNmIzegIYAYUBAAEA
|
||||
ABKWCAoQ67KSQgBBFIpwJAjqKwKTNxIIPHptHAGKIGYqDENyZXcgQ3JlYXRlZDABOeB7T0KDKzcY
|
||||
QVhEVUKDKzcYShsKDmNyZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24S
|
||||
CQoHMy4xMS4xMkouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2Zh
|
||||
M0oxCgdjcmV3X2lkEiYKJDRmOTE0YWZhLTVlMTAtNDU3Ni1hYjJjLWVkZmNlZWQzYTZiYkocCgxj
|
||||
cmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1i
|
||||
ZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKOgoQY3Jld19maW5n
|
||||
ZXJwcmludBImCiQzODFkOTIwYi1iMGE1LTRiNGUtYTQ0OS1kZjg5OGNjZjVmZDdKOwobY3Jld19m
|
||||
aW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuMzgxODc1Ss0CCgtj
|
||||
cmV3X2FnZW50cxK9Agq6Alt7ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0
|
||||
ZCIsICJpZCI6ICI5ZjVhZTRmOC02MjkwLTQ5NTUtOGI2OC00YmNjOTM5ZjhhMDkiLCAicm9sZSI6
|
||||
ICJTY29yZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjog
|
||||
bnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAi
|
||||
ZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFs
|
||||
c2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSvsBCgpjcmV3X3Rh
|
||||
c2tzEuwBCukBW3sia2V5IjogIjI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlk
|
||||
IjogIjViODI1OWU3LWI2Y2YtNGJlYi1iYTRiLWY3MDg4Y2E4YWM3NiIsICJhc3luY19leGVjdXRp
|
||||
b24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVy
|
||||
IiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29s
|
||||
c19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChCDE114uxBCLFPLICeD1DCNEgiFJ3IambqX4yoM
|
||||
VGFzayBDcmVhdGVkMAE5+P9iQoMrNxhBcGljQoMrNxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4
|
||||
MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jld19pZBImCiQ0ZjkxNGFmYS01ZTEwLTQ1NzYt
|
||||
YWIyYy1lZGZjZWVkM2E2YmJKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQw
|
||||
NmU0NGFiODZKMQoHdGFza19pZBImCiQ1YjgyNTllNy1iNmNmLTRiZWItYmE0Yi1mNzA4OGNhOGFj
|
||||
NzZKOgoQY3Jld19maW5nZXJwcmludBImCiQzODFkOTIwYi1iMGE1LTRiNGUtYTQ0OS1kZjg5OGNj
|
||||
ZjVmZDdKOgoQdGFza19maW5nZXJwcmludBImCiRmZWIzZDVjYy0yMzEwLTRhNDgtOWQ5My1jMzQ5
|
||||
MTI0MGU3NTlKOwobdGFza19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6
|
||||
MzM6NDMuMzgxODQ4SjsKEWFnZW50X2ZpbmdlcnByaW50EiYKJDdhYzY3YTkzLTkwNjctNGJjNC1i
|
||||
NmFmLTcxZWVjZmRjNzEzOHoCGAGFAQABAAASmAgKEGmCIFMw9GayxClAketnXWsSCPvb3mKDZYhn
|
||||
KgxDcmV3IENyZWF0ZWQwATkA88JGgys3GEGAhMpGgys3GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcw
|
||||
LjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jld19rZXkSIgogNWU2ZWZm
|
||||
ZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jld19pZBImCiRmNmE1MGVkMy02ODk5LTQ4
|
||||
MTItODVkNy1iNDZlYTQyNzUwN2FKHgoMY3Jld19wcm9jZXNzEg4KDGhpZXJhcmNoaWNhbEoRCgtj
|
||||
cmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVy
|
||||
X29mX2FnZW50cxICGAFKOgoQY3Jld19maW5nZXJwcmludBImCiQzYzRlM2VmZS01YmQ0LTRkNDgt
|
||||
YThmNS0yOGY0NDdhNDI0OGRKOwobY3Jld19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUt
|
||||
MDQtMTdUMTQ6MzM6NDMuNDU2MDg4Ss0CCgtjcmV3X2FnZW50cxK9Agq6Alt7ImtleSI6ICI5MmU3
|
||||
ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICJmNzMwNTE0Mi1jOTViLTQ0MmQt
|
||||
YjJiOS1jYTVhN2IzMzZlYjMiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwg
|
||||
Im1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjog
|
||||
IiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAi
|
||||
YWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9v
|
||||
bHNfbmFtZXMiOiBbXX1dSvsBCgpjcmV3X3Rhc2tzEuwBCukBW3sia2V5IjogIjI3ZWYzOGNjOTlk
|
||||
YTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlkIjogIjU5MDRiMDg5LTMzNzYtNDZhMy04NWU1LTRk
|
||||
ZTc0NDQyYTljYyIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBm
|
||||
YWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRj
|
||||
OTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChD9
|
||||
DkCz0iVsHA4a1S3+yN8lEgjsMseCiYSvFioMVGFzayBDcmVhdGVkMAE5+ATcRoMrNxhB0F7cRoMr
|
||||
NxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jl
|
||||
d19pZBImCiRmNmE1MGVkMy02ODk5LTQ4MTItODVkNy1iNDZlYTQyNzUwN2FKLgoIdGFza19rZXkS
|
||||
IgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQ1OTA0YjA4
|
||||
OS0zMzc2LTQ2YTMtODVlNS00ZGU3NDQ0MmE5Y2NKOgoQY3Jld19maW5nZXJwcmludBImCiQzYzRl
|
||||
M2VmZS01YmQ0LTRkNDgtYThmNS0yOGY0NDdhNDI0OGRKOgoQdGFza19maW5nZXJwcmludBImCiQ3
|
||||
MjExMmY3OS1iMWU3LTRkYTctYTg4YS00NjU3NTJiZTIwZDdKOwobdGFza19maW5nZXJwcmludF9j
|
||||
cmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNDU2MDU5SjsKEWFnZW50X2ZpbmdlcnBy
|
||||
aW50EiYKJGJiMzQ0NzIxLTE3M2QtNGRmNS1iMWRmLWQ2NjBkZjZlZmVjZnoCGAGFAQABAAASnAEK
|
||||
EOg/b+TBCd7kQOaEdNwvgZoSCGYJctohLuuaKgpUb29sIFVzYWdlMAE58JA0R4MrNxhBqOM9R4Mr
|
||||
NxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEonCgl0b29sX25hbWUSGgoYQXNrIHF1ZXN0
|
||||
aW9uIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASlwoKEMn/aYwi4Jc7AohP
|
||||
Y7puRXsSCB83OjVnCZfLKgxDcmV3IENyZWF0ZWQwATlw8dVHgys3GEGA9NtHgys3GEobCg5jcmV3
|
||||
YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jl
|
||||
d19rZXkSIgogZDQyNjA4MzNhYjBjMjBiYjQ0OTIyYzc5OWFhOTZiNGFKMQoHY3Jld19pZBImCiQx
|
||||
ZjA0NjU0YS05ODE5LTQ0MTEtOTVlZC1kMmMwMTJlNWU3YjJKHAoMY3Jld19wcm9jZXNzEgwKCnNl
|
||||
cXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAkob
|
||||
ChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokYjhhYjU0
|
||||
YzEtYjFjZS00OGIyLTlkODMtNDVkNzkyMTkxOWQ0SjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRl
|
||||
ZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjQ3NDIyN0rlAgoLY3Jld19hZ2VudHMS1QIK0gJb
|
||||
eyJrZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiMjg5YzIz
|
||||
NzgtNWUxOC00YzJiLWE1NDMtNzVkOTk5YWUyYWQyIiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJv
|
||||
c2U/IjogdHJ1ZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2Nh
|
||||
bGxpbmdfbGxtIjogImdwdC0zLjUtdHVyYm8tMDEyNSIsICJsbG0iOiAiZ3B0LTQtMDEyNS1wcmV2
|
||||
aWV3IiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9u
|
||||
PyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUrkAwoK
|
||||
Y3Jld190YXNrcxLVAwrSA1t7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4
|
||||
NiIsICJpZCI6ICJiNTAxMWUwNi02YTdmLTQzYWItYWQzNy1mNGI4ODBhMmJlYjgiLCAiYXN5bmNf
|
||||
ZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjog
|
||||
IlNjb3JlciIsICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQi
|
||||
LCAidG9vbHNfbmFtZXMiOiBbXX0sIHsia2V5IjogIjYwOWRlZTM5MTA4OGNkMWM4N2I4ZmE2NmFh
|
||||
NjdhZGJlIiwgImlkIjogIjRiODE5NjQ5LTYyMjQtNGQ3Mi1hZDFkLTM0ODZhYzBkODQwNSIsICJh
|
||||
c3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3Jv
|
||||
bGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBh
|
||||
Mjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChCdiwcYoV5twj//vpjaNfMl
|
||||
EghUv7y5H7+dXioMVGFzayBDcmVhdGVkMAE5wN3kR4MrNxhBsDPlR4MrNxhKLgoIY3Jld19rZXkS
|
||||
IgogZDQyNjA4MzNhYjBjMjBiYjQ0OTIyYzc5OWFhOTZiNGFKMQoHY3Jld19pZBImCiQxZjA0NjU0
|
||||
YS05ODE5LTQ0MTEtOTVlZC1kMmMwMTJlNWU3YjJKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRh
|
||||
NGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiRiNTAxMWUwNi02YTdmLTQzYWItYWQz
|
||||
Ny1mNGI4ODBhMmJlYjhKOgoQY3Jld19maW5nZXJwcmludBImCiRiOGFiNTRjMS1iMWNlLTQ4YjIt
|
||||
OWQ4My00NWQ3OTIxOTE5ZDRKOgoQdGFza19maW5nZXJwcmludBImCiQ3NDcyMTM3Yi0wYzBkLTRi
|
||||
OTEtYTgwYy01YzZkYWQwZmM3YTBKOwobdGFza19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIw
|
||||
MjUtMDQtMTdUMTQ6MzM6NDMuNDc0MTc1SjsKEWFnZW50X2ZpbmdlcnByaW50EiYKJDVlMWU1Nzdh
|
||||
LWY5N2QtNDA2OS04NzNmLTc1ZjYyMDE0ZWJlNnoCGAGFAQABAAASgAQKEA6OsjCwXi+o2MUfKyS+
|
||||
TmgSCOIWlM7TtxNCKgxUYXNrIENyZWF0ZWQwATmQ915Igys3GEHYaF9Igys3GEouCghjcmV3X2tl
|
||||
eRIiCiBkNDI2MDgzM2FiMGMyMGJiNDQ5MjJjNzk5YWE5NmI0YUoxCgdjcmV3X2lkEiYKJDFmMDQ2
|
||||
NTRhLTk4MTktNDQxMS05NWVkLWQyYzAxMmU1ZTdiMkouCgh0YXNrX2tleRIiCiA2MDlkZWUzOTEw
|
||||
ODhjZDFjODdiOGZhNjZhYTY3YWRiZUoxCgd0YXNrX2lkEiYKJDRiODE5NjQ5LTYyMjQtNGQ3Mi1h
|
||||
ZDFkLTM0ODZhYzBkODQwNUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJGI4YWI1NGMxLWIxY2UtNDhi
|
||||
Mi05ZDgzLTQ1ZDc5MjE5MTlkNEo6ChB0YXNrX2ZpbmdlcnByaW50EiYKJGYwZWUxMjY2LWExNzIt
|
||||
NDhiMi1iMTM2LTczN2I3YWNkYWYwNko7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
|
||||
MjAyNS0wNC0xN1QxNDozMzo0My40NzQyMDVKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokNWUxZTU3
|
||||
N2EtZjk3ZC00MDY5LTg3M2YtNzVmNjIwMTRlYmU2egIYAYUBAAEAABL/CQoQOon8z16iVXeCtEnr
|
||||
visR+hII1ztFvfUBPyAqDENyZXcgQ3JlYXRlZDABOYC2CUmDKzcYQVhjEUmDKzcYShsKDmNyZXdh
|
||||
aV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24SCQoHMy4xMS4xMkouCghjcmV3
|
||||
X2tleRIiCiBhOTU0MGNkMGVhYTUzZjY3NTQzN2U5YmQ0ZmE1ZTQ0Y0oxCgdjcmV3X2lkEiYKJDFh
|
||||
MDA5NjZhLTcwYmQtNDc4OS1hZmQxLTY5NjgzMzZjYjc2NkocCgxjcmV3X3Byb2Nlc3MSDAoKc2Vx
|
||||
dWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgCShsK
|
||||
FWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKOgoQY3Jld19maW5nZXJwcmludBImCiRjOGI1ZDUx
|
||||
My0xNzY0LTQyYWQtOGVlYS0wYWU0MDAzZTRmNTRKOwobY3Jld19maW5nZXJwcmludF9jcmVhdGVk
|
||||
X2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNDk0ODMySs0CCgtjcmV3X2FnZW50cxK9Agq6Alt7
|
||||
ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICIwZTg1MzFl
|
||||
YS05MmM4LTQzMGQtYmE0Yy1jMmIxYzkwZDBlMWQiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9z
|
||||
ZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2Nh
|
||||
bGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVk
|
||||
PyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGlt
|
||||
aXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSuQDCgpjcmV3X3Rhc2tzEtUDCtIDW3sia2V5Ijog
|
||||
IjI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlkIjogImI2NWQ1ZGUzLWY5MjAt
|
||||
NDVhZi1hOTgwLTA2NjBkMDU5YzdiZiIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1h
|
||||
bl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5
|
||||
MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJr
|
||||
ZXkiOiAiYjBkMzRhNmY2MjFhN2IzNTgwZDVkMWY0ZTI2NjViOTIiLCAiaWQiOiAiZjk2MDU5NzMt
|
||||
YzYyMi00NzRjLWFhZjktYWJiOWMwZWZhYmQ0IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwg
|
||||
Imh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5
|
||||
IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119
|
||||
XXoCGAGFAQABAAASgAQKEGysuSY/RpRwwsmMtZmmkLgSCB3RCKVN5vMGKgxUYXNrIENyZWF0ZWQw
|
||||
ATmYyRpJgys3GEFwIxtJgys3GEouCghjcmV3X2tleRIiCiBhOTU0MGNkMGVhYTUzZjY3NTQzN2U5
|
||||
YmQ0ZmE1ZTQ0Y0oxCgdjcmV3X2lkEiYKJDFhMDA5NjZhLTcwYmQtNDc4OS1hZmQxLTY5NjgzMzZj
|
||||
Yjc2NkouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0
|
||||
YXNrX2lkEiYKJGI2NWQ1ZGUzLWY5MjAtNDVhZi1hOTgwLTA2NjBkMDU5YzdiZko6ChBjcmV3X2Zp
|
||||
bmdlcnByaW50EiYKJGM4YjVkNTEzLTE3NjQtNDJhZC04ZWVhLTBhZTQwMDNlNGY1NEo6ChB0YXNr
|
||||
X2ZpbmdlcnByaW50EiYKJDZjNDhlMTkxLTViMjEtNDBmNi1iNmQwLWRjMWY3ZmM2YWQzN0o7Cht0
|
||||
YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My40OTQ3NzdK
|
||||
OwoRYWdlbnRfZmluZ2VycHJpbnQSJgokNTZmYTQ1MTYtM2RiNS00Mjk3LTg3NzUtOTI2MWVhNWI4
|
||||
OWI2egIYAYUBAAEAABKABAoQJRURFvAOz5/5e2bQNRT4ChIIpSiy2tnBCrsqDFRhc2sgQ3JlYXRl
|
||||
ZDABOdgreEmDKzcYQSCdeEmDKzcYSi4KCGNyZXdfa2V5EiIKIGE5NTQwY2QwZWFhNTNmNjc1NDM3
|
||||
ZTliZDRmYTVlNDRjSjEKB2NyZXdfaWQSJgokMWEwMDk2NmEtNzBiZC00Nzg5LWFmZDEtNjk2ODMz
|
||||
NmNiNzY2Si4KCHRhc2tfa2V5EiIKIGIwZDM0YTZmNjIxYTdiMzU4MGQ1ZDFmNGUyNjY1YjkySjEK
|
||||
B3Rhc2tfaWQSJgokZjk2MDU5NzMtYzYyMi00NzRjLWFhZjktYWJiOWMwZWZhYmQ0SjoKEGNyZXdf
|
||||
ZmluZ2VycHJpbnQSJgokYzhiNWQ1MTMtMTc2NC00MmFkLThlZWEtMGFlNDAwM2U0ZjU0SjoKEHRh
|
||||
c2tfZmluZ2VycHJpbnQSJgokMWI0YzI5NTYtMDZkOC00NWNjLWFmYWMtNmZhZDk0MzdkNTZmSjsK
|
||||
G3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjQ5NDgw
|
||||
OEo7ChFhZ2VudF9maW5nZXJwcmludBImCiQ1NmZhNDUxNi0zZGI1LTQyOTctODc3NS05MjYxZWE1
|
||||
Yjg5YjZ6AhgBhQEAAQAAEpYIChCt1KD2VBrK6+JIjPGbSQYWEghyEaRdKejivSoMQ3JldyBDcmVh
|
||||
dGVkMAE54Fn7SYMrNxhBMPkBSoMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEobCg5w
|
||||
eXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDVlNmVmZmU2ODBhNWQ5N2Rj
|
||||
Mzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokMDQ4ZDQ5MzctNmU3OC00OWFkLTllZDMtYzVi
|
||||
MjdlNDgyNThlShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQ
|
||||
AEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIY
|
||||
AUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDIxYjViOWQ3LWVjNTktNDBhYi1iZjY1LTk1NzM2M2Fl
|
||||
ZDRkZEo7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0
|
||||
My41MTA0NjZKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3
|
||||
ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogImFkN2ExMTJiLWY5NDQtNDViYy05YzE4LWJjOWQzMDE4
|
||||
NjE1OCIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
|
||||
NSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
|
||||
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
|
||||
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtd
|
||||
fV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQw
|
||||
NmU0NGFiODYiLCAiaWQiOiAiZTJmMDI3YzItZWQ2NC00MDU4LThkNzUtNjQ1OWMzYTllYWIwIiwg
|
||||
ImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRf
|
||||
cm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0
|
||||
MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASgAQKELdQRmZyaC0pfEDI4tAa
|
||||
vRsSCCE13pISM1wEKgxUYXNrIENyZWF0ZWQwATmwBwpKgys3GEG4WQpKgys3GEouCghjcmV3X2tl
|
||||
eRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDA0OGQ0
|
||||
OTM3LTZlNzgtNDlhZC05ZWQzLWM1YjI3ZTQ4MjU4ZUouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5
|
||||
ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGUyZjAyN2MyLWVkNjQtNDA1OC04
|
||||
ZDc1LTY0NTljM2E5ZWFiMEo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDIxYjViOWQ3LWVjNTktNDBh
|
||||
Yi1iZjY1LTk1NzM2M2FlZDRkZEo6ChB0YXNrX2ZpbmdlcnByaW50EiYKJDNmMDU4NDE1LTNkNzUt
|
||||
NDZhNi05OWFjLTIyZmM5OWM4OTBmM0o7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
|
||||
MjAyNS0wNC0xN1QxNDozMzo0My41MTA0NDBKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokMjViMTgx
|
||||
NjYtYjk3My00ZDM3LWI0OTUtMTI4YmIyMzQyNjhmegIYAYUBAAEAABKWCAoQuKzLEfp7NclRHh6f
|
||||
Sm2/nxIIucJxIhUlkkwqDENyZXcgQ3JlYXRlZDABOfDOa0qDKzcYQfh/cUqDKzcYShsKDmNyZXdh
|
||||
aV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24SCQoHMy4xMS4xMkouCghjcmV3
|
||||
X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdjcmV3X2lkEiYKJDhl
|
||||
YzE2ZWQ3LTNiYjItNDkxZC04YjYxLTI3MWYxNmZlOGFlMUocCgxjcmV3X3Byb2Nlc3MSDAoKc2Vx
|
||||
dWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsK
|
||||
FWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAFKOgoQY3Jld19maW5nZXJwcmludBImCiQxN2MxN2Qz
|
||||
NC1mZWZkLTQ1ZTktYWM2NS00ODY2ZTBlNTgwYTBKOwobY3Jld19maW5nZXJwcmludF9jcmVhdGVk
|
||||
X2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNTE4MDI0Ss0CCgtjcmV3X2FnZW50cxK9Agq6Alt7
|
||||
ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICJhODk2OTRm
|
||||
ZS1lNjE1LTQzOWItOTQ4MC02MmVmZTRiNGY4ODUiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9z
|
||||
ZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2Nh
|
||||
bGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVk
|
||||
PyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGlt
|
||||
aXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSvsBCgpjcmV3X3Rhc2tzEuwBCukBW3sia2V5Ijog
|
||||
IjI3ZWYzOGNjOTlkYTRhOGRlZDcwZWQ0MDZlNDRhYjg2IiwgImlkIjogIjMwYjNmYmQ4LWM0MmMt
|
||||
NDhlYi1hNDdlLTAzNzFlOTFmYTAxYiIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1h
|
||||
bl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2NvcmVyIiwgImFnZW50X2tleSI6ICI5
|
||||
MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgB
|
||||
hQEAAQAAEoAEChBSqrS4nl90w/7Z/EGzEvYpEgj54GNjf+XWQyoMVGFzayBDcmVhdGVkMAE5GJ55
|
||||
SoMrNxhBOOx5SoMrNxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1
|
||||
Y2NmYTNKMQoHY3Jld19pZBImCiQ4ZWMxNmVkNy0zYmIyLTQ5MWQtOGI2MS0yNzFmMTZmZThhZTFK
|
||||
LgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19p
|
||||
ZBImCiQzMGIzZmJkOC1jNDJjLTQ4ZWItYTQ3ZS0wMzcxZTkxZmEwMWJKOgoQY3Jld19maW5nZXJw
|
||||
cmludBImCiQxN2MxN2QzNC1mZWZkLTQ1ZTktYWM2NS00ODY2ZTBlNTgwYTBKOgoQdGFza19maW5n
|
||||
ZXJwcmludBImCiQ5MGYyODZhZC1lOWQ4LTRkOWUtYjA5MC05YTQyY2I3ZjYzNDhKOwobdGFza19m
|
||||
aW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNTE3OTk4SjsKEWFn
|
||||
ZW50X2ZpbmdlcnByaW50EiYKJDhhMTliOTQxLWI4MTgtNDU3MC05NmJjLWZlYWQ4MWNkMjk1NXoC
|
||||
GAGFAQABAAASlggKEFFBQgm1QyQTSqKMpRKbGjwSCCoFcjlB3aOLKgxDcmV3IENyZWF0ZWQwATlY
|
||||
/RVLgys3GEEwLR1Lgys3GEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92
|
||||
ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0
|
||||
ODI1Y2NmYTNKMQoHY3Jld19pZBImCiQ5NDYyMGE5YS1jMjc1LTRjMjItYjk1OC0zZmZlYWRiOGFl
|
||||
ZGJKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNy
|
||||
ZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSjoKEGNy
|
||||
ZXdfZmluZ2VycHJpbnQSJgokNzQzZDgxNDYtZTM0My00Njk0LTljODUtMmVmZWRkMzZhYTlkSjsK
|
||||
G2NyZXdfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjUyOTAz
|
||||
MkrNAgoLY3Jld19hZ2VudHMSvQIKugJbeyJrZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0
|
||||
MjQwYTI5NGQiLCAiaWQiOiAiZWE5NTMyZDktZDczNy00ZTIyLWE3MjItMDg2ODQ4NGViMmY5Iiwg
|
||||
InJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDI1LCAibWF4
|
||||
X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1t
|
||||
aW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9u
|
||||
PyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr7AQoK
|
||||
Y3Jld190YXNrcxLsAQrpAVt7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4
|
||||
NiIsICJpZCI6ICIyMmUyMjZiNC1kNmI5LTRiMzgtOTQwMi05NDY0NTBhOWExZjUiLCAiYXN5bmNf
|
||||
ZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjog
|
||||
IlNjb3JlciIsICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQi
|
||||
LCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKABAoQBNXlZK0H3tATZKP2Uw7bBxII0nvw
|
||||
9UluHcgqDFRhc2sgQ3JlYXRlZDABOXhiJ0uDKzcYQZiwJ0uDKzcYSi4KCGNyZXdfa2V5EiIKIDVl
|
||||
NmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokOTQ2MjBhOWEtYzI3
|
||||
NS00YzIyLWI5NTgtM2ZmZWFkYjhhZWRiSi4KCHRhc2tfa2V5EiIKIDI3ZWYzOGNjOTlkYTRhOGRl
|
||||
ZDcwZWQ0MDZlNDRhYjg2SjEKB3Rhc2tfaWQSJgokMjJlMjI2YjQtZDZiOS00YjM4LTk0MDItOTQ2
|
||||
NDUwYTlhMWY1SjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNzQzZDgxNDYtZTM0My00Njk0LTljODUt
|
||||
MmVmZWRkMzZhYTlkSjoKEHRhc2tfZmluZ2VycHJpbnQSJgokYjI3M2Q1OTMtYzcxZC00ZWRmLWI0
|
||||
NmMtMTkyMmIxY2ZmZjY1SjsKG3Rhc2tfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1LTA0
|
||||
LTE3VDE0OjMzOjQzLjUyOTAwMko7ChFhZ2VudF9maW5nZXJwcmludBImCiQ5ZWYxZjc4Ny1lZTIz
|
||||
LTRlNGUtOGJkYi04NGJiNjkwZjlkZjV6AhgBhQEAAQAAEpYIChCLERZYyvqc04xX3gWXPO59EggU
|
||||
L4MaGDu+FyoMQ3JldyBDcmVhdGVkMAE54NLGS4MrNxhBeGbNS4MrNxhKGwoOY3Jld2FpX3ZlcnNp
|
||||
b24SCQoHMC4xMTQuMEobCg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIK
|
||||
IDVlNmVmZmU2ODBhNWQ5N2RjMzg3M2IxNDgyNWNjZmEzSjEKB2NyZXdfaWQSJgokZWI4MDdlMzEt
|
||||
MGFiZS00NjQ0LWEwZjEtMDM4N2ZkZTYzYWQ1ShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFs
|
||||
ShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19u
|
||||
dW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDBkZDRmNDNlLTEzNzEt
|
||||
NDVlNS1iMGNmLWY2ZDE5YTNjNzZkOUo7ChtjcmV3X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
|
||||
MjAyNS0wNC0xN1QxNDozMzo0My41NDA2MzhKzQIKC2NyZXdfYWdlbnRzEr0CCroCW3sia2V5Ijog
|
||||
IjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgImlkIjogImY0ZDM5NzdlLTg4YjAt
|
||||
NDQ2Zi04YzQ5LThkMjc0NDgwOGQ4NiIsICJyb2xlIjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZh
|
||||
bHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19s
|
||||
bG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFs
|
||||
c2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIs
|
||||
ICJ0b29sc19uYW1lcyI6IFtdfV1K+wEKCmNyZXdfdGFza3MS7AEK6QFbeyJrZXkiOiAiMjdlZjM4
|
||||
Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAiaWQiOiAiYjA4MTZmY2QtYjcyMS00YTZkLTk0
|
||||
ODQtNDc4YWM4ZDdkYTg4IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0
|
||||
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTY29yZXIiLCAiYWdlbnRfa2V5IjogIjkyZTdlYjE5
|
||||
MTY2NGM5MzU3ODVlZDdkNDI0MGEyOTRkIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAAS
|
||||
gAQKEB0emJtWTE8969Cle2pbw8QSCFZbdjoOT655KgxUYXNrIENyZWF0ZWQwATlgt9VLgys3GEFo
|
||||
CdZLgys3GEouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0ox
|
||||
CgdjcmV3X2lkEiYKJGViODA3ZTMxLTBhYmUtNDY0NC1hMGYxLTAzODdmZGU2M2FkNUouCgh0YXNr
|
||||
X2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJGIw
|
||||
ODE2ZmNkLWI3MjEtNGE2ZC05NDg0LTQ3OGFjOGQ3ZGE4OEo6ChBjcmV3X2ZpbmdlcnByaW50EiYK
|
||||
JDBkZDRmNDNlLTEzNzEtNDVlNS1iMGNmLWY2ZDE5YTNjNzZkOUo6ChB0YXNrX2ZpbmdlcnByaW50
|
||||
EiYKJGM1ODIyZGM4LWIyYmYtNDUyZS04YjQ2LWRkNjAyMDNkNTA0Zko7Cht0YXNrX2ZpbmdlcnBy
|
||||
aW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My41NDA2MDlKOwoRYWdlbnRfZmlu
|
||||
Z2VycHJpbnQSJgokZjZhMjA0MDYtOTM0Yy00ZjVmLWI1MzMtMDYwMTQwMGUxMTM1egIYAYUBAAEA
|
||||
ABL4BwoQxJQoNY/stf/qihFVNGkp1hII9x5mkc7Cz5AqDENyZXcgQ3JlYXRlZDABOcB+REyDKzcY
|
||||
QYA7SkyDKzcYShsKDmNyZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjBKGwoOcHl0aG9uX3ZlcnNpb24S
|
||||
CQoHMy4xMS4xMkouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2Zh
|
||||
M0oxCgdjcmV3X2lkEiYKJDY3NmIxNjlhLTI5YTYtNDVhOC05NmNjLTY4MjMyNzgzYmU3NUoeCgxj
|
||||
cmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hpY2FsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251
|
||||
bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUo6ChBjcmV3X2Zp
|
||||
bmdlcnByaW50EiYKJDMzZDRkZWU1LWIxOTEtNDAzYy04OWEwLTYyNWJiNjVmMmYxOUo7ChtjcmV3
|
||||
X2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoaMjAyNS0wNC0xN1QxNDozMzo0My41NDg4OThKzQIK
|
||||
C2NyZXdfYWdlbnRzEr0CCroCW3sia2V5IjogIjkyZTdlYjE5MTY2NGM5MzU3ODVlZDdkNDI0MGEy
|
||||
OTRkIiwgImlkIjogImE0Mzc1OWYyLTY0NDEtNDNiMy1hOGRjLWYxYmQzMTU3MDdlZiIsICJyb2xl
|
||||
IjogIlNjb3JlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0i
|
||||
OiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIs
|
||||
ICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBm
|
||||
YWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K2wEKCmNyZXdf
|
||||
dGFza3MSzAEKyQFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODYiLCAi
|
||||
aWQiOiAiMDVmYzk4OWItMDY5Mi00ODAzLTgyMTMtMWQ2ODY5YTYwMTBlIiwgImFzeW5jX2V4ZWN1
|
||||
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJOb25l
|
||||
IiwgImFnZW50X2tleSI6IG51bGwsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChDT
|
||||
QOB/rAOBZumhFuP0yYFYEggS6cvMoHzBYioMVGFzayBDcmVhdGVkMAE5cB9dTIMrNxhBeHFdTIMr
|
||||
NxhKLgoIY3Jld19rZXkSIgogNWU2ZWZmZTY4MGE1ZDk3ZGMzODczYjE0ODI1Y2NmYTNKMQoHY3Jl
|
||||
d19pZBImCiQ2NzZiMTY5YS0yOWE2LTQ1YTgtOTZjYy02ODIzMjc4M2JlNzVKLgoIdGFza19rZXkS
|
||||
IgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQwNWZjOTg5
|
||||
Yi0wNjkyLTQ4MDMtODIxMy0xZDY4NjlhNjAxMGVKOgoQY3Jld19maW5nZXJwcmludBImCiQzM2Q0
|
||||
ZGVlNS1iMTkxLTQwM2MtODlhMC02MjViYjY1ZjJmMTlKOgoQdGFza19maW5nZXJwcmludBImCiQ2
|
||||
MGIzYTVkNi04MWUxLTQzOGYtOTE3Yy0yNjU4MzU4NjdmNzZKOwobdGFza19maW5nZXJwcmludF9j
|
||||
cmVhdGVkX2F0EhwKGjIwMjUtMDQtMTdUMTQ6MzM6NDMuNTQ4ODcwSjsKEWFnZW50X2ZpbmdlcnBy
|
||||
aW50EiYKJDgwZjQ2NDkxLWM0YjItNDExNy05MTM2LTZhOTQxZjI5ODBiZHoCGAGFAQABAAASnAEK
|
||||
EObnhSrs8UQiBg/0Bricu2cSCMFW2rX4WlIFKgpUb29sIFVzYWdlMAE5CJ26TIMrNxhBmE/DTIMr
|
||||
NxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMTQuMEonCgl0b29sX25hbWUSGgoYQXNrIHF1ZXN0
|
||||
aW9uIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAAS0AoKEBk04hfXjSFQOWWK
|
||||
xJFMHB8SCLGFr6R51/OhKgxDcmV3IENyZWF0ZWQwATl4TzNNgys3GEFIMzlNgys3GEobCg5jcmV3
|
||||
YWlfdmVyc2lvbhIJCgcwLjExNC4wShsKDnB5dGhvbl92ZXJzaW9uEgkKBzMuMTEuMTJKLgoIY3Jl
|
||||
d19rZXkSIgogNzQyNzU3MzEyZWY3YmI0ZWUwYjA2NjJkMWMyZTIxNzlKMQoHY3Jld19pZBImCiQ2
|
||||
MWRiMTdjYi0xODQ3LTRmYzktOWNkMy1hMDY3ZjRkOGExMzRKHAoMY3Jld19wcm9jZXNzEgwKCnNl
|
||||
cXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUob
|
||||
ChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNjQ3OGY2
|
||||
MjItZGVlOC00OWYyLTljMDktOGNiOTBjMjEwYzNjSjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRl
|
||||
ZF9hdBIcChoyMDI1LTA0LTE3VDE0OjMzOjQzLjU2NDI2NEqGBQoLY3Jld19hZ2VudHMS9gQK8wRb
|
||||
eyJrZXkiOiAiODljZjMxMWI0OGI1MjE2OWQ0MmYzOTI1YzViZTFjNWEiLCAiaWQiOiAiM2E3YjUx
|
||||
ZGItYzhhZC00ZDU3LWE1OGUtOGIzY2EyYzIxZTI2IiwgInJvbGUiOiAiTWFuYWdlciIsICJ2ZXJi
|
||||
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
|
||||
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
|
||||
ZWQ/IjogdHJ1ZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xp
|
||||
bWl0IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1
|
||||
ZWQ3ZDQyNDBhMjk0ZCIsICJpZCI6ICIxNWZiMjExOC0zOTE0LTQ2ZTctODRkZi05M2E5ZDA0ZWVj
|
||||
M2YiLCAicm9sZSI6ICJTY29yZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjUs
|
||||
ICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0
|
||||
LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1
|
||||
dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
|
||||
/AEKCmNyZXdfdGFza3MS7QEK6gFbeyJrZXkiOiAiMjdlZjM4Y2M5OWRhNGE4ZGVkNzBlZDQwNmU0
|
||||
NGFiODYiLCAiaWQiOiAiMzcwNTE2YzQtNWNhMC00NDBjLWE3YTctNWFhMjZiMDQ4MmFmIiwgImFz
|
||||
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
|
||||
ZSI6ICJNYW5hZ2VyIiwgImFnZW50X2tleSI6ICI4OWNmMzExYjQ4YjUyMTY5ZDQyZjM5MjVjNWJl
|
||||
MWM1YSIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEoAEChBGCATWQjaQu6Zpx66QI8ik
|
||||
Egg05IWKCa/6oyoMVGFzayBDcmVhdGVkMAE54CVFTYMrNxhB0HtFTYMrNxhKLgoIY3Jld19rZXkS
|
||||
IgogNzQyNzU3MzEyZWY3YmI0ZWUwYjA2NjJkMWMyZTIxNzlKMQoHY3Jld19pZBImCiQ2MWRiMTdj
|
||||
Yi0xODQ3LTRmYzktOWNkMy1hMDY3ZjRkOGExMzRKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRh
|
||||
NGE4ZGVkNzBlZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQzNzA1MTZjNC01Y2EwLTQ0MGMtYTdh
|
||||
Ny01YWEyNmIwNDgyYWZKOgoQY3Jld19maW5nZXJwcmludBImCiQ2NDc4ZjYyMi1kZWU4LTQ5ZjIt
|
||||
OWMwOS04Y2I5MGMyMTBjM2NKOgoQdGFza19maW5nZXJwcmludBImCiQxMWU1ZDRhNi04ODc5LTQx
|
||||
ZDktYWE3ZS04OTdhNDMzMDhlZWVKOwobdGFza19maW5nZXJwcmludF9jcmVhdGVkX2F0EhwKGjIw
|
||||
MjUtMDQtMTdUMTQ6MzM6NDMuNTY0MjMzSjsKEWFnZW50X2ZpbmdlcnByaW50EiYKJGUyMzRlYzA5
|
||||
LTk3MzktNGE3Zi04MDgxLTBhNjJkYzNlNzY1ZHoCGAGFAQABAAASnQEKEFjGvzVUde99Ey+6qDaD
|
||||
oG4SCFWLiXG8McaKKgpUb29sIFVzYWdlMAE5kI2lTYMrNxhBMG2tTYMrNxhKGwoOY3Jld2FpX3Zl
|
||||
cnNpb24SCQoHMC4xMTQuMEooCgl0b29sX25hbWUSGwoZRGVsZWdhdGUgd29yayB0byBjb3dvcmtl
|
||||
ckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEooIChCB+QR8dk6jPImbeJiKh5UYEgj00bY0zkAX
|
||||
LSoMQ3JldyBDcmVhdGVkMAE5cAkrToMrNxhBKNM1ToMrNxhKGwoOY3Jld2FpX3ZlcnNpb24SCQoH
|
||||
MC4xMTQuMEobCg5weXRob25fdmVyc2lvbhIJCgczLjExLjEySi4KCGNyZXdfa2V5EiIKIDMwM2I4
|
||||
ZWRkNWIwMDg3MTBkNzZhYWY4MjVhNzA5ZTU1SjEKB2NyZXdfaWQSJgokNmUzM2ZiOTItNTZjMC00
|
||||
OTY3LTk1MjItZGQ3ZWFhOWY5NjFlSh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVyYXJjaGljYWxKEQoL
|
||||
Y3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJl
|
||||
cl9vZl9hZ2VudHMSAhgBSjoKEGNyZXdfZmluZ2VycHJpbnQSJgokNDY5MjUyODctOTY2OS00MTRl
|
||||
LWEwNzctZGVjNDE2ZTE3NDRlSjsKG2NyZXdfZmluZ2VycHJpbnRfY3JlYXRlZF9hdBIcChoyMDI1
|
||||
LTA0LTE3VDE0OjMzOjQzLjU4MDUyM0rfAgoLY3Jld19hZ2VudHMSzwIKzAJbeyJrZXkiOiAiOTJl
|
||||
N2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiNzllZTU5MjktNjdlMy00NjNi
|
||||
LTljYjEtMjFlNTdjNjZlNGQ4IiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogZmFsc2Us
|
||||
ICJtYXhfaXRlciI6IDI1LCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6
|
||||
ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwg
|
||||
ImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRv
|
||||
b2xzX25hbWVzIjogWyJzY29yaW5nX2V4YW1wbGVzIl19XUrbAQoKY3Jld190YXNrcxLMAQrJAVt7
|
||||
ImtleSI6ICJmMzU3NWIwMTNjMjI5NGI3Y2M4ZTgwMzE1NWM4YmE0NiIsICJpZCI6ICJhZTlmMjgz
|
||||
MC1hY2NhLTRiNmQtOGRjNy01MjMzZmZlYmJhM2QiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNl
|
||||
LCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIk5vbmUiLCAiYWdlbnRfa2V5
|
||||
IjogbnVsbCwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASgAQKEHOf6gKzlTQXpW3qTaF+
|
||||
AlQSCIXd9+wt3hyiKgxUYXNrIENyZWF0ZWQwATkws0hOgys3GEE4BUlOgys3GEouCghjcmV3X2tl
|
||||
eRIiCiAzMDNiOGVkZDViMDA4NzEwZDc2YWFmODI1YTcwOWU1NUoxCgdjcmV3X2lkEiYKJDZlMzNm
|
||||
YjkyLTU2YzAtNDk2Ny05NTIyLWRkN2VhYTlmOTYxZUouCgh0YXNrX2tleRIiCiBmMzU3NWIwMTNj
|
||||
MjI5NGI3Y2M4ZTgwMzE1NWM4YmE0NkoxCgd0YXNrX2lkEiYKJGFlOWYyODMwLWFjY2EtNGI2ZC04
|
||||
ZGM3LTUyMzNmZmViYmEzZEo6ChBjcmV3X2ZpbmdlcnByaW50EiYKJDQ2OTI1Mjg3LTk2NjktNDE0
|
||||
ZS1hMDc3LWRlYzQxNmUxNzQ0ZUo6ChB0YXNrX2ZpbmdlcnByaW50EiYKJDU4ZGVjODNjLWFjZTQt
|
||||
NGI1Ni05YzhjLWQyZGQ1YjVlMWYyNUo7Cht0YXNrX2ZpbmdlcnByaW50X2NyZWF0ZWRfYXQSHAoa
|
||||
MjAyNS0wNC0xN1QxNDozMzo0My41ODA0OTNKOwoRYWdlbnRfZmluZ2VycHJpbnQSJgokMGVjNzAw
|
||||
NWEtOGMzOC00N2RlLWI5YzctNjc3ZmFkOTU2ZmMzegIYAYUBAAEAABJpChDdogUTH/ocvPeyU/F2
|
||||
dgcNEghEqOA/7aEsiyoQVG9vbCBVc2FnZSBFcnJvcjABOVBbt06DKzcYQajPvU6DKzcYShsKDmNy
|
||||
ZXdhaV92ZXJzaW9uEgkKBzAuMTE0LjB6AhgBhQEAAQAAEp0BChD7kRw1jX3K2nusXZXPvHyGEghl
|
||||
QFQp8mFhuCoKVG9vbCBVc2FnZTABORCS/E6DKzcYQXDPBE+DKzcYShsKDmNyZXdhaV92ZXJzaW9u
|
||||
EgkKBzAuMTE0LjBKKAoJdG9vbF9uYW1lEhsKGURlbGVnYXRlIHdvcmsgdG8gY293b3JrZXJKDgoI
|
||||
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '27952'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
User-Agent:
|
||||
- OTel-OTLP-Exporter-Python/1.31.1
|
||||
method: POST
|
||||
uri: https://telemetry.crewai.com:4319/v1/traces
|
||||
response:
|
||||
body:
|
||||
string: "\n\0"
|
||||
headers:
|
||||
Content-Length:
|
||||
- '2'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 17:33:47 GMT
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,81 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"contents": [{"role": "user", "parts": [{"text": "\nCurrent Task: Give
|
||||
me a list of 5 interesting ideas to explore for an article, what makes them
|
||||
unique and interesting.\n\nThis is the expected criteria for your final answer:
|
||||
Bullet point list of 5 interesting ideas.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}]}], "system_instruction": {"parts": [{"text": "You are
|
||||
Researcher. You''re an expert researcher, specialized in technology, software
|
||||
engineering, AI and startups. You work as a freelancer and are now working on
|
||||
doing research and analysis for a new customer.\nYour personal goal is: Make
|
||||
the best research and analysis on content about AI and AI agents. Use the tool
|
||||
provided to you.\nYou ONLY have access to the following tools, and should NEVER
|
||||
make up tools that are not listed here:\n\nTool Name: what amazing tool\nTool
|
||||
Arguments: {}\nTool Description: My tool\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [what amazing tool], just the name, exactly
|
||||
as it''s written.\nAction Input: the input to the action, just a simple JSON
|
||||
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"}]}, "generationConfig":
|
||||
{"temperature": 0.7, "stop_sequences": ["\nObservation:"]}}'
|
||||
headers:
|
||||
accept:
|
||||
- '*/*'
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1718'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- generativelanguage.googleapis.com
|
||||
user-agent:
|
||||
- litellm/1.60.2
|
||||
method: POST
|
||||
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAC/61STWvbQBC961cMe+klFnJUy8S30JZiaGloTQlEOaylibRktavujtKkxv+9s3Lk
|
||||
rE2PFUha9r2Z9+ZjlwCISppa1ZLQixXc8Q3AbvwGzBpCQwxMV3zZS0dv3MOzi85MIXwOQWLT2qFp
|
||||
aQVrMIg1kIUGDTpWA1Wj9PBgHUgDnFJVGkFu7UBwvea7evwxnXwKsH6nNQwegVp+rdUhV4u6hw5h
|
||||
66Qynqzr0tKU5roiZc0KfreSQHbyjzLNGDNBsDb9wK52pfg1oHsp2WspPk/OFqC4bIeeQuA/feJz
|
||||
r60L8LnXC6ZWgw8QC40WOvmIHlBW7ZgMBqNYdgyLhNJS7EXUxv3xfH/x1nxnNYbOdrZGPdH3E0E8
|
||||
KKN8+50dWxNoPzbfbsQRlU/NF9v0zm7D/GZZml1dLoti8X4+X+RFvszzPJmkR1ExeK7qK5LkBZHH
|
||||
NRCcoutpYx/RfLDDuCB5nh10ooU6IRTLV5wsSX0aezVhUWL/kWWVjjctWkKuX2pFL+OWfbrdiKhH
|
||||
dOZr6lISNfPc5X9SK5anYsnrcA7z+onOq8NgGux4VLN5uphxzbMsuxTJPvkLk2fgU5EDAAA=
|
||||
headers:
|
||||
Alt-Svc:
|
||||
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json; charset=UTF-8
|
||||
Date:
|
||||
- Thu, 17 Apr 2025 19:00:14 GMT
|
||||
Server:
|
||||
- scaffolding on HTTPServer2
|
||||
Server-Timing:
|
||||
- gfet4t7; dur=1972
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
Vary:
|
||||
- Origin
|
||||
- X-Origin
|
||||
- Referer
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
X-Frame-Options:
|
||||
- SAMEORIGIN
|
||||
X-XSS-Protection:
|
||||
- '0'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
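The recorded response above is stored as a gzip-compressed YAML `!!binary` scalar, which makes the cassette hard to read by eye. Below is a minimal sketch of how such a body can be inspected locally; the cassette path is a hypothetical placeholder, and the nested key layout is assumed from the `interactions` / `response` / `body` / `string` structure visible above.

```python
import gzip
import json

import yaml  # PyYAML

# Hypothetical cassette path, for illustration only.
CASSETTE_PATH = "tests/cassettes/example_cassette.yaml"

with open(CASSETTE_PATH) as fh:
    cassette = yaml.safe_load(fh)

# yaml.safe_load turns !!binary scalars into bytes; the recorded body is gzipped JSON.
raw = cassette["interactions"][0]["response"]["body"]["string"]
payload = json.loads(gzip.decompress(raw).decode("utf-8"))
print(sorted(payload.keys()))
```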
tests/integration/elasticsearch_integration_test.py (new file, 90 lines)
@@ -0,0 +1,90 @@
"""Integration test for Elasticsearch with CrewAI."""

import os
import unittest

import pytest

from crewai import Agent, Crew, Task
from crewai.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource


@pytest.mark.skipif(
    os.environ.get("RUN_ELASTICSEARCH_TESTS") != "true",
    reason="Elasticsearch tests require RUN_ELASTICSEARCH_TESTS=true"
)
class TestElasticsearchIntegration(unittest.TestCase):
    """Integration test for Elasticsearch with CrewAI."""

    def test_crew_with_elasticsearch_memory(self):
        """Test a crew with Elasticsearch memory."""
        researcher = Agent(
            role="Researcher",
            goal="Research a topic",
            backstory="You are a researcher who loves to find information.",
        )

        writer = Agent(
            role="Writer",
            goal="Write about a topic",
            backstory="You are a writer who loves to write about topics.",
        )

        research_task = Task(
            description="Research about AI",
            expected_output="Information about AI",
            agent=researcher,
        )

        write_task = Task(
            description="Write about AI",
            expected_output="Article about AI",
            agent=writer,
            context=[research_task],
        )

        crew = Crew(
            agents=[researcher, writer],
            tasks=[research_task, write_task],
            memory_config={"provider": "elasticsearch"},
        )

        result = crew.kickoff()

        self.assertIsNotNone(result)

    def test_crew_with_elasticsearch_knowledge(self):
        """Test a crew with Elasticsearch knowledge."""
        content = "AI is a field of computer science that focuses on creating machines that can perform tasks that typically require human intelligence."
        string_source = StringKnowledgeSource(
            content=content, metadata={"topic": "AI"}
        )

        knowledge = Knowledge(
            collection_name="test",
            sources=[string_source],
            storage_provider="elasticsearch",
        )

        agent = Agent(
            role="AI Expert",
            goal="Explain AI",
            backstory="You are an AI expert who loves to explain AI concepts.",
            knowledge=[knowledge],
        )

        task = Task(
            description="Explain what AI is",
            expected_output="Explanation of AI",
            agent=agent,
        )

        crew = Crew(
            agents=[agent],
            tasks=[task],
        )

        result = crew.kickoff()

        self.assertIsNotNone(result)
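These integration tests are skipped unless the suite is explicitly opted in, since they expect a reachable Elasticsearch cluster. A minimal sketch of a local run follows; the `RUN_ELASTICSEARCH_TESTS` guard and the three file paths come from the diff itself, while the cluster URL variable name and the `pytest.main` invocation are illustrative assumptions.

```python
import os

import pytest

# Opt in to the Elasticsearch suites (guard variable taken from the skipif decorators above).
os.environ["RUN_ELASTICSEARCH_TESTS"] = "true"
# Assumed variable name for pointing the storage at a local cluster.
os.environ.setdefault("ELASTICSEARCH_URL", "http://localhost:9200")

pytest.main([
    "tests/integration/elasticsearch_integration_test.py",
    "tests/knowledge/elasticsearch_knowledge_storage_test.py",
    "tests/memory/elasticsearch_storage_test.py",
])
```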
tests/knowledge/elasticsearch_knowledge_storage_test.py (new file, 94 lines)
@@ -0,0 +1,94 @@
"""Test Elasticsearch knowledge storage functionality."""

import os
import unittest
from unittest.mock import MagicMock, patch

import pytest

from crewai.knowledge.storage.elasticsearch_knowledge_storage import (
    ElasticsearchKnowledgeStorage,
)


@pytest.mark.skipif(
    os.environ.get("RUN_ELASTICSEARCH_TESTS") != "true",
    reason="Elasticsearch tests require RUN_ELASTICSEARCH_TESTS=true"
)
class TestElasticsearchKnowledgeStorage(unittest.TestCase):
    """Test Elasticsearch knowledge storage functionality."""

    def setUp(self):
        """Set up test fixtures."""
        self.es_mock = MagicMock()
        self.es_mock.indices.exists.return_value = False

        self.embedder_mock = MagicMock()
        self.embedder_mock.embed_documents.return_value = [[0.1, 0.2, 0.3]]

        self.es_patcher = patch(
            "crewai.knowledge.storage.elasticsearch_knowledge_storage.Elasticsearch",
            return_value=self.es_mock
        )
        self.es_class_mock = self.es_patcher.start()

        self.storage = ElasticsearchKnowledgeStorage(
            embedder_config=self.embedder_mock,
            collection_name="test"
        )
        self.storage.initialize_knowledge_storage()

    def tearDown(self):
        """Tear down test fixtures."""
        self.es_patcher.stop()

    def test_initialization(self):
        """Test initialization of Elasticsearch knowledge storage."""
        self.es_class_mock.assert_called_once()

        self.es_mock.indices.create.assert_called_once()

    def test_save(self):
        """Test saving to Elasticsearch knowledge storage."""
        self.storage.save(["Test document 1", "Test document 2"], {"source": "test"})

        self.assertEqual(self.es_mock.index.call_count, 2)

        self.assertEqual(self.embedder_mock.embed_documents.call_count, 2)

    def test_search(self):
        """Test searching in Elasticsearch knowledge storage."""
        self.es_mock.search.return_value = {
            "hits": {
                "hits": [
                    {
                        "_id": "test_id",
                        "_score": 1.5,  # Score between 1-2 (Elasticsearch range)
                        "_source": {
                            "text": "Test document",
                            "metadata": {"source": "test"},
                        }
                    }
                ]
            }
        }

        results = self.storage.search(["test query"])

        self.es_mock.search.assert_called_once()

        self.assertEqual(len(results), 1)
        self.assertEqual(results[0]["id"], "test_id")
        self.assertEqual(results[0]["context"], "Test document")
        self.assertEqual(results[0]["metadata"], {"source": "test"})
        self.assertEqual(results[0]["score"], 0.5)  # Adjusted to 0-1 range

    def test_reset(self):
        """Test resetting Elasticsearch knowledge storage."""
        self.es_mock.indices.exists.return_value = True

        self.storage.reset()

        self.es_mock.indices.delete.assert_called_once()

        self.assertEqual(self.es_mock.indices.create.call_count, 2)
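The search test above pins down a score contract: Elasticsearch similarity scores come back in a 1–2 range, and the storage layer is expected to return them in a 0–1 range (a raw `_score` of 1.5 becomes 0.5). The sketch below shows a hit-conversion helper consistent with those assertions; it is inferred from the tests rather than taken from the shipped implementation, and the simple `raw - 1.0` shift is an assumption.

```python
from typing import Any, Dict, List


def normalize_hits(hits: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Convert raw Elasticsearch hits into the result shape the tests assert on.

    Sketch only: assumes cosine-style scores in the 1-2 range shifted down to 0-1.
    """
    results = []
    for hit in hits:
        results.append(
            {
                "id": hit["_id"],
                "context": hit["_source"]["text"],
                "metadata": hit["_source"].get("metadata", {}),
                "score": hit["_score"] - 1.0,  # 1.5 -> 0.5, matching the test assertion
            }
        )
    return results


# Example, mirroring the mocked response in the tests:
# normalize_hits([{"_id": "test_id", "_score": 1.5,
#                  "_source": {"text": "Test document", "metadata": {"source": "test"}}}])
# -> [{"id": "test_id", "context": "Test document",
#      "metadata": {"source": "test"}, "score": 0.5}]
```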
tests/memory/elasticsearch_storage_test.py (new file, 91 lines)
@@ -0,0 +1,91 @@
"""Test Elasticsearch storage functionality."""

import os
import unittest
from unittest.mock import MagicMock, patch

import pytest

from crewai.memory.storage.elasticsearch_storage import ElasticsearchStorage


@pytest.mark.skipif(
    os.environ.get("RUN_ELASTICSEARCH_TESTS") != "true",
    reason="Elasticsearch tests require RUN_ELASTICSEARCH_TESTS=true"
)
class TestElasticsearchStorage(unittest.TestCase):
    """Test Elasticsearch storage functionality."""

    def setUp(self):
        """Set up test fixtures."""
        self.es_mock = MagicMock()
        self.es_mock.indices.exists.return_value = False

        self.embedder_mock = MagicMock()
        self.embedder_mock.embed_documents.return_value = [[0.1, 0.2, 0.3]]

        self.es_patcher = patch(
            "crewai.memory.storage.elasticsearch_storage.Elasticsearch",
            return_value=self.es_mock
        )
        self.es_class_mock = self.es_patcher.start()

        self.storage = ElasticsearchStorage(
            type="test",
            embedder_config=self.embedder_mock
        )

    def tearDown(self):
        """Tear down test fixtures."""
        self.es_patcher.stop()

    def test_initialization(self):
        """Test initialization of Elasticsearch storage."""
        self.es_class_mock.assert_called_once()

        self.es_mock.indices.create.assert_called_once()

    def test_save(self):
        """Test saving to Elasticsearch storage."""
        self.storage.save("Test document", {"source": "test"})

        self.es_mock.index.assert_called_once()

        self.embedder_mock.embed_documents.assert_called_once_with(["Test document"])

    def test_search(self):
        """Test searching in Elasticsearch storage."""
        self.es_mock.search.return_value = {
            "hits": {
                "hits": [
                    {
                        "_id": "test_id",
                        "_score": 1.5,  # Score between 1-2 (Elasticsearch range)
                        "_source": {
                            "text": "Test document",
                            "metadata": {"source": "test"},
                        }
                    }
                ]
            }
        }

        results = self.storage.search("test query")

        self.es_mock.search.assert_called_once()

        self.assertEqual(len(results), 1)
        self.assertEqual(results[0]["id"], "test_id")
        self.assertEqual(results[0]["context"], "Test document")
        self.assertEqual(results[0]["metadata"], {"source": "test"})
        self.assertEqual(results[0]["score"], 0.5)  # Adjusted to 0-1 range

    def test_reset(self):
        """Test resetting Elasticsearch storage."""
        self.es_mock.indices.exists.return_value = True

        self.storage.reset()

        self.es_mock.indices.delete.assert_called_once()

        self.assertEqual(self.es_mock.indices.create.call_count, 2)
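Taken together, the assertions above describe the surface the mocked `ElasticsearchStorage` is expected to expose: index creation on construction, `save` embedding and indexing one value, `search` returning normalized result dicts, and `reset` deleting and recreating the index. The protocol-style sketch below is inferred from the tests, not copied from the shipped class; the `limit` parameter and default are assumptions.

```python
from typing import Any, Dict, List, Optional, Protocol


class MemoryStorageLike(Protocol):
    """Interface implied by the ElasticsearchStorage tests; a sketch, not the real class."""

    def save(self, value: str, metadata: Optional[Dict[str, Any]] = None) -> None:
        """Embed `value` and index a single document together with its metadata."""
        ...

    def search(self, query: str, limit: int = 3) -> List[Dict[str, Any]]:
        """Return hits as dicts with id, context, metadata and a 0-1 score."""
        ...

    def reset(self) -> None:
        """Delete the backing index and recreate it empty."""
        ...
```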
@@ -3,6 +3,7 @@
import hashlib
import json
import os
import time
from functools import partial
from typing import Tuple, Union
from unittest.mock import MagicMock, patch
@@ -1368,3 +1369,90 @@ def test_interpolate_valid_types():
    assert parsed["optional"] is None
    assert parsed["nested"]["flag"] is True
    assert parsed["nested"]["empty"] is None


def test_task_with_no_max_execution_time():
    researcher = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
        backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on doing research and analysis for a new customer.",
        allow_delegation=False,
        max_execution_time=None
    )

    task = Task(
        description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas.",
        agent=researcher,
    )

    with patch.object(Agent, "_execute_without_timeout", return_value="ok") as execute:
        result = task.execute_sync(agent=researcher)
        assert result.raw == "ok"
        execute.assert_called_once()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_with_max_execution_time():
    """Test that execution completes when max_execution_time is not exceeded."""
    from crewai.tools import tool

    @tool("what amazing tool", result_as_answer=True)
    def my_tool() -> str:
        "My tool"
        time.sleep(1)
        return "okay"

    researcher = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents. Use the tool provided to you.",
        backstory=(
            "You're an expert researcher, specialized in technology, software engineering, AI and startups. "
            "You work as a freelancer and are now working on doing research and analysis for a new customer."
        ),
        allow_delegation=False,
        tools=[my_tool],
        max_execution_time=4
    )

    task = Task(
        description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas.",
        agent=researcher,
    )

    result = task.execute_sync(agent=researcher)
    assert result.raw == "okay"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_with_max_execution_time_exceeded():
    """Test that execution raises TimeoutError when max_execution_time is exceeded."""
    from crewai.tools import tool

    @tool("what amazing tool", result_as_answer=True)
    def my_tool() -> str:
        "My tool"
        time.sleep(10)
        return "okay"

    researcher = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents. Use the tool provided to you.",
        backstory=(
            "You're an expert researcher, specialized in technology, software engineering, AI and startups. "
            "You work as a freelancer and are now working on doing research and analysis for a new customer."
        ),
        allow_delegation=False,
        tools=[my_tool],
        max_execution_time=1
    )

    task = Task(
        description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas.",
        agent=researcher,
    )

    with pytest.raises(TimeoutError):
        task.execute_sync(agent=researcher)
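The three tests above fix the expected behaviour of `max_execution_time`: with `None` the agent takes the `_execute_without_timeout` path, with a generous limit the 1-second tool call completes, and with a 1-second limit against a 10-second tool the call must raise `TimeoutError`. The sketch below shows one way such a cutoff could be enforced with the standard library; the helper name and wiring are assumptions for illustration, not the Agent internals under test.

```python
import concurrent.futures
from typing import Callable, Optional


def run_with_timeout(fn: Callable[[], str], max_execution_time: Optional[int]) -> str:
    """Run `fn`, raising TimeoutError if it runs longer than `max_execution_time` seconds.

    Sketch only: mirrors the behaviour the tests assert, not the shipped implementation.
    """
    if max_execution_time is None:
        # Analogous to the _execute_without_timeout path exercised in the first test.
        return fn()

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=max_execution_time)
        except concurrent.futures.TimeoutError as exc:
            # Note: the executor's shutdown still waits for the worker thread to finish
            # before this exception propagates out of the `with` block.
            raise TimeoutError("Task execution exceeded max_execution_time") from exc
```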
uv.lock (generated)
@@ -1,5 +1,4 @@
version = 1
revision = 1
requires-python = ">=3.10, <3.13"
resolution-markers = [
    "python_full_version < '3.11' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
@@ -628,6 +627,7 @@ dependencies = [
    { name = "blinker" },
    { name = "chromadb" },
    { name = "click" },
    { name = "elasticsearch" },
    { name = "instructor" },
    { name = "json-repair" },
    { name = "json5" },
@@ -710,6 +710,7 @@ requires-dist = [
    { name = "click", specifier = ">=8.1.7" },
    { name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.40.1" },
    { name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
    { name = "elasticsearch", specifier = ">=9.0.0" },
    { name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
    { name = "instructor", specifier = ">=1.3.3" },
    { name = "json-repair", specifier = ">=0.25.2" },
@@ -735,7 +736,6 @@ requires-dist = [
    { name = "tomli-w", specifier = ">=1.1.0" },
    { name = "uv", specifier = ">=0.4.25" },
]
provides-extras = ["tools", "embeddings", "agentops", "fastembed", "pdfplumber", "pandas", "openpyxl", "mem0", "docling", "aisuite"]

[package.metadata.requires-dev]
dev = [
@@ -1097,6 +1097,33 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/bb/84/4a2cab0e6adde6a85e7ba543862e5fc0250c51f3ac721a078a55cdcff250/easyocr-1.7.2-py3-none-any.whl", hash = "sha256:5be12f9b0e595d443c9c3d10b0542074b50f0ec2d98b141a109cd961fd1c177c", size = 2870178 },
]

[[package]]
name = "elastic-transport"
version = "8.17.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "certifi" },
    { name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/6a/54/d498a766ac8fa475f931da85a154666cc81a70f8eb4a780bc8e4e934e9ac/elastic_transport-8.17.1.tar.gz", hash = "sha256:5edef32ac864dca8e2f0a613ef63491ee8d6b8cfb52881fa7313ba9290cac6d2", size = 73425 }
wheels = [
    { url = "https://files.pythonhosted.org/packages/cf/cd/b71d5bc74cde7fc6fd9b2ff9389890f45d9762cbbbf81dc5e51fd7588c4a/elastic_transport-8.17.1-py3-none-any.whl", hash = "sha256:192718f498f1d10c5e9aa8b9cf32aed405e469a7f0e9d6a8923431dbb2c59fb8", size = 64969 },
]

[[package]]
name = "elasticsearch"
version = "9.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "elastic-transport" },
    { name = "python-dateutil" },
    { name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/26/ad/d76e88811e68d7bdd976c0ff6027e7c3b544a949c8d3de052adc5765e1a6/elasticsearch-9.0.0.tar.gz", hash = "sha256:c075ccdc7d5697e2a842a88418efdb6cf6732d7a62c77a25d60184db23fd1464", size = 823636 }
wheels = [
    { url = "https://files.pythonhosted.org/packages/4b/b7/e85bdb8bed719dbf92780264d6c381186ced8eb7acc88bbe37a996f87b03/elasticsearch-9.0.0-py3-none-any.whl", hash = "sha256:295425172043e5db723d55cb3a5e28622696ca7739b466b812ab12ac938b6249", size = 895793 },
]

[[package]]
name = "embedchain"
version = "0.1.125"
@@ -2988,6 +3015,7 @@ name = "nvidia-nccl-cu12"
version = "2.20.5"
source = { registry = "https://pypi.org/simple" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/c1/bb/d09dda47c881f9ff504afd6f9ca4f502ded6d8fc2f572cacc5e39da91c28/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_aarch64.whl", hash = "sha256:1fc150d5c3250b170b29410ba682384b14581db722b2531b0d8d33c595f33d01", size = 176238458 },
    { url = "https://files.pythonhosted.org/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl", hash = "sha256:057f6bf9685f75215d0c53bf3ac4a10b3e6578351de307abad9e18a99182af56", size = 176249402 },
]

@@ -2997,6 +3025,7 @@ version = "12.6.85"
source = { registry = "https://pypi.org/simple" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/9d/d7/c5383e47c7e9bf1c99d5bd2a8c935af2b6d705ad831a7ec5c97db4d82f4f/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:eedc36df9e88b682efe4309aa16b5b4e78c2407eac59e8c10a6a47535164369a", size = 19744971 },
    { url = "https://files.pythonhosted.org/packages/31/db/dc71113d441f208cdfe7ae10d4983884e13f464a6252450693365e166dcf/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cf4eaa7d4b6b543ffd69d6abfb11efdeb2db48270d94dfd3a452c24150829e41", size = 19270338 },
]

[[package]]