Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-03-07 04:18:13 +00:00

Compare commits: gl/feat/sy...docs/train (4 commits: e08d210bb8, 34100d290b, 46f8fa59c1, 4867dced0e)
@@ -320,7 +320,6 @@
       "en/enterprise/guides/update-crew",
       "en/enterprise/guides/enable-crew-studio",
       "en/enterprise/guides/azure-openai-setup",
-      "en/enterprise/guides/automation-triggers",
       "en/enterprise/guides/hubspot-trigger",
       "en/enterprise/guides/react-component-export",
       "en/enterprise/guides/salesforce-trigger",
@@ -659,7 +658,6 @@
       "pt-BR/enterprise/guides/update-crew",
       "pt-BR/enterprise/guides/enable-crew-studio",
       "pt-BR/enterprise/guides/azure-openai-setup",
-      "pt-BR/enterprise/guides/automation-triggers",
       "pt-BR/enterprise/guides/hubspot-trigger",
       "pt-BR/enterprise/guides/react-component-export",
       "pt-BR/enterprise/guides/salesforce-trigger",
@@ -1009,7 +1007,6 @@
       "ko/enterprise/guides/update-crew",
       "ko/enterprise/guides/enable-crew-studio",
       "ko/enterprise/guides/azure-openai-setup",
-      "ko/enterprise/guides/automation-triggers",
       "ko/enterprise/guides/hubspot-trigger",
       "ko/enterprise/guides/react-component-export",
       "ko/enterprise/guides/salesforce-trigger",
@@ -59,7 +59,6 @@ crew = Crew(
 | **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
 | **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
 | **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
-| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |

 ## Creating Tasks

@@ -453,7 +452,7 @@ task = Task(
     expected_output="A valid JSON object",
     agent=analyst,
     guardrail=validate_json_output,
-    guardrail_max_retries=3  # Limit retry attempts
+    max_retries=3  # Limit retry attempts
 )
 ```
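The hunks above reference `validate_json_output` without showing it. As a hedged illustration (not the documented implementation), a guardrail is a callable returning a `(success, data)` tuple, matching the `Callable[[Any], Tuple[bool, Any]]` annotation that appears later in this changeset:

```python
import json
from typing import Any, Tuple

def validate_json_output(result: Any) -> Tuple[bool, Any]:
    """Hypothetical guardrail: pass the parsed JSON through, or fail with a reason."""
    try:
        return True, json.loads(str(result))
    except json.JSONDecodeError as e:
        return False, f"Invalid JSON: {e}"
```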
@@ -1,171 +0,0 @@
---
title: "Automation Triggers"
description: "Automatically execute your CrewAI workflows when specific events occur in connected integrations"
icon: "bolt"
---

Automation triggers enable you to automatically run your CrewAI deployments when specific events occur in your connected integrations, creating powerful event-driven workflows that respond to real-time changes in your business systems.

## Overview

With automation triggers, you can:

- **Respond to real-time events** - Automatically execute workflows when specific conditions are met
- **Integrate with external systems** - Connect with platforms like Gmail, Outlook, OneDrive, JIRA, Slack, Stripe and more
- **Scale your automation** - Handle high-volume events without manual intervention
- **Maintain context** - Access trigger data within your crews and flows

## Managing Automation Triggers

### Viewing Available Triggers

To access and manage your automation triggers:

1. Navigate to your deployment in the CrewAI dashboard
2. Click on the **Triggers** tab to view all available trigger integrations

<Frame>
  <img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
</Frame>

This view shows all the trigger integrations available for your deployment, along with their current connection status.

### Enabling and Disabling Triggers

Each trigger can be easily enabled or disabled using the toggle switch:

<Frame>
  <img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>

- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
- **Disabled (gray toggle)**: The trigger is inactive and will not respond to events

Simply click the toggle to change the trigger state. Changes take effect immediately.

### Monitoring Trigger Executions

Track the performance and history of your triggered executions:

<Frame>
  <img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>

## Building Automation

Before building your automation, it's helpful to understand the structure of trigger payloads that your crews and flows will receive.

### Payload Samples Repository

We maintain a comprehensive repository with sample payloads from various trigger sources to help you build and test your automations:

**🔗 [CrewAI Enterprise Trigger Payload Samples](https://github.com/crewAIInc/crewai-enterprise-trigger-payload-samples)**

This repository contains:

- **Real payload examples** from different trigger sources (Gmail, Google Drive, etc.)
- **Payload structure documentation** showing the format and available fields
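For illustration, here is a minimal sketch of the payload shape implied by the flow code below. Only the top-level `id` and `payload` keys come from this document's own examples; the inner fields are hypothetical stand-ins for what a real integration (e.g., Gmail) would send:

```python
# Hypothetical trigger payload; only `id` and `payload` are attested below.
sample_trigger_payload = {
    "id": "evt_123",                      # trigger event id
    "payload": {                          # integration-specific event data
        "subject": "Quarterly report",    # e.g., an email subject (assumed)
        "sender": "someone@example.com",  # assumed field
    },
}
```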
### Triggers with Crew

Your existing crew definitions work seamlessly with triggers; you just need a task that parses the received payload:

```python
@CrewBase
class MyAutomatedCrew:
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
        )

    @task
    def parse_trigger_payload(self) -> Task:
        return Task(
            config=self.tasks_config['parse_trigger_payload'],
            agent=self.researcher(),
        )

    @task
    def analyze_trigger_content(self) -> Task:
        return Task(
            config=self.tasks_config['analyze_trigger_data'],
            agent=self.researcher(),
        )
```

The crew automatically receives the trigger payload and can access it through the standard CrewAI context mechanisms.
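To exercise this locally, the payload can be passed as a kickoff input under the `crewai_trigger_payload` key, mirroring the flow-to-crew pattern shown later in this file (the payload contents are the hypothetical sample from above):

```python
crew = MyAutomatedCrew().crew()
# Simulate a trigger delivery by supplying the payload as a kickoff input.
result = crew.kickoff(
    inputs={"crewai_trigger_payload": sample_trigger_payload}
)
```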
### Integration with Flows

For flows, you have more control over how trigger data is handled:

#### Accessing Trigger Payload

All `@start()` methods in your flows will accept an additional parameter called `crewai_trigger_payload`:

```python
from crewai.flow import Flow, start, listen


class MyAutomatedFlow(Flow):
    @start()
    def handle_trigger(self, crewai_trigger_payload: dict = None):
        """
        This start method can receive trigger data
        """
        if crewai_trigger_payload:
            # Process the trigger data
            trigger_id = crewai_trigger_payload.get('id')
            event_data = crewai_trigger_payload.get('payload', {})

            # Store in flow state for use by other methods
            self.state.trigger_id = trigger_id
            self.state.trigger_type = event_data

            return event_data

        # Handle manual execution
        return None

    @listen(handle_trigger)
    def process_data(self, trigger_data):
        """
        Process the data from the trigger
        """
        # ... process the trigger
```
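A hedged usage sketch: when running such a flow yourself, the same key can be supplied via `kickoff` inputs; on triggered executions the platform injects it automatically (the flow source in this changeset notes that `crewai_trigger_payload` is injected when available in flow inputs):

```python
flow = MyAutomatedFlow()
# Manual run that simulates a triggered execution.
result = flow.kickoff(
    inputs={"crewai_trigger_payload": sample_trigger_payload}
)
```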
#### Triggering Crews from Flows

When kicking off a crew within a flow that was triggered, pass the trigger payload along as-is:

```python
@start()
def delegate_to_crew(self, crewai_trigger_payload: dict = None):
    """
    Delegate processing to a specialized crew
    """
    crew = MySpecializedCrew()

    # Pass the trigger payload to the crew
    result = crew.crew().kickoff(
        inputs={
            'a_custom_parameter': "custom_value",
            'crewai_trigger_payload': crewai_trigger_payload
        },
    )

    return result
```
## Troubleshooting

**Trigger not firing:**
- Verify the trigger is enabled
- Check integration connection status

**Execution failures:**
- Check the execution logs for error details
- If you are developing locally, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload

Automation triggers transform your CrewAI deployments into responsive, event-driven systems that integrate seamlessly with your existing business processes and tools.
@@ -1,13 +1,13 @@
 ---
 title: Weaviate Vector Search
-description: The `WeaviateVectorSearchTool` is designed to search a Weaviate vector database for semantically similar documents using hybrid search.
+description: The `WeaviateVectorSearchTool` is designed to search a Weaviate vector database for semantically similar documents.
 icon: network-wired
 ---

 ## Overview

-The `WeaviateVectorSearchTool` is specifically crafted for conducting semantic searches within documents stored in a Weaviate vector database. This tool allows you to find semantically similar documents to a given query, leveraging the power of vector and keyword search for more accurate and contextually relevant search results.
+The `WeaviateVectorSearchTool` is specifically crafted for conducting semantic searches within documents stored in a Weaviate vector database. This tool allows you to find semantically similar documents to a given query, leveraging the power of vector embeddings for more accurate and contextually relevant search results.

 [Weaviate](https://weaviate.io/) is a vector database that stores and queries vector embeddings, enabling semantic search capabilities.
@@ -39,7 +39,6 @@ from crewai_tools import WeaviateVectorSearchTool
 tool = WeaviateVectorSearchTool(
     collection_name='example_collections',
     limit=3,
-    alpha=0.75,
     weaviate_cluster_url="https://your-weaviate-cluster-url.com",
     weaviate_api_key="your-weaviate-api-key",
 )
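A hedged usage sketch (assuming the standard CrewAI tool `run` interface; the query string is illustrative):

```python
# Search the collection for documents semantically similar to the query.
results = tool.run(query="upcoming product launch announcements")
print(results)
```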
@@ -64,7 +63,6 @@ The `WeaviateVectorSearchTool` accepts the following parameters:
 - **weaviate_cluster_url**: Required. The URL of the Weaviate cluster.
 - **weaviate_api_key**: Required. The API key for the Weaviate cluster.
 - **limit**: Optional. The number of results to return. Default is `3`.
-- **alpha**: Optional. Controls the weighting between vector and keyword (BM25) search. alpha = 0 -> BM25 only, alpha = 1 -> vector search only. Default is `0.75`.
 - **vectorizer**: Optional. The vectorizer to use. If not provided, it will use `text2vec_openai` with the `nomic-embed-text` model.
 - **generative_model**: Optional. The generative model to use. If not provided, it will use OpenAI's `gpt-4o`.
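The `alpha` weighting removed above corresponds to Weaviate's hybrid score blend. A minimal sketch of the arithmetic, simplified in that Weaviate normalizes the two scores before fusing them:

```python
def hybrid_score(vector_score: float, bm25_score: float, alpha: float = 0.75) -> float:
    """Simplified fusion: alpha weights the vector side, (1 - alpha) the keyword side."""
    return alpha * vector_score + (1 - alpha) * bm25_score

# alpha = 0.75 leans three-to-one toward vector similarity over BM25 keyword match.
assert hybrid_score(0.8, 0.4) == 0.75 * 0.8 + 0.25 * 0.4
```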
@@ -80,7 +78,6 @@ from weaviate.classes.config import Configure
 tool = WeaviateVectorSearchTool(
     collection_name='example_collections',
     limit=3,
-    alpha=0.75,
     vectorizer=Configure.Vectorizer.text2vec_openai(model="nomic-embed-text"),
     generative_model=Configure.Generative.openai(model="gpt-4o-mini"),
     weaviate_cluster_url="https://your-weaviate-cluster-url.com",
@@ -131,7 +128,6 @@ with test_docs.batch.dynamic() as batch:
 tool = WeaviateVectorSearchTool(
     collection_name='example_collections',
     limit=3,
-    alpha=0.75,
     weaviate_cluster_url="https://your-weaviate-cluster-url.com",
     weaviate_api_key="your-weaviate-api-key",
 )
@@ -149,7 +145,6 @@ from crewai_tools import WeaviateVectorSearchTool
 weaviate_tool = WeaviateVectorSearchTool(
     collection_name='example_collections',
     limit=3,
-    alpha=0.75,
     weaviate_cluster_url="https://your-weaviate-cluster-url.com",
     weaviate_api_key="your-weaviate-api-key",
 )
Binary file not shown (deleted image; before: 142 KiB)
Binary file not shown (deleted image; before: 330 KiB)
Binary file not shown (deleted image; before: 133 KiB)
@@ -59,7 +59,6 @@ crew = Crew(
 | **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
 | **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to execute after task completion. |
 | **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function that validates task output before proceeding to the next task. |
-| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |

 ## Creating Tasks

@@ -449,7 +448,7 @@ task = Task(
     expected_output="A valid JSON object",
     agent=analyst,
     guardrail=validate_json_output,
-    guardrail_max_retries=3  # Limit retry attempts
+    max_retries=3  # Limit retry attempts
 )
 ```

@@ -900,4 +899,4 @@ except RuntimeError as e:
 Tasks are the driving force behind the actions of CrewAI agents.
 By properly defining tasks and their outputs, you lay the groundwork for agents to work effectively, whether independently or as a collaborative unit.
 Equipping tasks with the right tools, understanding the execution process, and following robust validation practices are all essential to unlocking CrewAI's full potential.
 This ensures that agents are effectively prepared for their assignments and that tasks are carried out as intended.
@@ -1,171 +0,0 @@
---
title: "Automation Triggers"
description: "Automatically execute your CrewAI workflows when specific events occur in connected integrations"
icon: "bolt"
---

Automation triggers let you automatically run your CrewAI deployments when specific events occur in your connected integrations, creating powerful event-driven workflows that respond to real-time changes in your business systems.

## Overview

With automation triggers, you can:

- **Respond to real-time events** - Automatically execute workflows when specific conditions are met
- **Integrate with external systems** - Connect with platforms like Gmail, Outlook, OneDrive, JIRA, Slack, Stripe and more
- **Scale your automation** - Handle high-volume events without manual intervention
- **Maintain context** - Access trigger data within your crews and flows

## Managing Automation Triggers

### Viewing Available Triggers

To access and manage your automation triggers:

1. Navigate to your deployment in the CrewAI dashboard
2. Click the **Triggers** tab to view all available trigger integrations

<Frame>
  <img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
</Frame>

This view shows all trigger integrations available for your deployment, along with their current connection status.

### Enabling and Disabling Triggers

Each trigger can be easily enabled or disabled using the toggle switch:

<Frame>
  <img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>

- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
- **Disabled (gray toggle)**: The trigger is inactive and will not respond to events

Simply click the toggle to change the trigger state. Changes take effect immediately.

### Monitoring Trigger Executions

Track the performance and history of your triggered executions:

<Frame>
  <img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>

## Building Automation

Before building your automation, it helps to understand the structure of the trigger payloads your crews and flows will receive.

### Payload Samples Repository

To help you build and test your automations, we maintain a comprehensive repository with sample payloads from various trigger sources:

**🔗 [CrewAI Enterprise Trigger Payload Samples](https://github.com/crewAIInc/crewai-enterprise-trigger-payload-samples)**

This repository contains:

- **Real payload examples** from different trigger sources (Gmail, Google Drive, etc.)
- **Payload structure documentation** showing the format and available fields

### Triggers with Crew

Your existing crew definitions work seamlessly with triggers; you only need a task that parses the received payload:

```python
@CrewBase
class MyAutomatedCrew:
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
        )

    @task
    def parse_trigger_payload(self) -> Task:
        return Task(
            config=self.tasks_config['parse_trigger_payload'],
            agent=self.researcher(),
        )

    @task
    def analyze_trigger_content(self) -> Task:
        return Task(
            config=self.tasks_config['analyze_trigger_data'],
            agent=self.researcher(),
        )
```

The crew automatically receives the trigger payload and can access it through the standard CrewAI context mechanisms.

### Integration with Flows

For flows, you have finer control over how trigger data is handled:

#### Accessing the Trigger Payload

All `@start()` methods in a flow accept an additional parameter named `crewai_trigger_payload`:

```python
from crewai.flow import Flow, start, listen


class MyAutomatedFlow(Flow):
    @start()
    def handle_trigger(self, crewai_trigger_payload: dict = None):
        """
        This start method can receive trigger data
        """
        if crewai_trigger_payload:
            # Process the trigger data
            trigger_id = crewai_trigger_payload.get('id')
            event_data = crewai_trigger_payload.get('payload', {})

            # Store in flow state for use by other methods
            self.state.trigger_id = trigger_id
            self.state.trigger_type = event_data

            return event_data

        # Handle manual execution
        return None

    @listen(handle_trigger)
    def process_data(self, trigger_data):
        """
        Process the data from the trigger
        """
        # ... process the trigger
```

#### Triggering Crews from Flows

When starting a crew inside a triggered flow, pass the trigger payload along:

```python
@start()
def delegate_to_crew(self, crewai_trigger_payload: dict = None):
    """
    Delegate processing to a specialized crew
    """
    crew = MySpecializedCrew()

    # Pass the trigger payload to the crew
    result = crew.crew().kickoff(
        inputs={
            'a_custom_parameter': "custom_value",
            'crewai_trigger_payload': crewai_trigger_payload
        },
    )

    return result
```

## Troubleshooting

**Trigger not firing:**
- Verify the trigger is enabled
- Check the integration's connection status

**Execution failures:**
- Check the execution logs for error details
- If you are developing, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload

Automation triggers transform your CrewAI deployments into responsive, event-driven systems that integrate seamlessly with your existing business processes and tools.
@@ -59,7 +59,6 @@ crew = Crew(
 | **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for the task output. |
 | **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
 | **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to the next task. |
-| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |

 ## Creating Tasks

@@ -451,7 +450,7 @@ task = Task(
     expected_output="A valid JSON object",
     agent=analyst,
     guardrail=validate_json_output,
-    guardrail_max_retries=3  # Limit retry attempts
+    max_retries=3  # Limit retry attempts
 )
 ```

@@ -936,7 +935,7 @@ task = Task(
     description="Generate data",
     expected_output="Valid data",
     guardrail=validate_data,
-    guardrail_max_retries=5  # Overrides the default retry limit
+    max_retries=5  # Overrides the default retry limit
 )
 ```
@@ -1,171 +0,0 @@
---
title: "Automation Triggers"
description: "Automatically execute your CrewAI workflows when specific events occur in connected integrations"
icon: "bolt"
---

Automation triggers let you run your CrewAI deployments automatically when specific events occur in your connected integrations, creating powerful event-driven workflows that respond to real-time changes in your business systems.

## Overview

With automation triggers, you can:

- **Respond to real-time events** - Execute workflows automatically when specific conditions are met
- **Integrate with external systems** - Connect with platforms like Gmail, Outlook, OneDrive, JIRA, Slack, Stripe and more
- **Scale your automation** - Handle high-volume events without manual intervention
- **Maintain context** - Access trigger data within your crews and flows

## Managing Automation Triggers

### Viewing Available Triggers

To access and manage your automation triggers:

1. Navigate to your deployment in the CrewAI dashboard
2. Click the **Triggers** tab to view all available trigger integrations

<Frame>
  <img src="/images/enterprise/list-available-triggers.png" alt="List of available automation triggers" />
</Frame>

This view shows all trigger integrations available for your deployment, along with their current connection status.

### Enabling and Disabling Triggers

Each trigger can be easily enabled or disabled using the toggle switch:

<Frame>
  <img src="/images/enterprise/trigger-selected.png" alt="Enable or disable triggers with toggle" />
</Frame>

- **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur
- **Disabled (gray toggle)**: The trigger is inactive and will not respond to events

Simply click the toggle to change the trigger state. Changes take effect immediately.

### Monitoring Trigger Executions

Track the performance and history of your triggered executions:

<Frame>
  <img src="/images/enterprise/list-executions.png" alt="List of executions triggered by automation" />
</Frame>

## Building Automation

Before building your automation, it helps to understand the structure of the trigger payloads your crews and flows will receive.

### Payload Samples Repository

To help you build and test your automations, we maintain a comprehensive repository with sample payloads from various trigger sources:

**🔗 [CrewAI Enterprise Trigger Payload Samples](https://github.com/crewAIInc/crewai-enterprise-trigger-payload-samples)**

This repository contains:

- **Real payload examples** from different trigger sources (Gmail, Google Drive, etc.)
- **Payload structure documentation** showing the format and available fields

### Triggers with Crew

Your existing crew definitions work seamlessly with triggers; you only need a task that parses the received payload:

```python
@CrewBase
class MinhaCrewAutomatizada:
    @agent
    def pesquisador(self) -> Agent:
        return Agent(
            config=self.agents_config['pesquisador'],
        )

    @task
    def analisar_payload_trigger(self) -> Task:
        return Task(
            config=self.tasks_config['analisar_payload_trigger'],
            agent=self.pesquisador(),
        )

    @task
    def analisar_conteudo_trigger(self) -> Task:
        return Task(
            config=self.tasks_config['analisar_dados_trigger'],
            agent=self.pesquisador(),
        )
```

The crew automatically receives the trigger payload and can access it through the standard CrewAI context mechanisms.

### Integration with Flows

For flows, you have finer control over how trigger data is handled:

#### Accessing the Trigger Payload

All `@start()` methods in your flows accept an additional parameter named `crewai_trigger_payload`:

```python
from crewai.flow import Flow, start, listen


class MeuFlowAutomatizado(Flow):
    @start()
    def lidar_com_trigger(self, crewai_trigger_payload: dict = None):
        """
        This start method can receive trigger data
        """
        if crewai_trigger_payload:
            # Process the trigger data
            trigger_id = crewai_trigger_payload.get('id')
            dados_evento = crewai_trigger_payload.get('payload', {})

            # Store in flow state for use by other methods
            self.state.trigger_id = trigger_id
            self.state.trigger_type = dados_evento

            return dados_evento

        # Handle manual execution
        return None

    @listen(lidar_com_trigger)
    def processar_dados(self, dados_trigger):
        """
        Process the data from the trigger
        """
        # ... process the trigger
```

#### Triggering Crews from Flows

When starting a crew inside a triggered flow, pass the trigger payload along:

```python
@start()
def delegar_para_crew(self, crewai_trigger_payload: dict = None):
    """
    Delegate processing to a specialized crew
    """
    crew = MinhaCrewEspecializada()

    # Pass the trigger payload to the crew
    resultado = crew.crew().kickoff(
        inputs={
            'parametro_personalizado': "valor_personalizado",
            'crewai_trigger_payload': crewai_trigger_payload
        },
    )

    return resultado
```

## Troubleshooting

**Trigger not firing:**
- Verify the trigger is enabled
- Check the integration's connection status

**Execution failures:**
- Check the execution logs for error details
- If you are developing, make sure the inputs include the `crewai_trigger_payload` parameter with the correct payload

Automation triggers transform your CrewAI deployments into responsive, event-driven systems that integrate seamlessly with your existing business processes and tools.
@@ -68,9 +68,6 @@ docling = [
 aisuite = [
     "aisuite>=0.1.10",
 ]
-qdrant = [
-    "qdrant-client[fastembed]>=1.14.3",
-]

 [tool.uv]
 dev-dependencies = [
@@ -1,18 +1,7 @@
 import shutil
 import subprocess
 import time
-from typing import (
-    Any,
-    Callable,
-    Dict,
-    List,
-    Literal,
-    Optional,
-    Sequence,
-    Tuple,
-    Type,
-    Union,
-)
+from typing import Any, Callable, Dict, List, Literal, Optional, Sequence, Tuple, Type, Union

 from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -173,7 +162,7 @@ class Agent(BaseAgent):
     )
     guardrail: Optional[Union[Callable[[Any], Tuple[bool, Any]], str]] = Field(
         default=None,
-        description="Function or string description of a guardrail to validate agent output",
+        description="Function or string description of a guardrail to validate agent output"
     )
     guardrail_max_retries: int = Field(
         default=3, description="Maximum number of retries when guardrail fails"
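A hedged usage sketch of the fields in this hunk (the `Tuple[bool, Any]` contract comes from the annotation above; role, goal, and backstory are placeholder values):

```python
from typing import Any, Tuple

def length_guardrail(output: Any) -> Tuple[bool, Any]:
    """Hypothetical guardrail: reject overly short answers."""
    text = str(output)
    return (True, text) if len(text) >= 50 else (False, "Answer too short; elaborate.")

# Assumed construction, per the Field definitions above.
agent = Agent(
    role="Analyst",
    goal="Produce thorough answers",
    backstory="Detail-oriented researcher",
    guardrail=length_guardrail,
    guardrail_max_retries=3,
)
```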
@@ -287,7 +276,7 @@ class Agent(BaseAgent):
         self._inject_date_to_task(task)

         if self.tools_handler:
-            self.tools_handler.last_used_tool = None
+            self.tools_handler.last_used_tool = {}  # type: ignore # Incompatible types in assignment (expression has type "dict[Never, Never]", variable has type "ToolCalling")

         task_prompt = task.prompt()
@@ -320,20 +309,15 @@ class Agent(BaseAgent):
                event=MemoryRetrievalStartedEvent(
                    task_id=str(task.id) if task else None,
                    source_type="agent",
                    from_agent=self,
                    from_task=task,
                ),
            )

            start_time = time.time()

            contextual_memory = ContextualMemory(
                self.crew._short_term_memory,
                self.crew._long_term_memory,
                self.crew._entity_memory,
                self.crew._external_memory,
                agent=self,
                task=task,
            )
            memory = contextual_memory.build_context_for_task(task, context)
            if memory.strip() != "":
@@ -346,14 +330,13 @@ class Agent(BaseAgent):
                        memory_content=memory,
                        retrieval_time_ms=(time.time() - start_time) * 1000,
                        source_type="agent",
                        from_agent=self,
                        from_task=task,
                    ),
                )
            knowledge_config = (
                self.knowledge_config.model_dump() if self.knowledge_config else {}
            )


            if self.knowledge or (self.crew and self.crew.knowledge):
                crewai_event_bus.emit(
                    self,
@@ -43,6 +43,7 @@ class CrewAgentExecutorMixin:
                     metadata={
                         "observation": self.task.description,
                     },
+                    agent=self.agent.role,
                 )
             except Exception as e:
                 print(f"Failed to add to short term memory: {e}")
@@ -64,6 +65,7 @@ class CrewAgentExecutorMixin:
                         "description": self.task.description,
                         "messages": self.messages,
                     },
+                    agent=self.agent.role,
                 )
             except Exception as e:
                 print(f"Failed to add to external memory: {e}")
@@ -98,8 +100,8 @@ class CrewAgentExecutorMixin:
                 )
                 self.crew._long_term_memory.save(long_term_memory)

-                entity_memories = [
-                    EntityMemoryItem(
+                for entity in evaluation.entities:
+                    entity_memory = EntityMemoryItem(
                         name=entity.name,
                         type=entity.type,
                         description=entity.description,
@@ -107,10 +109,7 @@ class CrewAgentExecutorMixin:
                             [f"- {r}" for r in entity.relationships]
                         ),
                     )
-                    for entity in evaluation.entities
-                ]
-                if entity_memories:
-                    self.crew._entity_memory.save(entity_memories)
+                    self.crew._entity_memory.save(entity_memory)
             except AttributeError as e:
                 print(f"Missing attributes for long term memory: {e}")
                 pass
@@ -159,9 +158,7 @@ class CrewAgentExecutorMixin:
             self._printer.print(content=prompt, color="bold_yellow")
             response = input()
             if response.strip() != "":
-                self._printer.print(
-                    content="\nProcessing your feedback...", color="cyan"
-                )
+                self._printer.print(content="\nProcessing your feedback...", color="cyan")
                 return response
         finally:
             event_listener.formatter.resume_live_updates()
@@ -8,13 +8,13 @@ from .cache.cache_handler import CacheHandler
 class ToolsHandler:
     """Callback handler for tool usage."""

-    last_used_tool: Optional[ToolCalling] = None
+    last_used_tool: ToolCalling = {}  # type: ignore # BUG?: Incompatible types in assignment (expression has type "Dict[...]", variable has type "ToolCalling")
     cache: Optional[CacheHandler]

     def __init__(self, cache: Optional[CacheHandler] = None):
         """Initialize the callback handler."""
         self.cache = cache
-        self.last_used_tool = None
+        self.last_used_tool = {}  # type: ignore # BUG?: same as above

     def on_tool_use(
         self,
@@ -1 +1,6 @@
 ALGORITHMS = ["RS256"]
+
+#TODO: The AUTH0 constants should be removed after WorkOS migration is completed
+AUTH0_DOMAIN = "crewai.us.auth0.com"
+AUTH0_CLIENT_ID = "DEVC5Fw6NlRoSzmDCcOhVq85EfLBjKa8"
+AUTH0_AUDIENCE = "https://crewai.us.auth0.com/api/v2/"
@@ -7,27 +7,24 @@ from rich.console import Console
 from pydantic import BaseModel, Field

-from .utils import validate_jwt_token
-from crewai.cli.shared.token_manager import TokenManager
+from .utils import TokenManager, validate_jwt_token
 from urllib.parse import quote
 from crewai.cli.plus_api import PlusAPI
 from crewai.cli.config import Settings
+from crewai.cli.authentication.constants import (
+    AUTH0_AUDIENCE,
+    AUTH0_CLIENT_ID,
+    AUTH0_DOMAIN,
+)

 console = Console()


 class Oauth2Settings(BaseModel):
-    provider: str = Field(
-        description="OAuth2 provider used for authentication (e.g., workos, okta, auth0)."
-    )
-    client_id: str = Field(
-        description="OAuth2 client ID issued by the provider, used during authentication requests."
-    )
-    domain: str = Field(
-        description="OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens."
-    )
-    audience: Optional[str] = Field(
-        description="OAuth2 audience value, typically used to identify the target API or resource.",
-        default=None,
-    )
+    provider: str = Field(description="OAuth2 provider used for authentication (e.g., workos, okta, auth0).")
+    client_id: str = Field(description="OAuth2 client ID issued by the provider, used during authentication requests.")
+    domain: str = Field(description="OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens.")
+    audience: Optional[str] = Field(description="OAuth2 audience value, typically used to identify the target API or resource.", default=None)

     @classmethod
     def from_settings(cls):
@@ -47,15 +44,11 @@ class ProviderFactory:
         settings = settings or Oauth2Settings.from_settings()

         import importlib

-        module = importlib.import_module(
-            f"crewai.cli.authentication.providers.{settings.provider.lower()}"
-        )
+        module = importlib.import_module(f"crewai.cli.authentication.providers.{settings.provider.lower()}")
         provider = getattr(module, f"{settings.provider.capitalize()}Provider")

         return provider(settings)


 class AuthenticationCommand:
     def __init__(self):
         self.token_manager = TokenManager()
@@ -65,12 +58,26 @@ class AuthenticationCommand:
         """Sign up to CrewAI+"""
         console.print("Signing in to CrewAI Enterprise...\n", style="bold blue")

+        # TODO: WORKOS - Next line and conditional are temporary until migration to WorkOS is complete.
+        user_provider = self._determine_user_provider()
+        if user_provider == "auth0":
+            settings = Oauth2Settings(
+                provider="auth0",
+                client_id=AUTH0_CLIENT_ID,
+                domain=AUTH0_DOMAIN,
+                audience=AUTH0_AUDIENCE
+            )
+            self.oauth2_provider = ProviderFactory.from_settings(settings)
+        # End of temporary code.
+
         device_code_data = self._get_device_code()
         self._display_auth_instructions(device_code_data)

         return self._poll_for_token(device_code_data)

-    def _get_device_code(self) -> Dict[str, Any]:
+    def _get_device_code(
+        self
+    ) -> Dict[str, Any]:
         """Get the device code to authenticate the user."""

         device_code_payload = {
@@ -79,9 +86,7 @@ class AuthenticationCommand:
             "audience": self.oauth2_provider.get_audience(),
         }
         response = requests.post(
-            url=self.oauth2_provider.get_authorize_url(),
-            data=device_code_payload,
-            timeout=20,
+            url=self.oauth2_provider.get_authorize_url(), data=device_code_payload, timeout=20
         )
         response.raise_for_status()
         return response.json()
@@ -92,7 +97,9 @@ class AuthenticationCommand:
         console.print("2. Enter the following code: ", device_code_data["user_code"])
         webbrowser.open(device_code_data["verification_uri_complete"])

-    def _poll_for_token(self, device_code_data: Dict[str, Any]) -> None:
+    def _poll_for_token(
+        self, device_code_data: Dict[str, Any]
+    ) -> None:
         """Polls the server for the token until it is received, or max attempts are reached."""

         token_payload = {
@@ -105,9 +112,7 @@ class AuthenticationCommand:

         attempts = 0
         while True and attempts < 10:
-            response = requests.post(
-                self.oauth2_provider.get_token_url(), data=token_payload, timeout=30
-            )
+            response = requests.post(self.oauth2_provider.get_token_url(), data=token_payload, timeout=30)
             token_data = response.json()

             if response.status_code == 200:
@@ -187,3 +192,30 @@ class AuthenticationCommand:
             "\nRun [bold]crewai login[/bold] to try logging in again.\n",
             style="yellow",
         )
+
+    # TODO: WORKOS - This method is temporary until migration to WorkOS is complete.
+    def _determine_user_provider(self) -> str:
+        """Determine which provider to use for authentication."""
+
+        console.print(
+            "Enter your CrewAI Enterprise account email: ", style="bold blue", end=""
+        )
+        email = input()
+        email_encoded = quote(email)
+
+        # It's not correct to call this method directly, but it's temporary until migration is complete.
+        response = PlusAPI("")._make_request(
+            "GET", f"/crewai_plus/api/v1/me/provider?email={email_encoded}"
+        )
+
+        if response.status_code == 200:
+            if response.json().get("provider") == "auth0":
+                return "auth0"
+            else:
+                return "workos"
+        else:
+            console.print(
+                "Error: Failed to authenticate with crewai enterprise. Ensure that you are using the latest crewai version and please try again. If the problem persists, contact support@crewai.com.",
+                style="red",
+            )
+            raise SystemExit
@@ -1,4 +1,4 @@
-from crewai.cli.shared.token_manager import TokenManager
+from .utils import TokenManager


 class AuthError(Exception):
@@ -1,5 +1,12 @@
+import json
+import os
+import sys
+from datetime import datetime
+from pathlib import Path
+from typing import Optional
 import jwt
 from jwt import PyJWKClient
+from cryptography.fernet import Fernet


 def validate_jwt_token(
@@ -60,3 +67,118 @@ def validate_jwt_token(
        raise Exception(f"JWKS or key processing error: {str(e)}")
    except jwt.InvalidTokenError as e:
        raise Exception(f"Invalid token: {str(e)}")


class TokenManager:
    def __init__(self, file_path: str = "tokens.enc") -> None:
        """
        Initialize the TokenManager class.

        :param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
        """
        self.file_path = file_path
        self.key = self._get_or_create_key()
        self.fernet = Fernet(self.key)

    def _get_or_create_key(self) -> bytes:
        """
        Get or create the encryption key.

        :return: The encryption key.
        """
        key_filename = "secret.key"
        key = self.read_secure_file(key_filename)

        if key is not None:
            return key

        new_key = Fernet.generate_key()
        self.save_secure_file(key_filename, new_key)
        return new_key

    def save_tokens(self, access_token: str, expires_at: int) -> None:
        """
        Save the access token and its expiration time.

        :param access_token: The access token to save.
        :param expires_at: The UNIX timestamp of the expiration time.
        """
        expiration_time = datetime.fromtimestamp(expires_at)
        data = {
            "access_token": access_token,
            "expiration": expiration_time.isoformat(),
        }
        encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
        self.save_secure_file(self.file_path, encrypted_data)

    def get_token(self) -> Optional[str]:
        """
        Get the access token if it is valid and not expired.

        :return: The access token if valid and not expired, otherwise None.
        """
        encrypted_data = self.read_secure_file(self.file_path)

        decrypted_data = self.fernet.decrypt(encrypted_data)  # type: ignore
        data = json.loads(decrypted_data)

        expiration = datetime.fromisoformat(data["expiration"])
        if expiration <= datetime.now():
            return None

        return data["access_token"]

    def get_secure_storage_path(self) -> Path:
        """
        Get the secure storage path based on the operating system.

        :return: The secure storage path.
        """
        if sys.platform == "win32":
            # Windows: Use %LOCALAPPDATA%
            base_path = os.environ.get("LOCALAPPDATA")
        elif sys.platform == "darwin":
            # macOS: Use ~/Library/Application Support
            base_path = os.path.expanduser("~/Library/Application Support")
        else:
            # Linux and other Unix-like: Use ~/.local/share
            base_path = os.path.expanduser("~/.local/share")

        app_name = "crewai/credentials"
        storage_path = Path(base_path) / app_name

        storage_path.mkdir(parents=True, exist_ok=True)

        return storage_path

    def save_secure_file(self, filename: str, content: bytes) -> None:
        """
        Save the content to a secure file.

        :param filename: The name of the file.
        :param content: The content to save.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename

        with open(file_path, "wb") as f:
            f.write(content)

        # Set appropriate permissions (read/write for owner only)
        os.chmod(file_path, 0o600)

    def read_secure_file(self, filename: str) -> Optional[bytes]:
        """
        Read the content of a secure file.

        :param filename: The name of the file.
        :return: The content of the file if it exists, otherwise None.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename

        if not file_path.exists():
            return None

        with open(file_path, "rb") as f:
            return f.read()
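A hedged usage sketch of the class above (the token value and expiry timestamp are placeholders):

```python
import time

tm = TokenManager()
# Persist an encrypted token that expires in one hour.
tm.save_tokens("eyJhbGciOi...", expires_at=int(time.time()) + 3600)
token = tm.get_token()  # returns the token, or None once expired
```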
@@ -11,7 +11,6 @@ from crewai.cli.constants import (
     CREWAI_ENTERPRISE_DEFAULT_OAUTH2_CLIENT_ID,
     CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN,
 )
-from crewai.cli.shared.token_manager import TokenManager

 DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
@@ -54,7 +53,6 @@ HIDDEN_SETTINGS_KEYS = [
     "tool_repository_password",
 ]

-
 class Settings(BaseModel):
     enterprise_base_url: Optional[str] = Field(
         default=DEFAULT_CLI_SETTINGS["enterprise_base_url"],
@@ -76,12 +74,12 @@ class Settings(BaseModel):

     oauth2_provider: str = Field(
         description="OAuth2 provider used for authentication (e.g., workos, okta, auth0).",
-        default=DEFAULT_CLI_SETTINGS["oauth2_provider"],
+        default=DEFAULT_CLI_SETTINGS["oauth2_provider"]
     )

     oauth2_audience: Optional[str] = Field(
         description="OAuth2 audience value, typically used to identify the target API or resource.",
-        default=DEFAULT_CLI_SETTINGS["oauth2_audience"],
+        default=DEFAULT_CLI_SETTINGS["oauth2_audience"]
     )

     oauth2_client_id: str = Field(
@@ -91,7 +89,7 @@ class Settings(BaseModel):

     oauth2_domain: str = Field(
         description="OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens.",
-        default=DEFAULT_CLI_SETTINGS["oauth2_domain"],
+        default=DEFAULT_CLI_SETTINGS["oauth2_domain"]
     )

     def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):
@@ -118,7 +116,6 @@ class Settings(BaseModel):
         """Reset all settings to default values"""
         self._reset_user_settings()
         self._reset_cli_settings()
-        self._clear_auth_tokens()
         self.dump()

     def dump(self) -> None:
@@ -142,7 +139,3 @@ class Settings(BaseModel):
         """Reset all CLI settings to default values"""
         for key in CLI_SETTINGS_KEYS:
             setattr(self, key, DEFAULT_CLI_SETTINGS.get(key))
-
-    def _clear_auth_tokens(self) -> None:
-        """Clear all authentication tokens"""
-        TokenManager().clear_tokens()
@@ -117,6 +117,9 @@ class PlusAPI:
     def get_organizations(self) -> requests.Response:
         return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)

+    def send_trace_batch(self, payload) -> requests.Response:
+        return self._make_request("POST", self.TRACING_RESOURCE, json=payload)
+
     def initialize_trace_batch(self, payload) -> requests.Response:
         return self._make_request(
             "POST", f"{self.TRACING_RESOURCE}/batches", json=payload
@@ -132,7 +135,6 @@ class PlusAPI:
             "POST",
             f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/events",
             json=payload,
-            timeout=30,
         )

     def send_ephemeral_trace_events(
@@ -142,7 +144,6 @@ class PlusAPI:
             "POST",
             f"{self.EPHEMERAL_TRACING_RESOURCE}/batches/{trace_batch_id}/events",
             json=payload,
-            timeout=30,
         )

     def finalize_trace_batch(self, trace_batch_id: str, payload) -> requests.Response:
@@ -1,139 +0,0 @@
import json
import os
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
from cryptography.fernet import Fernet


class TokenManager:
    def __init__(self, file_path: str = "tokens.enc") -> None:
        """
        Initialize the TokenManager class.

        :param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
        """
        self.file_path = file_path
        self.key = self._get_or_create_key()
        self.fernet = Fernet(self.key)

    def _get_or_create_key(self) -> bytes:
        """
        Get or create the encryption key.

        :return: The encryption key.
        """
        key_filename = "secret.key"
        key = self.read_secure_file(key_filename)

        if key is not None:
            return key

        new_key = Fernet.generate_key()
        self.save_secure_file(key_filename, new_key)
        return new_key

    def save_tokens(self, access_token: str, expires_at: int) -> None:
        """
        Save the access token and its expiration time.

        :param access_token: The access token to save.
        :param expires_at: The UNIX timestamp of the expiration time.
        """
        expiration_time = datetime.fromtimestamp(expires_at)
        data = {
            "access_token": access_token,
            "expiration": expiration_time.isoformat(),
        }
        encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
        self.save_secure_file(self.file_path, encrypted_data)

    def get_token(self) -> Optional[str]:
        """
        Get the access token if it is valid and not expired.

        :return: The access token if valid and not expired, otherwise None.
        """
        encrypted_data = self.read_secure_file(self.file_path)

        decrypted_data = self.fernet.decrypt(encrypted_data)  # type: ignore
        data = json.loads(decrypted_data)

        expiration = datetime.fromisoformat(data["expiration"])
        if expiration <= datetime.now():
            return None

        return data["access_token"]

    def clear_tokens(self) -> None:
        """
        Clear the tokens.
        """
        self.delete_secure_file(self.file_path)

    def get_secure_storage_path(self) -> Path:
        """
        Get the secure storage path based on the operating system.

        :return: The secure storage path.
        """
        if sys.platform == "win32":
            # Windows: Use %LOCALAPPDATA%
            base_path = os.environ.get("LOCALAPPDATA")
        elif sys.platform == "darwin":
            # macOS: Use ~/Library/Application Support
            base_path = os.path.expanduser("~/Library/Application Support")
        else:
            # Linux and other Unix-like: Use ~/.local/share
            base_path = os.path.expanduser("~/.local/share")

        app_name = "crewai/credentials"
        storage_path = Path(base_path) / app_name

        storage_path.mkdir(parents=True, exist_ok=True)

        return storage_path

    def save_secure_file(self, filename: str, content: bytes) -> None:
        """
        Save the content to a secure file.

        :param filename: The name of the file.
        :param content: The content to save.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename

        with open(file_path, "wb") as f:
            f.write(content)

        # Set appropriate permissions (read/write for owner only)
        os.chmod(file_path, 0o600)

    def read_secure_file(self, filename: str) -> Optional[bytes]:
        """
        Read the content of a secure file.

        :param filename: The name of the file.
        :return: The content of the file if it exists, otherwise None.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename

        if not file_path.exists():
            return None

        with open(file_path, "rb") as f:
            return f.read()

    def delete_secure_file(self, filename: str) -> None:
        """
        Delete the secure file.

        :param filename: The name of the file.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename
        if file_path.exists():
            file_path.unlink(missing_ok=True)
@@ -1,5 +1,5 @@
 import threading
-from typing import Any, Optional
+from typing import Any

 from crewai.experimental.evaluation.base_evaluator import AgentEvaluationResult, AggregationStrategy
 from crewai.agent import Agent
@@ -15,11 +15,10 @@ from crewai.utilities.events.agent_events import LiteAgentExecutionCompletedEvent
 from crewai.experimental.evaluation.base_evaluator import AgentAggregatedEvaluationResult, EvaluationScore, MetricCategory

 class ExecutionState:
-    current_agent_id: Optional[str] = None
-    current_task_id: Optional[str] = None
-
     def __init__(self):
         self.traces = {}
+        self.current_agent_id: str | None = None
+        self.current_task_id: str | None = None
         self.iteration = 1
         self.iterations_results = {}
         self.agent_evaluators = {}
@@ -474,7 +474,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
         self._method_outputs: List[Any] = []  # List to store all method outputs
         self._completed_methods: Set[str] = set()  # Track completed methods for reload
         self._persistence: Optional[FlowPersistence] = persistence
-        self._is_execution_resuming: bool = False

         # Initialize state with initial values
         self._state = self._create_initial_state()
@@ -830,9 +829,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
             # Clear completed methods and outputs for a fresh start
             self._completed_methods.clear()
             self._method_outputs.clear()
-        else:
-            # We're restoring from persistence, set the flag
-            self._is_execution_resuming = True

         if inputs:
             # Override the id in the state if it exists in inputs
@@ -884,9 +880,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
             ]
             await asyncio.gather(*tasks)

-            # Clear the resumption flag after initial execution completes
-            self._is_execution_resuming = False
-
             final_output = self._method_outputs[-1] if self._method_outputs else None

             crewai_event_bus.emit(
@@ -923,13 +916,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
         - Automatically injects crewai_trigger_payload if available in flow inputs
         """
         if start_method_name in self._completed_methods:
-            if self._is_execution_resuming:
-                # During resumption, skip execution but continue listeners
-                last_output = self._method_outputs[-1] if self._method_outputs else None
-                await self._execute_listeners(start_method_name, last_output)
-                return
-            # For cyclic flows, clear from completed to allow re-execution
-            self._completed_methods.discard(start_method_name)
+            last_output = self._method_outputs[-1] if self._method_outputs else None
+            await self._execute_listeners(start_method_name, last_output)
+            return

         method = self._methods[start_method_name]
         enhanced_method = self._inject_trigger_payload_for_start_method(method)
@@ -1061,15 +1050,11 @@ class Flow(Generic[T], metaclass=FlowMeta):
             for router_name in routers_triggered:
                 await self._execute_single_listener(router_name, result)
                 # After executing router, the router's result is the path
-                router_result = (
-                    self._method_outputs[-1] if self._method_outputs else None
-                )
+                router_result = self._method_outputs[-1]
                 if router_result:  # Only add non-None results
                     router_results.append(router_result)
                 current_trigger = (
-                    str(router_result)
-                    if router_result is not None
-                    else ""  # Update for next iteration of router chain
+                    router_result  # Update for next iteration of router chain
                 )

         # Now execute normal listeners for all router results and the original trigger
@@ -1087,24 +1072,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
             ]
             await asyncio.gather(*tasks)

-            if current_trigger in router_results:
-                # Find start methods triggered by this router result
-                for method_name in self._start_methods:
-                    # Check if this start method is triggered by the current trigger
-                    if method_name in self._listeners:
-                        condition_type, trigger_methods = self._listeners[
-                            method_name
-                        ]
-                        if current_trigger in trigger_methods:
-                            # Only execute if this is a cycle (method was already completed)
-                            if method_name in self._completed_methods:
-                                # For router-triggered start methods in cycles, temporarily clear resumption flag
-                                # to allow cyclic execution
-                                was_resuming = self._is_execution_resuming
-                                self._is_execution_resuming = False
-                                await self._execute_start_method(method_name)
-                                self._is_execution_resuming = was_resuming
-
     def _find_triggered_methods(
         self, trigger_method: str, router_only: bool
     ) -> List[str]:
@@ -1142,9 +1109,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
             if router_only != is_router:
                 continue

-            if not router_only and listener_name in self._start_methods:
-                continue
-
             if condition_type == "OR":
                 # If the trigger_method matches any in methods, run this
                 if trigger_method in methods:
@@ -1194,13 +1158,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
         Catches and logs any exceptions during execution, preventing
         individual listener failures from breaking the entire flow.
         """
-        if listener_name in self._completed_methods:
-            if self._is_execution_resuming:
-                # During resumption, skip execution but continue listeners
-                await self._execute_listeners(listener_name, None)
-                return
-            # For cyclic flows, clear from completed to allow re-execution
-            self._completed_methods.discard(listener_name)
+        # TODO: greyson fix
+        # if listener_name in self._completed_methods:
+        #     await self._execute_listeners(listener_name, None)
+        #     return

         try:
             method = self._methods[listener_name]
@@ -69,7 +69,12 @@ from crewai.utilities.events.agent_events import (
     LiteAgentExecutionStartedEvent,
 )
 from crewai.utilities.events.crewai_event_bus import crewai_event_bus
-
+from crewai.utilities.events.llm_events import (
+    LLMCallCompletedEvent,
+    LLMCallFailedEvent,
+    LLMCallStartedEvent,
+    LLMCallType,
+)
 from crewai.utilities.llm_utils import create_llm
 from crewai.utilities.printer import Printer
 from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -514,6 +519,19 @@ class LiteAgent(FlowTrackable, BaseModel):

         enforce_rpm_limit(self.request_within_rpm_limit)

+        llm = cast(LLM, self.llm)
+        model = llm.model if hasattr(llm, "model") else "unknown"
+        crewai_event_bus.emit(
+            self,
+            event=LLMCallStartedEvent(
+                messages=self._messages,
+                tools=None,
+                callbacks=self._callbacks,
+                from_agent=self,
+                model=model,
+            ),
+        )
+
         try:
             answer = get_llm_response(
                 llm=cast(LLM, self.llm),
@@ -523,7 +541,23 @@ class LiteAgent(FlowTrackable, BaseModel):
|
||||
from_agent=self,
|
||||
)
|
||||
|
||||
# Emit LLM call completed event
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LLMCallCompletedEvent(
|
||||
messages=self._messages,
|
||||
response=answer,
|
||||
call_type=LLMCallType.LLM_CALL,
|
||||
from_agent=self,
|
||||
model=model,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
# Emit LLM call failed event
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LLMCallFailedEvent(error=str(e), from_agent=self),
|
||||
)
|
||||
raise e
|
||||
|
||||
formatted_answer = process_llm_response(answer, self.use_stop_words)
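Since LiteAgent now emits LLMCallStartedEvent / LLMCallCompletedEvent / LLMCallFailedEvent around the call, the calls can be observed on the shared bus. A minimal sketch, assuming the event bus's on() decorator registers handlers with a (source, event) signature:

from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.events.llm_events import LLMCallStartedEvent

@crewai_event_bus.on(LLMCallStartedEvent)
def log_llm_call(source, event):
    # `source` is the emitting LiteAgent; `event.model` is the name
    # resolved above before the emit.
    print(f"LLM call started: {event.model}")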

@@ -851,9 +851,7 @@ class LLM(BaseLLM):
                return tool_calls

            # --- 7) Handle tool calls if present
            tool_result = self._handle_tool_call(
                tool_calls, available_functions, from_task, from_agent
            )
            tool_result = self._handle_tool_call(tool_calls, available_functions)
            if tool_result is not None:
                return tool_result
            # --- 8) If tool call handling didn't return a result, emit completion event and return text response
@@ -870,8 +868,6 @@ class LLM(BaseLLM):
        self,
        tool_calls: List[Any],
        available_functions: Optional[Dict[str, Any]] = None,
        from_task: Optional[Any] = None,
        from_agent: Optional[Any] = None,
    ) -> Optional[str]:
        """Handle a tool call from the LLM.

@@ -906,8 +902,6 @@ class LLM(BaseLLM):
                event=ToolUsageStartedEvent(
                    tool_name=function_name,
                    tool_args=function_args,
                    from_agent=from_agent,
                    from_task=from_task,
                ),
            )

@@ -920,17 +914,12 @@ class LLM(BaseLLM):
                    tool_args=function_args,
                    started_at=started_at,
                    finished_at=datetime.now(),
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )

            # --- 3.3) Emit success event
            self._handle_emit_call_events(
                response=result,
                call_type=LLMCallType.TOOL_CALL,
                from_task=from_task,
                from_agent=from_agent,
                response=result, call_type=LLMCallType.TOOL_CALL
            )
            return result
        except Exception as e:
@@ -1150,11 +1139,7 @@ class LLM(BaseLLM):

        # TODO: Remove this code after merging PR https://github.com/BerriAI/litellm/pull/10917
        # Ollama doesn't support the last message being 'assistant'
        if (
            "ollama" in self.model.lower()
            and messages
            and messages[-1]["role"] == "assistant"
        ):
        if "ollama" in self.model.lower() and messages and messages[-1]["role"] == "assistant":
            return messages + [{"role": "user", "content": ""}]

        # Handle Anthropic models
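For context, the Ollama workaround in the hunk above just pads the conversation: if the last turn is an assistant message, an empty user turn is appended before the request goes out. An illustration with plain data (not a library call):

messages = [
    {"role": "user", "content": "Summarize this."},
    {"role": "assistant", "content": "Partial summary..."},
]
padded = messages + [{"role": "user", "content": ""}]
assert padded[-1]["role"] == "user"  # the shape Ollama-backed models require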

@@ -1,4 +1,4 @@
from typing import Optional, TYPE_CHECKING
from typing import Optional

from crewai.memory import (
    EntityMemory,
@@ -7,10 +7,6 @@ from crewai.memory import (
    ShortTermMemory,
)

if TYPE_CHECKING:
    from crewai.agent import Agent
    from crewai.task import Task


class ContextualMemory:
    def __init__(
@@ -19,28 +15,11 @@ class ContextualMemory:
        ltm: LongTermMemory,
        em: EntityMemory,
        exm: ExternalMemory,
        agent: Optional["Agent"] = None,
        task: Optional["Task"] = None,
    ):
        self.stm = stm
        self.ltm = ltm
        self.em = em
        self.exm = exm
        self.agent = agent
        self.task = task

        if self.stm is not None:
            self.stm.agent = self.agent
            self.stm.task = self.task
        if self.ltm is not None:
            self.ltm.agent = self.agent
            self.ltm.task = self.task
        if self.em is not None:
            self.em.agent = self.agent
            self.em.task = self.task
        if self.exm is not None:
            self.exm.agent = self.agent
            self.exm.task = self.task

    def build_context_for_task(self, task, context) -> str:
        """
@@ -70,7 +49,10 @@ class ContextualMemory:

        stm_results = self.stm.search(query)
        formatted_results = "\n".join(
            [f"- {result['context']}" for result in stm_results]
            [
                f"- {result['context']}"
                for result in stm_results
            ]
        )
        return f"Recent Insights:\n{formatted_results}" if stm_results else ""

@@ -107,7 +89,10 @@ class ContextualMemory:

        em_results = self.em.search(query)
        formatted_results = "\n".join(
            [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
            [
                f"- {result['context']}"
                for result in em_results
            ]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
        )
        return f"Entities:\n{formatted_results}" if em_results else ""
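Taken together, these formatters are what build_context_for_task stitches into a single prompt block. A minimal usage sketch, assuming already-constructed memory instances (the variable names are illustrative):

memory = ContextualMemory(stm=stm, ltm=ltm, em=em, exm=exm)
context = memory.build_context_for_task(task, context="")
# `context` concatenates sections such as "Recent Insights:\n- ..." and
# "Entities:\n- ..." produced by the helpers shown above.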

@@ -1,4 +1,4 @@
from typing import Any
from typing import Optional
import time

from pydantic import PrivateAttr
@@ -24,7 +24,7 @@ class EntityMemory(Memory):
    Inherits from the Memory class.
    """

    _memory_provider: str | None = PrivateAttr()
    _memory_provider: Optional[str] = PrivateAttr()

    def __init__(self, crew=None, embedder_config=None, storage=None, path=None):
        memory_provider = embedder_config.get("provider") if embedder_config else None
@@ -35,7 +35,7 @@ class EntityMemory(Memory):
            raise ImportError(
                "Mem0 is not installed. Please install it with `pip install mem0ai`."
            )
            config = embedder_config.get("config") if embedder_config else None
            config = embedder_config.get("config")
            storage = Mem0Storage(type="short_term", crew=crew, config=config)
        else:
            storage = (
@@ -53,99 +53,47 @@ class EntityMemory(Memory):
        super().__init__(storage=storage)
        self._memory_provider = memory_provider

    def save(
        self,
        value: EntityMemoryItem | list[EntityMemoryItem],
        metadata: dict[str, Any] | None = None,
    ) -> None:
        """Saves one or more entity items into the SQLite storage.

        Args:
            value: Single EntityMemoryItem or list of EntityMemoryItems to save.
            metadata: Optional metadata dict (included for supertype compatibility but not used).

        Notes:
            The metadata parameter is included to satisfy the supertype signature but is not
            used - entity metadata is extracted from the EntityMemoryItem objects themselves.
        """

        if not value:
            return

        items = value if isinstance(value, list) else [value]
        is_batch = len(items) > 1

        metadata = {"entity_count": len(items)} if is_batch else items[0].metadata
    def save(self, item: EntityMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
        """Saves an entity item into the SQLite storage."""
        crewai_event_bus.emit(
            self,
            event=MemorySaveStartedEvent(
                metadata=metadata,
                metadata=item.metadata,
                source_type="entity_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

        start_time = time.time()
        saved_count = 0
        errors = []

        try:
            for item in items:
                try:
                    if self._memory_provider == "mem0":
                        data = f"""
                        Remember details about the following entity:
                        Name: {item.name}
                        Type: {item.type}
                        Entity Description: {item.description}
                        """
                    else:
                        data = f"{item.name}({item.type}): {item.description}"

                    super().save(data, item.metadata)
                    saved_count += 1
                except Exception as e:
                    errors.append(f"{item.name}: {str(e)}")

            if is_batch:
                emit_value = f"Saved {saved_count} entities"
                metadata = {"entity_count": saved_count, "errors": errors}
            if self._memory_provider == "mem0":
                data = f"""
                Remember details about the following entity:
                Name: {item.name}
                Type: {item.type}
                Entity Description: {item.description}
                """
            else:
                emit_value = f"{items[0].name}({items[0].type}): {items[0].description}"
                metadata = items[0].metadata
                data = f"{item.name}({item.type}): {item.description}"

            super().save(data, item.metadata)

            # Emit memory save completed event
            crewai_event_bus.emit(
                self,
                event=MemorySaveCompletedEvent(
                    value=emit_value,
                    metadata=metadata,
                    value=data,
                    metadata=item.metadata,
                    save_time_ms=(time.time() - start_time) * 1000,
                    source_type="entity_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )

            if errors:
                raise Exception(
                    f"Partial save: {len(errors)} failed out of {len(items)}"
                )

        except Exception as e:
            fail_metadata = (
                {"entity_count": len(items), "saved": saved_count}
                if is_batch
                else items[0].metadata
            )
            crewai_event_bus.emit(
                self,
                event=MemorySaveFailedEvent(
                    metadata=fail_metadata,
                    metadata=item.metadata,
                    error=str(e),
                    source_type="entity_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )
            raise
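One variant in this hunk accepts either a single EntityMemoryItem or a list, emits a single batch event, and raises on partial failure after saving what it can. A sketch of that batch behavior, with the item fields taken from the code above (construction details are illustrative):

items = [
    EntityMemoryItem(name="Acme", type="organization", description="A customer", metadata={}),
    EntityMemoryItem(name="Ada", type="person", description="An engineer", metadata={}),
]
entity_memory.save(items)       # one MemorySaveStartedEvent with entity_count=2
entity_memory.save(items[0])    # a single item still works
# If some items fail, save() raises "Partial save: N failed out of M"
# after emitting the completed event for the ones that succeeded.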
@@ -163,8 +111,6 @@ class EntityMemory(Memory):
                limit=limit,
                score_threshold=score_threshold,
                source_type="entity_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -183,8 +129,6 @@ class EntityMemory(Memory):
                score_threshold=score_threshold,
                query_time_ms=(time.time() - start_time) * 1000,
                source_type="entity_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )


src/crewai/memory/external/external_memory.py (22 changes)
@@ -53,6 +53,7 @@ class ExternalMemory(Memory):
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        """Saves a value into the external storage."""
        crewai_event_bus.emit(
@@ -60,30 +61,24 @@ class ExternalMemory(Memory):
            event=MemorySaveStartedEvent(
                value=value,
                metadata=metadata,
                agent_role=agent,
                source_type="external_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

        start_time = time.time()
        try:
            item = ExternalMemoryItem(
                value=value,
                metadata=metadata,
                agent=self.agent.role if self.agent else None,
            )
            super().save(value=item.value, metadata=item.metadata)
            item = ExternalMemoryItem(value=value, metadata=metadata, agent=agent)
            super().save(value=item.value, metadata=item.metadata, agent=item.agent)

            crewai_event_bus.emit(
                self,
                event=MemorySaveCompletedEvent(
                    value=value,
                    metadata=metadata,
                    agent_role=agent,
                    save_time_ms=(time.time() - start_time) * 1000,
                    source_type="external_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )
        except Exception as e:
@@ -92,10 +87,9 @@ class ExternalMemory(Memory):
                event=MemorySaveFailedEvent(
                    value=value,
                    metadata=metadata,
                    agent_role=agent,
                    error=str(e),
                    source_type="external_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )
            raise
@@ -113,8 +107,6 @@ class ExternalMemory(Memory):
                limit=limit,
                score_threshold=score_threshold,
                source_type="external_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -133,8 +125,6 @@ class ExternalMemory(Memory):
                score_threshold=score_threshold,
                query_time_ms=(time.time() - start_time) * 1000,
                source_type="external_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -37,17 +37,13 @@ class LongTermMemory(Memory):
                metadata=item.metadata,
                agent_role=item.agent,
                source_type="long_term_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

        start_time = time.time()
        try:
            metadata = item.metadata
            metadata.update(
                {"agent": item.agent, "expected_output": item.expected_output}
            )
            metadata.update({"agent": item.agent, "expected_output": item.expected_output})
            self.storage.save(  # type: ignore # BUG?: Unexpected keyword argument "task_description","score","datetime" for "save" of "Storage"
                task_description=item.task,
                score=metadata["quality"],
@@ -63,8 +59,6 @@ class LongTermMemory(Memory):
                    agent_role=item.agent,
                    save_time_ms=(time.time() - start_time) * 1000,
                    source_type="long_term_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )
        except Exception as e:
@@ -80,19 +74,13 @@ class LongTermMemory(Memory):
            )
            raise

    def search(  # type: ignore # signature of "search" incompatible with supertype "Memory"
        self,
        task: str,
        latest_n: int = 3,
    ) -> List[Dict[str, Any]]:  # type: ignore # signature of "search" incompatible with supertype "Memory"
    def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:  # type: ignore # signature of "search" incompatible with supertype "Memory"
        crewai_event_bus.emit(
            self,
            event=MemoryQueryStartedEvent(
                query=task,
                limit=latest_n,
                source_type="long_term_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -108,8 +96,6 @@ class LongTermMemory(Memory):
                limit=latest_n,
                query_time_ms=(time.time() - start_time) * 1000,
                source_type="long_term_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -1,11 +1,7 @@
from typing import Any, Dict, List, Optional, TYPE_CHECKING
from typing import Any, Dict, List, Optional

from pydantic import BaseModel

if TYPE_CHECKING:
    from crewai.agent import Agent
    from crewai.task import Task


class Memory(BaseModel):
    """
@@ -16,38 +12,19 @@ class Memory(BaseModel):
    crew: Optional[Any] = None

    storage: Any
    _agent: Optional["Agent"] = None
    _task: Optional["Task"] = None

    def __init__(self, storage: Any, **data: Any):
        super().__init__(storage=storage, **data)

    @property
    def task(self) -> Optional["Task"]:
        """Get the current task associated with this memory."""
        return self._task

    @task.setter
    def task(self, task: Optional["Task"]) -> None:
        """Set the current task associated with this memory."""
        self._task = task

    @property
    def agent(self) -> Optional["Agent"]:
        """Get the current agent associated with this memory."""
        return self._agent

    @agent.setter
    def agent(self, agent: Optional["Agent"]) -> None:
        """Set the current agent associated with this memory."""
        self._agent = agent

    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        metadata = metadata or {}
        if agent:
            metadata["agent"] = agent

        self.storage.save(value, metadata)
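The private-attribute properties above let callers attach execution context after construction; every emit in the memory subclasses then reads self.agent / self.task. A sketch, assuming an already-constructed memory instance:

memory.agent = some_agent   # routed through the @agent.setter to _agent
memory.task = some_task
assert memory.agent is some_agent
# Subsequent saves/searches emit events with from_agent=memory.agent
# and from_task=memory.task, as shown throughout the diffs above.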

@@ -37,7 +37,7 @@ class ShortTermMemory(Memory):
            raise ImportError(
                "Mem0 is not installed. Please install it with `pip install mem0ai`."
            )
            config = embedder_config.get("config") if embedder_config else None
            config = embedder_config.get("config")
            storage = Mem0Storage(type="short_term", crew=crew, config=config)
        else:
            storage = (
@@ -57,42 +57,34 @@ class ShortTermMemory(Memory):
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        crewai_event_bus.emit(
            self,
            event=MemorySaveStartedEvent(
                value=value,
                metadata=metadata,
                agent_role=agent,
                source_type="short_term_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

        start_time = time.time()
        try:
            item = ShortTermMemoryItem(
                data=value,
                metadata=metadata,
                agent=self.agent.role if self.agent else None,
            )
            item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
            if self._memory_provider == "mem0":
                item.data = (
                    f"Remember the following insights from Agent run: {item.data}"
                )
                item.data = f"Remember the following insights from Agent run: {item.data}"

            super().save(value=item.data, metadata=item.metadata)
            super().save(value=item.data, metadata=item.metadata, agent=item.agent)

            crewai_event_bus.emit(
                self,
                event=MemorySaveCompletedEvent(
                    value=value,
                    metadata=metadata,
                    # agent_role=agent,
                    agent_role=agent,
                    save_time_ms=(time.time() - start_time) * 1000,
                    source_type="short_term_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )
        except Exception as e:
@@ -101,10 +93,9 @@ class ShortTermMemory(Memory):
                event=MemorySaveFailedEvent(
                    value=value,
                    metadata=metadata,
                    agent_role=agent,
                    error=str(e),
                    source_type="short_term_memory",
                    from_agent=self.agent,
                    from_task=self.task,
                ),
            )
            raise
@@ -122,8 +113,6 @@ class ShortTermMemory(Memory):
                limit=limit,
                score_threshold=score_threshold,
                source_type="short_term_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -142,8 +131,6 @@ class ShortTermMemory(Memory):
                score_threshold=score_threshold,
                query_time_ms=(time.time() - start_time) * 1000,
                source_type="short_term_memory",
                from_agent=self.agent,
                from_task=self.task,
            ),
        )

@@ -1,58 +1 @@
"""RAG (Retrieval-Augmented Generation) infrastructure for CrewAI."""

import sys
import importlib
from types import ModuleType
from typing import Any

from crewai.rag.config.types import RagConfigType
from crewai.rag.config.utils import set_rag_config


_module_path = __path__
_module_file = __file__

class _RagModule(ModuleType):
    """Module wrapper to intercept attribute setting for config."""

    __path__ = _module_path
    __file__ = _module_file

    def __init__(self, module_name: str):
        """Initialize the module wrapper.

        Args:
            module_name: Name of the module.
        """
        super().__init__(module_name)

    def __setattr__(self, name: str, value: RagConfigType) -> None:
        """Set module attributes.

        Args:
            name: Attribute name.
            value: Attribute value.
        """
        if name == "config":
            return set_rag_config(value)
        raise AttributeError(f"Setting attribute '{name}' is not allowed.")

    def __getattr__(self, name: str) -> Any:
        """Get module attributes.

        Args:
            name: Attribute name.

        Returns:
            The requested attribute.

        Raises:
            AttributeError: If attribute doesn't exist.
        """
        try:
            return importlib.import_module(f"{self.__name__}.{name}")
        except ImportError:
            raise AttributeError(f"module '{self.__name__}' has no attribute '{name}'")


sys.modules[__name__] = _RagModule(__name__)
"""RAG (Retrieval-Augmented Generation) infrastructure for CrewAI."""
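The sys.modules swap above turns assignment on the package itself into configuration. A sketch of the resulting surface (set_rag_config and ChromaDBConfig both appear elsewhere in this compare):

import crewai.rag as rag
from crewai.rag.chromadb.config import ChromaDBConfig

rag.config = ChromaDBConfig()   # intercepted by __setattr__, dispatched to set_rag_config
# rag.anything_else = 1         # would raise AttributeError per the guard above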
@@ -1,567 +0,0 @@
"""ChromaDB client implementation."""

from typing import Any

from chromadb.api.types import (
    Embeddable,
    EmbeddingFunction as ChromaEmbeddingFunction,
    QueryResult,
)
from typing_extensions import Unpack

from crewai.rag.chromadb.types import (
    ChromaDBClientType,
    ChromaDBCollectionCreateParams,
    ChromaDBCollectionSearchParams,
)
from crewai.rag.chromadb.utils import (
    _extract_search_params,
    _is_async_client,
    _is_sync_client,
    _prepare_documents_for_chromadb,
    _process_query_results,
)
from crewai.rag.core.base_client import (
    BaseClient,
    BaseCollectionParams,
    BaseCollectionAddParams,
)
from crewai.rag.types import SearchResult


class ChromaDBClient(BaseClient):
    """ChromaDB implementation of the BaseClient protocol.

    Provides vector database operations for ChromaDB, supporting both
    synchronous and asynchronous clients.

    Attributes:
        client: ChromaDB client instance (ClientAPI or AsyncClientAPI).
        embedding_function: Function to generate embeddings for documents.
    """

    def __init__(
        self,
        client: ChromaDBClientType,
        embedding_function: ChromaEmbeddingFunction[Embeddable],
    ) -> None:
        """Initialize ChromaDBClient with client and embedding function.

        Args:
            client: Pre-configured ChromaDB client instance.
            embedding_function: Embedding function for text to vector conversion.
        """
        self.client = client
        self.embedding_function = embedding_function

    def create_collection(
        self, **kwargs: Unpack[ChromaDBCollectionCreateParams]
    ) -> None:
        """Create a new collection in ChromaDB.

        Uses the client's default embedding function if none provided.

        Keyword Args:
            collection_name: Name of the collection to create. Must be unique.
            configuration: Optional collection configuration specifying distance metrics,
                HNSW parameters, or other backend-specific settings.
            metadata: Optional metadata dictionary to attach to the collection.
            embedding_function: Optional custom embedding function. If not provided,
                uses the client's default embedding function.
            data_loader: Optional data loader for batch loading data into the collection.
            get_or_create: If True, returns existing collection if it already exists
                instead of raising an error. Defaults to False.

        Raises:
            TypeError: If AsyncClientAPI is used instead of ClientAPI for sync operations.
            ValueError: If collection with the same name already exists and get_or_create
                is False.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> client = ChromaDBClient()
            >>> client.create_collection(
            ...     collection_name="documents",
            ...     metadata={"description": "Product documentation"},
            ...     get_or_create=True
            ... )
        """
        if not _is_sync_client(self.client):
            raise TypeError(
                "Synchronous method create_collection() requires a ClientAPI. "
                "Use acreate_collection() for AsyncClientAPI."
            )

        metadata = kwargs.get("metadata", {})
        if "hnsw:space" not in metadata:
            metadata["hnsw:space"] = "cosine"

        self.client.create_collection(
            name=kwargs["collection_name"],
            configuration=kwargs.get("configuration"),
            metadata=metadata,
            embedding_function=kwargs.get(
                "embedding_function", self.embedding_function
            ),
            data_loader=kwargs.get("data_loader"),
            get_or_create=kwargs.get("get_or_create", False),
        )

    async def acreate_collection(
        self, **kwargs: Unpack[ChromaDBCollectionCreateParams]
    ) -> None:
        """Create a new collection in ChromaDB asynchronously.

        Creates a new collection with the specified name and optional configuration.
        If an embedding function is not provided, uses the client's default embedding function.

        Keyword Args:
            collection_name: Name of the collection to create. Must be unique.
            configuration: Optional collection configuration specifying distance metrics,
                HNSW parameters, or other backend-specific settings.
            metadata: Optional metadata dictionary to attach to the collection.
            embedding_function: Optional custom embedding function. If not provided,
                uses the client's default embedding function.
            data_loader: Optional data loader for batch loading data into the collection.
            get_or_create: If True, returns existing collection if it already exists
                instead of raising an error. Defaults to False.

        Raises:
            TypeError: If ClientAPI is used instead of AsyncClientAPI for async operations.
            ValueError: If collection with the same name already exists and get_or_create
                is False.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> import asyncio
            >>> async def main():
            ...     client = ChromaDBClient()
            ...     await client.acreate_collection(
            ...         collection_name="documents",
            ...         metadata={"description": "Product documentation"},
            ...         get_or_create=True
            ...     )
            >>> asyncio.run(main())
        """
        if not _is_async_client(self.client):
            raise TypeError(
                "Asynchronous method acreate_collection() requires an AsyncClientAPI. "
                "Use create_collection() for ClientAPI."
            )

        metadata = kwargs.get("metadata", {})
        if "hnsw:space" not in metadata:
            metadata["hnsw:space"] = "cosine"

        await self.client.create_collection(
            name=kwargs["collection_name"],
            configuration=kwargs.get("configuration"),
            metadata=metadata,
            embedding_function=kwargs.get(
                "embedding_function", self.embedding_function
            ),
            data_loader=kwargs.get("data_loader"),
            get_or_create=kwargs.get("get_or_create", False),
        )

    def get_or_create_collection(
        self, **kwargs: Unpack[ChromaDBCollectionCreateParams]
    ) -> Any:
        """Get an existing collection or create it if it doesn't exist.

        Returns existing collection if found, otherwise creates a new one.

        Keyword Args:
            collection_name: Name of the collection to get or create.
            configuration: Optional collection configuration specifying distance metrics,
                HNSW parameters, or other backend-specific settings.
            metadata: Optional metadata dictionary to attach to the collection.
            embedding_function: Optional custom embedding function. If not provided,
                uses the client's default embedding function.
            data_loader: Optional data loader for batch loading data into the collection.

        Returns:
            A ChromaDB Collection object.

        Raises:
            TypeError: If AsyncClientAPI is used instead of ClientAPI for sync operations.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> client = ChromaDBClient()
            >>> collection = client.get_or_create_collection(
            ...     collection_name="documents",
            ...     metadata={"description": "Product documentation"}
            ... )
        """
        if not _is_sync_client(self.client):
            raise TypeError(
                "Synchronous method get_or_create_collection() requires a ClientAPI. "
                "Use aget_or_create_collection() for AsyncClientAPI."
            )

        metadata = kwargs.get("metadata", {})
        if "hnsw:space" not in metadata:
            metadata["hnsw:space"] = "cosine"

        return self.client.get_or_create_collection(
            name=kwargs["collection_name"],
            configuration=kwargs.get("configuration"),
            metadata=metadata,
            embedding_function=kwargs.get(
                "embedding_function", self.embedding_function
            ),
            data_loader=kwargs.get("data_loader"),
        )

    async def aget_or_create_collection(
        self, **kwargs: Unpack[ChromaDBCollectionCreateParams]
    ) -> Any:
        """Get an existing collection or create it if it doesn't exist asynchronously.

        Returns existing collection if found, otherwise creates a new one.

        Keyword Args:
            collection_name: Name of the collection to get or create.
            configuration: Optional collection configuration specifying distance metrics,
                HNSW parameters, or other backend-specific settings.
            metadata: Optional metadata dictionary to attach to the collection.
            embedding_function: Optional custom embedding function. If not provided,
                uses the client's default embedding function.
            data_loader: Optional data loader for batch loading data into the collection.

        Returns:
            A ChromaDB AsyncCollection object.

        Raises:
            TypeError: If ClientAPI is used instead of AsyncClientAPI for async operations.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> import asyncio
            >>> async def main():
            ...     client = ChromaDBClient()
            ...     collection = await client.aget_or_create_collection(
            ...         collection_name="documents",
            ...         metadata={"description": "Product documentation"}
            ...     )
            >>> asyncio.run(main())
        """
        if not _is_async_client(self.client):
            raise TypeError(
                "Asynchronous method aget_or_create_collection() requires an AsyncClientAPI. "
                "Use get_or_create_collection() for ClientAPI."
            )

        metadata = kwargs.get("metadata", {})
        if "hnsw:space" not in metadata:
            metadata["hnsw:space"] = "cosine"

        return await self.client.get_or_create_collection(
            name=kwargs["collection_name"],
            configuration=kwargs.get("configuration"),
            metadata=metadata,
            embedding_function=kwargs.get(
                "embedding_function", self.embedding_function
            ),
            data_loader=kwargs.get("data_loader"),
        )

    def add_documents(self, **kwargs: Unpack[BaseCollectionAddParams]) -> None:
        """Add documents with their embeddings to a collection.

        Performs an upsert operation - documents with existing IDs are updated.
        Generates embeddings automatically using the configured embedding function.

        Keyword Args:
            collection_name: The name of the collection to add documents to.
            documents: List of BaseRecord dicts containing:
                - content: The text content (required)
                - doc_id: Optional unique identifier (auto-generated if missing)
                - metadata: Optional metadata dictionary

        Raises:
            TypeError: If AsyncClientAPI is used instead of ClientAPI for sync operations.
            ValueError: If collection doesn't exist or documents list is empty.
            ConnectionError: If unable to connect to ChromaDB server.
        """
        if not _is_sync_client(self.client):
            raise TypeError(
                "Synchronous method add_documents() requires a ClientAPI. "
                "Use aadd_documents() for AsyncClientAPI."
            )

        collection_name = kwargs["collection_name"]
        documents = kwargs["documents"]

        if not documents:
            raise ValueError("Documents list cannot be empty")

        collection = self.client.get_collection(
            name=collection_name,
            embedding_function=self.embedding_function,
        )

        prepared = _prepare_documents_for_chromadb(documents)
        collection.add(
            ids=prepared.ids,
            documents=prepared.texts,
            metadatas=prepared.metadatas,
        )

    async def aadd_documents(self, **kwargs: Unpack[BaseCollectionAddParams]) -> None:
        """Add documents with their embeddings to a collection asynchronously.

        Performs an upsert operation - documents with existing IDs are updated.
        Generates embeddings automatically using the configured embedding function.

        Keyword Args:
            collection_name: The name of the collection to add documents to.
            documents: List of BaseRecord dicts containing:
                - content: The text content (required)
                - doc_id: Optional unique identifier (auto-generated if missing)
                - metadata: Optional metadata dictionary

        Raises:
            TypeError: If ClientAPI is used instead of AsyncClientAPI for async operations.
            ValueError: If collection doesn't exist or documents list is empty.
            ConnectionError: If unable to connect to ChromaDB server.
        """
        if not _is_async_client(self.client):
            raise TypeError(
                "Asynchronous method aadd_documents() requires an AsyncClientAPI. "
                "Use add_documents() for ClientAPI."
            )

        collection_name = kwargs["collection_name"]
        documents = kwargs["documents"]

        if not documents:
            raise ValueError("Documents list cannot be empty")

        collection = await self.client.get_collection(
            name=collection_name,
            embedding_function=self.embedding_function,
        )
        prepared = _prepare_documents_for_chromadb(documents)
        await collection.add(
            ids=prepared.ids,
            documents=prepared.texts,
            metadatas=prepared.metadatas,
        )

    def search(
        self, **kwargs: Unpack[ChromaDBCollectionSearchParams]
    ) -> list[SearchResult]:
        """Search for similar documents using a query.

        Performs semantic search to find documents similar to the query text.
        Uses the configured embedding function to generate query embeddings.

        Keyword Args:
            collection_name: Name of the collection to search in.
            query: The text query to search for.
            limit: Maximum number of results to return (default: 10).
            metadata_filter: Optional filter for metadata fields.
            score_threshold: Optional minimum similarity score (0-1) for results.
            where: Optional ChromaDB where clause for metadata filtering.
            where_document: Optional ChromaDB where clause for document content filtering.
            include: Optional list of fields to include in results.

        Returns:
            List of SearchResult dicts containing id, content, metadata, and score.

        Raises:
            TypeError: If AsyncClientAPI is used instead of ClientAPI for sync operations.
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to ChromaDB server.
        """
        if not _is_sync_client(self.client):
            raise TypeError(
                "Synchronous method search() requires a ClientAPI. "
                "Use asearch() for AsyncClientAPI."
            )

        params = _extract_search_params(kwargs)

        collection = self.client.get_collection(
            name=params.collection_name,
            embedding_function=self.embedding_function,
        )

        where = params.where if params.where is not None else params.metadata_filter

        results: QueryResult = collection.query(
            query_texts=[params.query],
            n_results=params.limit,
            where=where,
            where_document=params.where_document,
            include=params.include,
        )

        return _process_query_results(
            collection=collection,
            results=results,
            params=params,
        )

    async def asearch(
        self, **kwargs: Unpack[ChromaDBCollectionSearchParams]
    ) -> list[SearchResult]:
        """Search for similar documents using a query asynchronously.

        Performs semantic search to find documents similar to the query text.
        Uses the configured embedding function to generate query embeddings.

        Keyword Args:
            collection_name: Name of the collection to search in.
            query: The text query to search for.
            limit: Maximum number of results to return (default: 10).
            metadata_filter: Optional filter for metadata fields.
            score_threshold: Optional minimum similarity score (0-1) for results.
            where: Optional ChromaDB where clause for metadata filtering.
            where_document: Optional ChromaDB where clause for document content filtering.
            include: Optional list of fields to include in results.

        Returns:
            List of SearchResult dicts containing id, content, metadata, and score.

        Raises:
            TypeError: If ClientAPI is used instead of AsyncClientAPI for async operations.
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to ChromaDB server.
        """
        if not _is_async_client(self.client):
            raise TypeError(
                "Asynchronous method asearch() requires an AsyncClientAPI. "
                "Use search() for ClientAPI."
            )

        params = _extract_search_params(kwargs)

        collection = await self.client.get_collection(
            name=params.collection_name,
            embedding_function=self.embedding_function,
        )

        where = params.where if params.where is not None else params.metadata_filter

        results: QueryResult = await collection.query(
            query_texts=[params.query],
            n_results=params.limit,
            where=where,
            where_document=params.where_document,
            include=params.include,
        )

        return _process_query_results(
            collection=collection,
            results=results,
            params=params,
        )

    def delete_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
        """Delete a collection and all its data.

        Permanently removes a collection and all documents, embeddings, and metadata it contains.
        This operation cannot be undone.

        Keyword Args:
            collection_name: Name of the collection to delete.

        Raises:
            TypeError: If AsyncClientAPI is used instead of ClientAPI for sync operations.
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> client = ChromaDBClient()
            >>> client.delete_collection(collection_name="old_documents")
        """
        if not _is_sync_client(self.client):
            raise TypeError(
                "Synchronous method delete_collection() requires a ClientAPI. "
                "Use adelete_collection() for AsyncClientAPI."
            )

        collection_name = kwargs["collection_name"]
        self.client.delete_collection(name=collection_name)

    async def adelete_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
        """Delete a collection and all its data asynchronously.

        Permanently removes a collection and all documents, embeddings, and metadata it contains.
        This operation cannot be undone.

        Keyword Args:
            collection_name: Name of the collection to delete.

        Raises:
            TypeError: If ClientAPI is used instead of AsyncClientAPI for async operations.
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> import asyncio
            >>> async def main():
            ...     client = ChromaDBClient()
            ...     await client.adelete_collection(collection_name="old_documents")
            >>> asyncio.run(main())
        """
        if not _is_async_client(self.client):
            raise TypeError(
                "Asynchronous method adelete_collection() requires an AsyncClientAPI. "
                "Use delete_collection() for ClientAPI."
            )

        collection_name = kwargs["collection_name"]
        await self.client.delete_collection(name=collection_name)

    def reset(self) -> None:
        """Reset the vector database by deleting all collections and data.

        Completely clears the ChromaDB instance, removing all collections,
        documents, embeddings, and metadata. This operation cannot be undone.
        Use with extreme caution in production environments.

        Raises:
            TypeError: If AsyncClientAPI is used instead of ClientAPI for sync operations.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> client = ChromaDBClient()
            >>> client.reset()  # Removes ALL data from ChromaDB
        """
        if not _is_sync_client(self.client):
            raise TypeError(
                "Synchronous method reset() requires a ClientAPI. "
                "Use areset() for AsyncClientAPI."
            )

        self.client.reset()

    async def areset(self) -> None:
        """Reset the vector database by deleting all collections and data asynchronously.

        Completely clears the ChromaDB instance, removing all collections,
        documents, embeddings, and metadata. This operation cannot be undone.
        Use with extreme caution in production environments.

        Raises:
            TypeError: If ClientAPI is used instead of AsyncClientAPI for async operations.
            ConnectionError: If unable to connect to ChromaDB server.

        Example:
            >>> import asyncio
            >>> async def main():
            ...     client = ChromaDBClient()
            ...     await client.areset()  # Removes ALL data from ChromaDB
            >>> asyncio.run(main())
        """
        if not _is_async_client(self.client):
            raise TypeError(
                "Asynchronous method areset() requires an AsyncClientAPI. "
                "Use reset() for ClientAPI."
            )

        await self.client.reset()

@@ -1,59 +0,0 @@
"""ChromaDB configuration model."""

import warnings
from dataclasses import field
from typing import Literal, cast
from pydantic.dataclasses import dataclass as pyd_dataclass
from chromadb.config import Settings
from chromadb.utils.embedding_functions import DefaultEmbeddingFunction

from crewai.rag.chromadb.types import ChromaEmbeddingFunctionWrapper
from crewai.rag.config.base import BaseRagConfig
from crewai.rag.chromadb.constants import (
    DEFAULT_TENANT,
    DEFAULT_DATABASE,
    DEFAULT_STORAGE_PATH,
)


warnings.filterwarnings(
    "ignore",
    message=".*Mixing V1 models and V2 models.*",
    category=UserWarning,
    module="pydantic._internal._generate_schema",
)


def _default_settings() -> Settings:
    """Create default ChromaDB settings.

    Returns:
        Settings with persistent storage and reset enabled.
    """
    return Settings(
        persist_directory=DEFAULT_STORAGE_PATH,
        allow_reset=True,
        is_persistent=True,
    )


def _default_embedding_function() -> ChromaEmbeddingFunctionWrapper:
    """Create default ChromaDB embedding function.

    Returns:
        Default embedding function using all-MiniLM-L6-v2 via ONNX.
    """
    return cast(ChromaEmbeddingFunctionWrapper, DefaultEmbeddingFunction())


@pyd_dataclass(frozen=True)
class ChromaDBConfig(BaseRagConfig):
    """Configuration for ChromaDB client."""

    provider: Literal["chromadb"] = field(default="chromadb", init=False)
    tenant: str = DEFAULT_TENANT
    database: str = DEFAULT_DATABASE
    settings: Settings = field(default_factory=_default_settings)
    embedding_function: ChromaEmbeddingFunctionWrapper = field(
        default_factory=_default_embedding_function
    )
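Because every field has a default, the config can be built bare or with targeted overrides; provider is pinned by the init=False field. A sketch (the Settings override is illustrative):

from chromadb.config import Settings

config = ChromaDBConfig()  # persistent defaults from the factories above
ephemeral = ChromaDBConfig(settings=Settings(allow_reset=True, is_persistent=False))
assert config.provider == "chromadb"  # cannot be passed to __init__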

@@ -1,10 +0,0 @@
"""Constants for ChromaDB configuration."""

import os
from typing import Final

from crewai.utilities.paths import db_storage_path

DEFAULT_TENANT: Final[str] = "default_tenant"
DEFAULT_DATABASE: Final[str] = "default_database"
DEFAULT_STORAGE_PATH: Final[str] = os.path.join(db_storage_path(), "chromadb")

@@ -1,24 +0,0 @@
"""Factory functions for creating ChromaDB clients."""

from chromadb import Client

from crewai.rag.chromadb.config import ChromaDBConfig
from crewai.rag.chromadb.client import ChromaDBClient


def create_client(config: ChromaDBConfig) -> ChromaDBClient:
    """Create a ChromaDBClient from configuration.

    Args:
        config: ChromaDB configuration object.

    Returns:
        Configured ChromaDBClient instance.
    """

    return ChromaDBClient(
        client=Client(
            settings=config.settings, tenant=config.tenant, database=config.database
        ),
        embedding_function=config.embedding_function,
    )

@@ -1,102 +0,0 @@
"""Type definitions specific to ChromaDB implementation."""

from collections.abc import Mapping
from typing import Any, NamedTuple

from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from chromadb.api import ClientAPI, AsyncClientAPI
from chromadb.api.configuration import CollectionConfigurationInterface
from chromadb.api.types import (
    CollectionMetadata,
    DataLoader,
    Embeddable,
    EmbeddingFunction as ChromaEmbeddingFunction,
    Include,
    Loadable,
    Where,
    WhereDocument,
)

from crewai.rag.core.base_client import BaseCollectionParams, BaseCollectionSearchParams

ChromaDBClientType = ClientAPI | AsyncClientAPI


class ChromaEmbeddingFunctionWrapper(ChromaEmbeddingFunction[Embeddable]):
    """Base class for ChromaDB EmbeddingFunction to work with Pydantic validation."""

    @classmethod
    def __get_pydantic_core_schema__(
        cls, _source_type: Any, _handler: GetCoreSchemaHandler
    ) -> CoreSchema:
        """Generate Pydantic core schema for ChromaDB EmbeddingFunction.

        This allows Pydantic to handle ChromaDB's EmbeddingFunction type
        without requiring arbitrary_types_allowed=True.
        """
        return core_schema.any_schema()


class PreparedDocuments(NamedTuple):
    """Prepared documents ready for ChromaDB insertion.

    Attributes:
        ids: List of document IDs
        texts: List of document texts
        metadatas: List of document metadata mappings
    """

    ids: list[str]
    texts: list[str]
    metadatas: list[Mapping[str, str | int | float | bool]]


class ExtractedSearchParams(NamedTuple):
    """Extracted search parameters for ChromaDB queries.

    Attributes:
        collection_name: Name of the collection to search
        query: Search query text
        limit: Maximum number of results
        metadata_filter: Optional metadata filter
        score_threshold: Optional minimum similarity score
        where: Optional ChromaDB where clause
        where_document: Optional ChromaDB document filter
        include: Fields to include in results
    """

    collection_name: str
    query: str
    limit: int
    metadata_filter: dict[str, Any] | None
    score_threshold: float | None
    where: Where | None
    where_document: WhereDocument | None
    include: Include


class ChromaDBCollectionCreateParams(BaseCollectionParams, total=False):
    """Parameters for creating a ChromaDB collection.

    This class extends BaseCollectionParams to include any additional
    parameters specific to ChromaDB collection creation.
    """

    configuration: CollectionConfigurationInterface
    metadata: CollectionMetadata
    embedding_function: ChromaEmbeddingFunction[Embeddable]
    data_loader: DataLoader[Loadable]
    get_or_create: bool


class ChromaDBCollectionSearchParams(BaseCollectionSearchParams, total=False):
    """Parameters for searching a ChromaDB collection.

    This class extends BaseCollectionSearchParams to include ChromaDB-specific
    search parameters like where clauses and include options.
    """

    where: Where
    where_document: WhereDocument
    include: Include

@@ -1,218 +0,0 @@
"""Utility functions for ChromaDB client implementation."""

import hashlib
from collections.abc import Mapping
from typing import Literal, TypeGuard, cast

from chromadb.api import AsyncClientAPI, ClientAPI
from chromadb.api.types import (
    Include,
    IncludeEnum,
    QueryResult,
)
from chromadb.api.models.AsyncCollection import AsyncCollection
from chromadb.api.models.Collection import Collection
from crewai.rag.chromadb.types import (
    ChromaDBClientType,
    ChromaDBCollectionSearchParams,
    ExtractedSearchParams,
    PreparedDocuments,
)
from crewai.rag.types import BaseRecord, SearchResult


def _is_sync_client(client: ChromaDBClientType) -> TypeGuard[ClientAPI]:
    """Type guard to check if the client is a synchronous ClientAPI.

    Args:
        client: The client to check.

    Returns:
        True if the client is a ClientAPI, False otherwise.
    """
    return isinstance(client, ClientAPI)


def _is_async_client(client: ChromaDBClientType) -> TypeGuard[AsyncClientAPI]:
    """Type guard to check if the client is an asynchronous AsyncClientAPI.

    Args:
        client: The client to check.

    Returns:
        True if the client is an AsyncClientAPI, False otherwise.
    """
    return isinstance(client, AsyncClientAPI)


def _prepare_documents_for_chromadb(
    documents: list[BaseRecord],
) -> PreparedDocuments:
    """Prepare documents for ChromaDB by extracting IDs, texts, and metadata.

    Args:
        documents: List of BaseRecord documents to prepare.

    Returns:
        PreparedDocuments with ids, texts, and metadatas ready for ChromaDB.
    """
    ids: list[str] = []
    texts: list[str] = []
    metadatas: list[Mapping[str, str | int | float | bool]] = []

    for doc in documents:
        if "doc_id" in doc:
            ids.append(doc["doc_id"])
        else:
            content_hash = hashlib.sha256(doc["content"].encode()).hexdigest()[:16]
            ids.append(content_hash)

        texts.append(doc["content"])
        metadata = doc.get("metadata")
        if metadata:
            if isinstance(metadata, list):
                metadatas.append(metadata[0] if metadata else {})
            else:
                metadatas.append(metadata)
        else:
            metadatas.append({})

    return PreparedDocuments(ids, texts, metadatas)
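The fallback ID above is a deterministic 16-hex-character SHA-256 prefix of the content, so re-adding identical text maps to the same ID (and upserts rather than duplicates). A standalone illustration:

import hashlib

content = "CrewAI supports ChromaDB as a RAG backend."
doc_id = hashlib.sha256(content.encode()).hexdigest()[:16]
# Identical content always yields the same doc_id.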

def _extract_search_params(
    kwargs: ChromaDBCollectionSearchParams,
) -> ExtractedSearchParams:
    """Extract search parameters from kwargs.

    Args:
        kwargs: Keyword arguments containing search parameters.

    Returns:
        ExtractedSearchParams with all extracted parameters.
    """
    return ExtractedSearchParams(
        collection_name=kwargs["collection_name"],
        query=kwargs["query"],
        limit=kwargs.get("limit", 10),
        metadata_filter=kwargs.get("metadata_filter"),
        score_threshold=kwargs.get("score_threshold"),
        where=kwargs.get("where"),
        where_document=kwargs.get("where_document"),
        include=kwargs.get(
            "include",
            [IncludeEnum.metadatas, IncludeEnum.documents, IncludeEnum.distances],
        ),
    )


def _convert_distance_to_score(
    distance: float,
    distance_metric: Literal["l2", "cosine", "ip"],
) -> float:
    """Convert ChromaDB distance to similarity score.

    Notes:
        Assuming all embeddings are unit-normalized for now, including custom embeddings.

    Args:
        distance: The distance value from ChromaDB.
        distance_metric: The distance metric used ("l2", "cosine", or "ip").

    Returns:
        Similarity score in range [0, 1] where 1 is most similar.
    """
    if distance_metric == "cosine":
        score = 1.0 - 0.5 * distance
        return max(0.0, min(1.0, score))
    raise ValueError(f"Unsupported distance metric: {distance_metric}")
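For unit vectors, ChromaDB's cosine distance is 1 - cos(theta) and ranges over [0, 2], so score = 1 - 0.5 * distance maps it onto [0, 1]. Spot checks of the cosine branch above:

for d in (0.0, 1.0, 2.0):      # identical, orthogonal, opposite vectors
    print(d, 1.0 - 0.5 * d)    # -> 1.0, 0.5, 0.0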

def _convert_chromadb_results_to_search_results(
    results: QueryResult,
    include: Include,
    distance_metric: Literal["l2", "cosine", "ip"],
    score_threshold: float | None = None,
) -> list[SearchResult]:
    """Convert ChromaDB query results to SearchResult format.

    Args:
        results: ChromaDB query results.
        include: List of fields that were included in the query.
        distance_metric: The distance metric used by the collection.
        score_threshold: Optional minimum similarity score (0-1) for results.

    Returns:
        List of SearchResult dicts containing id, content, metadata, and score.
    """
    search_results: list[SearchResult] = []

    include_strings = [item.value for item in include]

    ids = results["ids"][0] if results.get("ids") else []

    documents_list = results.get("documents")
    documents = (
        documents_list[0] if documents_list and "documents" in include_strings else []
    )

    metadatas_list = results.get("metadatas")
    metadatas = (
        metadatas_list[0] if metadatas_list and "metadatas" in include_strings else []
    )

    distances_list = results.get("distances")
    distances = (
        distances_list[0] if distances_list and "distances" in include_strings else []
    )

    for i, doc_id in enumerate(ids):
        if not distances or i >= len(distances):
            continue

        distance = distances[i]
        score = _convert_distance_to_score(
            distance=distance, distance_metric=distance_metric
        )

        if score_threshold and score < score_threshold:
            continue

        result: SearchResult = {
            "id": doc_id,
            "content": documents[i] if documents and i < len(documents) else "",
            "metadata": dict(metadatas[i]) if metadatas and i < len(metadatas) else {},
            "score": score,
        }
        search_results.append(result)

    return search_results


def _process_query_results(
    collection: Collection | AsyncCollection,
    results: QueryResult,
    params: ExtractedSearchParams,
) -> list[SearchResult]:
    """Process ChromaDB query results and convert to SearchResult format.

    Args:
        collection: The ChromaDB collection (sync or async) that was queried.
        results: Raw query results from ChromaDB.
        params: The search parameters used for the query.

    Returns:
        List of SearchResult dicts containing id, content, metadata, and score.
    """

    distance_metric = cast(
        Literal["l2", "cosine", "ip"],
        collection.metadata.get("hnsw:space", "l2") if collection.metadata else "l2",
    )

    return _convert_chromadb_results_to_search_results(
        results=results,
        include=params.include,
        distance_metric=distance_metric,
        score_threshold=params.score_threshold,
    )

@@ -1 +0,0 @@
"""RAG client configuration management using ContextVars for thread-safe provider switching."""

@@ -1,16 +0,0 @@
"""Base configuration class for RAG providers."""

from dataclasses import field
from typing import Any

from pydantic.dataclasses import dataclass as pyd_dataclass

from crewai.rag.config.optional_imports.types import SupportedProvider


@pyd_dataclass(frozen=True)
class BaseRagConfig:
    """Base class for RAG configuration with Pydantic serialization support."""

    provider: SupportedProvider = field(init=False)
    embedding_function: Any | None = field(default=None)
|
||||
@@ -1,8 +0,0 @@
"""Constants for RAG configuration."""

from typing import Final

DISCRIMINATOR: Final[str] = "provider"

DEFAULT_RAG_CONFIG_PATH: Final[str] = "crewai.rag.chromadb.config"
DEFAULT_RAG_CONFIG_CLASS: Final[str] = "ChromaDBConfig"
@@ -1 +0,0 @@
"""Optional imports for RAG configuration providers."""
@@ -1,26 +0,0 @@
"""Base classes for missing provider configurations."""

from typing import Literal
from dataclasses import field

from pydantic import ConfigDict
from pydantic.dataclasses import dataclass as pyd_dataclass


@pyd_dataclass(config=ConfigDict(extra="forbid"))
class _MissingProvider:
    """Base class for missing provider configurations.

    Raises RuntimeError when instantiated to indicate missing dependencies.
    """

    provider: Literal["chromadb", "qdrant", "__missing__"] = field(
        default="__missing__"
    )

    def __post_init__(self) -> None:
        """Raises error indicating the provider is not installed."""
        raise RuntimeError(
            f"provider '{self.provider}' requested but not installed. "
            f"Install the extra: `uv add crewai'[{self.provider}]'`."
        )
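For illustration, instantiating one of the placeholder configs built on this base fails loudly at construction time (sketch, assuming the qdrant extra is absent):

# Sketch: a missing-provider placeholder raises as soon as it is constructed.
try:
    MissingQdrantConfig()  # subclass defined later in this diff
except RuntimeError as exc:
    print(exc)  # "provider 'qdrant' requested but not installed. ..."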
@@ -1,27 +0,0 @@
"""Protocol definitions for RAG factory modules."""

from __future__ import annotations

from typing import Protocol, TYPE_CHECKING

if TYPE_CHECKING:
    from crewai.rag.chromadb.client import ChromaDBClient
    from crewai.rag.chromadb.config import ChromaDBConfig
    from crewai.rag.qdrant.client import QdrantClient
    from crewai.rag.qdrant.config import QdrantConfig


class ChromaFactoryModule(Protocol):
    """Protocol for ChromaDB factory module."""

    def create_client(self, config: ChromaDBConfig) -> ChromaDBClient:
        """Creates a ChromaDB client from configuration."""
        ...


class QdrantFactoryModule(Protocol):
    """Protocol for Qdrant factory module."""

    def create_client(self, config: QdrantConfig) -> QdrantClient:
        """Creates a Qdrant client from configuration."""
        ...
@@ -1,22 +0,0 @@
"""Provider-specific missing configuration classes."""

from typing import Literal
from dataclasses import field
from pydantic import ConfigDict
from pydantic.dataclasses import dataclass as pyd_dataclass

from crewai.rag.config.optional_imports.base import _MissingProvider


@pyd_dataclass(config=ConfigDict(extra="forbid"))
class MissingChromaDBConfig(_MissingProvider):
    """Placeholder for missing ChromaDB configuration."""

    provider: Literal["chromadb"] = field(default="chromadb")


@pyd_dataclass(config=ConfigDict(extra="forbid"))
class MissingQdrantConfig(_MissingProvider):
    """Placeholder for missing Qdrant configuration."""

    provider: Literal["qdrant"] = field(default="qdrant")
@@ -1,8 +0,0 @@
"""Type definitions for optional imports."""

from typing import Annotated, Literal

SupportedProvider = Annotated[
    Literal["chromadb", "qdrant"],
    "Supported RAG provider types, add providers here as they become available",
]
@@ -1,34 +0,0 @@
"""Type definitions for RAG configuration."""

from typing import Annotated, TypeAlias, TYPE_CHECKING
from pydantic import Field

from crewai.rag.config.constants import DISCRIMINATOR

# Linter freaks out on conditional imports, assigning in the type checking fixes it
if TYPE_CHECKING:
    from crewai.rag.chromadb.config import ChromaDBConfig as ChromaDBConfig_

    ChromaDBConfig = ChromaDBConfig_
    from crewai.rag.qdrant.config import QdrantConfig as QdrantConfig_

    QdrantConfig = QdrantConfig_
else:
    try:
        from crewai.rag.chromadb.config import ChromaDBConfig
    except ImportError:
        from crewai.rag.config.optional_imports.providers import (
            MissingChromaDBConfig as ChromaDBConfig,
        )

    try:
        from crewai.rag.qdrant.config import QdrantConfig
    except ImportError:
        from crewai.rag.config.optional_imports.providers import (
            MissingQdrantConfig as QdrantConfig,
        )

SupportedProviderConfig: TypeAlias = ChromaDBConfig | QdrantConfig
RagConfigType: TypeAlias = Annotated[
    SupportedProviderConfig, Field(discriminator=DISCRIMINATOR)
]
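A sketch of how the discriminated union might be exercised with pydantic's TypeAdapter (the adapter usage is an assumption for illustration; only the type aliases above come from the diff, and the init=False provider field may need special handling in practice):

# Sketch: pydantic picks the union member via the "provider" discriminator.
from pydantic import TypeAdapter

adapter = TypeAdapter(RagConfigType)
cfg = adapter.validate_python({"provider": "chromadb"})  # -> ChromaDBConfig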
@@ -1,86 +0,0 @@
"""RAG client configuration utilities."""

from contextvars import ContextVar

from pydantic import BaseModel, Field

from crewai.utilities.import_utils import require
from crewai.rag.core.base_client import BaseClient
from crewai.rag.config.types import RagConfigType
from crewai.rag.config.constants import (
    DEFAULT_RAG_CONFIG_PATH,
    DEFAULT_RAG_CONFIG_CLASS,
)
from crewai.rag.factory import create_client


class RagContext(BaseModel):
    """Context holding RAG configuration and client instance."""

    config: RagConfigType = Field(..., description="RAG provider configuration")
    client: BaseClient | None = Field(
        default=None, description="Instantiated RAG client"
    )


_rag_context: ContextVar[RagContext | None] = ContextVar("_rag_context", default=None)


def set_rag_config(config: RagConfigType) -> None:
    """Set global RAG client configuration and instantiate the client.

    Args:
        config: The RAG client configuration (ChromaDBConfig).
    """
    client = create_client(config)
    context = RagContext(config=config, client=client)
    _rag_context.set(context)


def get_rag_config() -> RagConfigType:
    """Get current RAG configuration.

    Returns:
        The current RAG configuration object.
    """
    context = _rag_context.get()
    if context is None:
        module = require(DEFAULT_RAG_CONFIG_PATH, purpose="RAG configuration")
        config_class = getattr(module, DEFAULT_RAG_CONFIG_CLASS)
        default_config = config_class()
        set_rag_config(default_config)
        context = _rag_context.get()

    if context is None or context.config is None:
        raise ValueError(
            "RAG configuration is not set. Please set the RAG config first."
        )

    return context.config


def get_rag_client() -> BaseClient:
    """Get the current RAG client instance.

    Returns:
        The current RAG client, creating one if needed.
    """
    context = _rag_context.get()
    if context is None:
        get_rag_config()
        context = _rag_context.get()

    if context and context.client is None:
        context.client = create_client(context.config)

    if context is None or context.client is None:
        raise ValueError(
            "RAG client is not configured. Please set the RAG config first."
        )

    return context.client


def clear_rag_config() -> None:
    """Clear the current RAG configuration and client, reverting to defaults."""
    _rag_context.set(None)
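A usage sketch for the context utilities above (the utility-module import path is an assumption; ChromaDBConfig requires the chromadb extra and its path comes from DEFAULT_RAG_CONFIG_PATH):

# Sketch: set the context-wide config once, then reuse the cached client.
from crewai.rag.chromadb.config import ChromaDBConfig

set_rag_config(ChromaDBConfig())   # builds and caches the client for this context
client = get_rag_client()          # returns the cached BaseClient
client.create_collection(collection_name="docs")
clear_rag_config()                 # next get_rag_config() falls back to the default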
@@ -1,10 +1,8 @@
"""Protocol for vector database client implementations."""

from abc import abstractmethod
-from typing import Any, Protocol, runtime_checkable, Annotated
-from typing_extensions import Unpack, Required, TypedDict
-from pydantic import GetCoreSchemaHandler
-from pydantic_core import CoreSchema, core_schema
+from typing import Any, Protocol, runtime_checkable, TypedDict, Annotated
+from typing_extensions import Unpack, Required


from crewai.rag.types import (
@@ -98,17 +96,6 @@ class BaseClient(Protocol):
    client: Any
    embedding_function: EmbeddingFunction

-    @classmethod
-    def __get_pydantic_core_schema__(
-        cls, _source_type: Any, _handler: GetCoreSchemaHandler
-    ) -> CoreSchema:
-        """Generate Pydantic core schema for BaseClient Protocol.
-
-        This allows the Protocol to be used in Pydantic models without
-        requiring arbitrary_types_allowed=True.
-        """
-        return core_schema.any_schema()
-
    @abstractmethod
    def create_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
        """Create a new collection/index in the vector database.
src/crewai/rag/core/base_provider.py (new file, 30 lines)
@@ -0,0 +1,30 @@
"""Base provider protocol for vector database client creation."""

from abc import ABC
from typing import Any, Protocol, runtime_checkable, Union
from pydantic import BaseModel, Field

from crewai.rag.types import EmbeddingFunction
from crewai.rag.embeddings.types import EmbeddingOptions


class BaseProviderOptions(BaseModel, ABC):
    """Base configuration for all provider options."""

    client_type: str = Field(..., description="Type of client to create")
    embedding_config: Union[EmbeddingOptions, EmbeddingFunction, None] = Field(
        default=None,
        description="Embedding configuration - either options for built-in providers or a custom function",
    )
    options: Any = Field(
        default=None, description="Additional provider-specific options"
    )


@runtime_checkable
class BaseProvider(Protocol):
    """Protocol for vector database client providers."""

    def __call__(self, options: BaseProviderOptions) -> Any:
        """Create and return a configured client instance."""
        ...
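A minimal conforming implementation sketch (the class and its toy return value are hypothetical, not from this diff):

# Sketch: any object callable with BaseProviderOptions satisfies the protocol.
class EchoProvider:
    def __call__(self, options: BaseProviderOptions) -> dict[str, Any]:
        return {"client_type": options.client_type}  # toy "client"

provider: BaseProvider = EchoProvider()  # also passes the runtime_checkable isinstance check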
@@ -1,26 +0,0 @@
"""Core exceptions for RAG module."""


class ClientMethodMismatchError(TypeError):
    """Raised when a method is called with the wrong client type.

    Typically used when a sync method is called with an async client,
    or vice versa.
    """

    def __init__(
        self, method_name: str, expected_client: str, alt_method: str, alt_client: str
    ) -> None:
        """Create a ClientMethodMismatchError.

        Args:
            method_name: Method that was called incorrectly.
            expected_client: Required client type.
            alt_method: Suggested alternative method.
            alt_client: Client type for the alternative method.
        """
        message = (
            f"Method {method_name}() requires a {expected_client}. "
            f"Use {alt_method}() for {alt_client}."
        )
        super().__init__(message)
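Given the message template above, an instance renders like this (worked example using values from the Qdrant client later in this diff):

err = ClientMethodMismatchError(
    method_name="search",
    expected_client="QdrantClient",
    alt_method="asearch",
    alt_client="AsyncQdrantClient",
)
assert str(err) == (
    "Method search() requires a QdrantClient. Use asearch() for AsyncQdrantClient."
)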
@@ -1,45 +0,0 @@
"""Factory functions for creating RAG clients from configuration."""

from typing import cast

from crewai.rag.config.optional_imports.protocols import (
    ChromaFactoryModule,
    QdrantFactoryModule,
)
from crewai.rag.core.base_client import BaseClient
from crewai.rag.config.types import RagConfigType
from crewai.utilities.import_utils import require


def create_client(config: RagConfigType) -> BaseClient:
    """Create a client from configuration using the appropriate factory.

    Args:
        config: The RAG client configuration.

    Returns:
        The created client instance.

    Raises:
        ValueError: If the configuration provider is not supported.
    """
    if config.provider == "chromadb":
        chromadb_mod = cast(
            ChromaFactoryModule,
            require(
                "crewai.rag.chromadb.factory",
                purpose="The 'chromadb' provider",
            ),
        )
        return chromadb_mod.create_client(config)

    if config.provider == "qdrant":
        qdrant_mod = cast(
            QdrantFactoryModule,
            require(
                "crewai.rag.qdrant.factory",
                purpose="The 'qdrant' provider",
            ),
        )
        return qdrant_mod.create_client(config)
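A dispatch sketch (assuming the chromadb extra is installed, so ChromaDBConfig is the real class rather than the placeholder):

config = ChromaDBConfig()        # provider == "chromadb"
client = create_client(config)   # lazily imports crewai.rag.chromadb.factory

As reconstructed, a config whose provider matches neither branch falls off the end and returns None, despite the docstring's promised ValueError.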
@@ -1 +0,0 @@
"""Qdrant vector database client implementation."""
@@ -1,501 +0,0 @@
"""Qdrant client implementation."""

from typing import Any, cast

from typing_extensions import Unpack

from crewai.rag.core.base_client import (
    BaseClient,
    BaseCollectionParams,
    BaseCollectionAddParams,
    BaseCollectionSearchParams,
)
from crewai.rag.core.exceptions import ClientMethodMismatchError
from crewai.rag.qdrant.types import (
    AsyncEmbeddingFunction,
    EmbeddingFunction,
    QdrantClientType,
    QdrantCollectionCreateParams,
)
from crewai.rag.qdrant.utils import (
    _is_async_client,
    _is_async_embedding_function,
    _is_sync_client,
    _create_point_from_document,
    _get_collection_params,
    _prepare_search_params,
    _process_search_results,
)
from crewai.rag.types import SearchResult


class QdrantClient(BaseClient):
    """Qdrant implementation of the BaseClient protocol.

    Provides vector database operations for Qdrant, supporting both
    synchronous and asynchronous clients.

    Attributes:
        client: Qdrant client instance (QdrantClient or AsyncQdrantClient).
        embedding_function: Function to generate embeddings for documents.
    """

    def __init__(
        self,
        client: QdrantClientType,
        embedding_function: EmbeddingFunction | AsyncEmbeddingFunction,
    ) -> None:
        """Initialize QdrantClient with client and embedding function.

        Args:
            client: Pre-configured Qdrant client instance.
            embedding_function: Embedding function for text to vector conversion.
        """
        self.client = client
        self.embedding_function = embedding_function

    def create_collection(self, **kwargs: Unpack[QdrantCollectionCreateParams]) -> None:
        """Create a new collection in Qdrant.

        Keyword Args:
            collection_name: Name of the collection to create. Must be unique.
            vectors_config: Optional vector configuration. Defaults to 384 dimensions with cosine distance.
            sparse_vectors_config: Optional sparse vector configuration.
            shard_number: Optional number of shards.
            replication_factor: Optional replication factor.
            write_consistency_factor: Optional write consistency factor.
            on_disk_payload: Optional flag to store payload on disk.
            hnsw_config: Optional HNSW index configuration.
            optimizers_config: Optional optimizer configuration.
            wal_config: Optional write-ahead log configuration.
            quantization_config: Optional quantization configuration.
            init_from: Optional collection to initialize from.
            timeout: Optional timeout for the operation.

        Raises:
            ValueError: If collection with the same name already exists.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_sync_client(self.client):
            raise ClientMethodMismatchError(
                method_name="create_collection",
                expected_client="QdrantClient",
                alt_method="acreate_collection",
                alt_client="AsyncQdrantClient",
            )

        collection_name = kwargs["collection_name"]

        if self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' already exists")

        params = _get_collection_params(kwargs)
        self.client.create_collection(**params)

    async def acreate_collection(
        self, **kwargs: Unpack[QdrantCollectionCreateParams]
    ) -> None:
        """Create a new collection in Qdrant asynchronously.

        Keyword Args:
            collection_name: Name of the collection to create. Must be unique.
            vectors_config: Optional vector configuration. Defaults to 384 dimensions with cosine distance.
            sparse_vectors_config: Optional sparse vector configuration.
            shard_number: Optional number of shards.
            replication_factor: Optional replication factor.
            write_consistency_factor: Optional write consistency factor.
            on_disk_payload: Optional flag to store payload on disk.
            hnsw_config: Optional HNSW index configuration.
            optimizers_config: Optional optimizer configuration.
            wal_config: Optional write-ahead log configuration.
            quantization_config: Optional quantization configuration.
            init_from: Optional collection to initialize from.
            timeout: Optional timeout for the operation.

        Raises:
            ValueError: If collection with the same name already exists.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_async_client(self.client):
            raise ClientMethodMismatchError(
                method_name="acreate_collection",
                expected_client="AsyncQdrantClient",
                alt_method="create_collection",
                alt_client="QdrantClient",
            )

        collection_name = kwargs["collection_name"]

        if await self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' already exists")

        params = _get_collection_params(kwargs)
        await self.client.create_collection(**params)

    def get_or_create_collection(
        self, **kwargs: Unpack[QdrantCollectionCreateParams]
    ) -> Any:
        """Get an existing collection or create it if it doesn't exist.

        Keyword Args:
            collection_name: Name of the collection to get or create.
            vectors_config: Optional vector configuration. Defaults to 384 dimensions with cosine distance.
            sparse_vectors_config: Optional sparse vector configuration.
            shard_number: Optional number of shards.
            replication_factor: Optional replication factor.
            write_consistency_factor: Optional write consistency factor.
            on_disk_payload: Optional flag to store payload on disk.
            hnsw_config: Optional HNSW index configuration.
            optimizers_config: Optional optimizer configuration.
            wal_config: Optional write-ahead log configuration.
            quantization_config: Optional quantization configuration.
            init_from: Optional collection to initialize from.
            timeout: Optional timeout for the operation.

        Returns:
            Collection info dict with name and other metadata.

        Raises:
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_sync_client(self.client):
            raise ClientMethodMismatchError(
                method_name="get_or_create_collection",
                expected_client="QdrantClient",
                alt_method="aget_or_create_collection",
                alt_client="AsyncQdrantClient",
            )

        collection_name = kwargs["collection_name"]

        if self.client.collection_exists(collection_name):
            return self.client.get_collection(collection_name)

        params = _get_collection_params(kwargs)
        self.client.create_collection(**params)

        return self.client.get_collection(collection_name)

    async def aget_or_create_collection(
        self, **kwargs: Unpack[QdrantCollectionCreateParams]
    ) -> Any:
        """Get an existing collection or create it if it doesn't exist asynchronously.

        Keyword Args:
            collection_name: Name of the collection to get or create.
            vectors_config: Optional vector configuration. Defaults to 384 dimensions with cosine distance.
            sparse_vectors_config: Optional sparse vector configuration.
            shard_number: Optional number of shards.
            replication_factor: Optional replication factor.
            write_consistency_factor: Optional write consistency factor.
            on_disk_payload: Optional flag to store payload on disk.
            hnsw_config: Optional HNSW index configuration.
            optimizers_config: Optional optimizer configuration.
            wal_config: Optional write-ahead log configuration.
            quantization_config: Optional quantization configuration.
            init_from: Optional collection to initialize from.
            timeout: Optional timeout for the operation.

        Returns:
            Collection info dict with name and other metadata.

        Raises:
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_async_client(self.client):
            raise ClientMethodMismatchError(
                method_name="aget_or_create_collection",
                expected_client="AsyncQdrantClient",
                alt_method="get_or_create_collection",
                alt_client="QdrantClient",
            )

        collection_name = kwargs["collection_name"]

        if await self.client.collection_exists(collection_name):
            return await self.client.get_collection(collection_name)

        params = _get_collection_params(kwargs)
        await self.client.create_collection(**params)

        return await self.client.get_collection(collection_name)

    def add_documents(self, **kwargs: Unpack[BaseCollectionAddParams]) -> None:
        """Add documents with their embeddings to a collection.

        Keyword Args:
            collection_name: The name of the collection to add documents to.
            documents: List of BaseRecord dicts containing document data.

        Raises:
            ValueError: If collection doesn't exist or documents list is empty.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_sync_client(self.client):
            raise ClientMethodMismatchError(
                method_name="add_documents",
                expected_client="QdrantClient",
                alt_method="aadd_documents",
                alt_client="AsyncQdrantClient",
            )

        collection_name = kwargs["collection_name"]
        documents = kwargs["documents"]

        if not documents:
            raise ValueError("Documents list cannot be empty")

        if not self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' does not exist")

        points = []
        for doc in documents:
            if _is_async_embedding_function(self.embedding_function):
                raise TypeError(
                    "Async embedding function cannot be used with sync add_documents. "
                    "Use aadd_documents instead."
                )
            sync_fn = cast(EmbeddingFunction, self.embedding_function)
            embedding = sync_fn(doc["content"])
            point = _create_point_from_document(doc, embedding)
            points.append(point)

        self.client.upsert(collection_name=collection_name, points=points)

    async def aadd_documents(self, **kwargs: Unpack[BaseCollectionAddParams]) -> None:
        """Add documents with their embeddings to a collection asynchronously.

        Keyword Args:
            collection_name: The name of the collection to add documents to.
            documents: List of BaseRecord dicts containing document data.

        Raises:
            ValueError: If collection doesn't exist or documents list is empty.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_async_client(self.client):
            raise ClientMethodMismatchError(
                method_name="aadd_documents",
                expected_client="AsyncQdrantClient",
                alt_method="add_documents",
                alt_client="QdrantClient",
            )

        collection_name = kwargs["collection_name"]
        documents = kwargs["documents"]

        if not documents:
            raise ValueError("Documents list cannot be empty")

        if not await self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' does not exist")

        points = []
        for doc in documents:
            if _is_async_embedding_function(self.embedding_function):
                async_fn = cast(AsyncEmbeddingFunction, self.embedding_function)
                embedding = await async_fn(doc["content"])
            else:
                sync_fn = cast(EmbeddingFunction, self.embedding_function)
                embedding = sync_fn(doc["content"])
            point = _create_point_from_document(doc, embedding)
            points.append(point)

        await self.client.upsert(collection_name=collection_name, points=points)

    def search(
        self, **kwargs: Unpack[BaseCollectionSearchParams]
    ) -> list[SearchResult]:
        """Search for similar documents using a query.

        Keyword Args:
            collection_name: Name of the collection to search in.
            query: The text query to search for.
            limit: Maximum number of results to return (default: 10).
            metadata_filter: Optional filter for metadata fields.
            score_threshold: Optional minimum similarity score (0-1) for results.

        Returns:
            List of SearchResult dicts containing id, content, metadata, and score.

        Raises:
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_sync_client(self.client):
            raise ClientMethodMismatchError(
                method_name="search",
                expected_client="QdrantClient",
                alt_method="asearch",
                alt_client="AsyncQdrantClient",
            )

        collection_name = kwargs["collection_name"]
        query = kwargs["query"]
        limit = kwargs.get("limit", 10)
        metadata_filter = kwargs.get("metadata_filter")
        score_threshold = kwargs.get("score_threshold")

        if not self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' does not exist")

        if _is_async_embedding_function(self.embedding_function):
            raise TypeError(
                "Async embedding function cannot be used with sync search. "
                "Use asearch instead."
            )
        sync_fn = cast(EmbeddingFunction, self.embedding_function)
        query_embedding = sync_fn(query)

        search_kwargs = _prepare_search_params(
            collection_name=collection_name,
            query_embedding=query_embedding,
            limit=limit,
            score_threshold=score_threshold,
            metadata_filter=metadata_filter,
        )

        response = self.client.query_points(**search_kwargs)
        return _process_search_results(response)

    async def asearch(
        self, **kwargs: Unpack[BaseCollectionSearchParams]
    ) -> list[SearchResult]:
        """Search for similar documents using a query asynchronously.

        Keyword Args:
            collection_name: Name of the collection to search in.
            query: The text query to search for.
            limit: Maximum number of results to return (default: 10).
            metadata_filter: Optional filter for metadata fields.
            score_threshold: Optional minimum similarity score (0-1) for results.

        Returns:
            List of SearchResult dicts containing id, content, metadata, and score.

        Raises:
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_async_client(self.client):
            raise ClientMethodMismatchError(
                method_name="asearch",
                expected_client="AsyncQdrantClient",
                alt_method="search",
                alt_client="QdrantClient",
            )

        collection_name = kwargs["collection_name"]
        query = kwargs["query"]
        limit = kwargs.get("limit", 10)
        metadata_filter = kwargs.get("metadata_filter")
        score_threshold = kwargs.get("score_threshold")

        if not await self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' does not exist")

        if _is_async_embedding_function(self.embedding_function):
            async_fn = cast(AsyncEmbeddingFunction, self.embedding_function)
            query_embedding = await async_fn(query)
        else:
            sync_fn = cast(EmbeddingFunction, self.embedding_function)
            query_embedding = sync_fn(query)

        search_kwargs = _prepare_search_params(
            collection_name=collection_name,
            query_embedding=query_embedding,
            limit=limit,
            score_threshold=score_threshold,
            metadata_filter=metadata_filter,
        )

        response = await self.client.query_points(**search_kwargs)
        return _process_search_results(response)

    def delete_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
        """Delete a collection and all its data.

        Keyword Args:
            collection_name: Name of the collection to delete.

        Raises:
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_sync_client(self.client):
            raise ClientMethodMismatchError(
                method_name="delete_collection",
                expected_client="QdrantClient",
                alt_method="adelete_collection",
                alt_client="AsyncQdrantClient",
            )

        collection_name = kwargs["collection_name"]

        if not self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' does not exist")

        self.client.delete_collection(collection_name=collection_name)

    async def adelete_collection(self, **kwargs: Unpack[BaseCollectionParams]) -> None:
        """Delete a collection and all its data asynchronously.

        Keyword Args:
            collection_name: Name of the collection to delete.

        Raises:
            ValueError: If collection doesn't exist.
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_async_client(self.client):
            raise ClientMethodMismatchError(
                method_name="adelete_collection",
                expected_client="AsyncQdrantClient",
                alt_method="delete_collection",
                alt_client="QdrantClient",
            )

        collection_name = kwargs["collection_name"]

        if not await self.client.collection_exists(collection_name):
            raise ValueError(f"Collection '{collection_name}' does not exist")

        await self.client.delete_collection(collection_name=collection_name)

    def reset(self) -> None:
        """Reset the vector database by deleting all collections and data.

        Raises:
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_sync_client(self.client):
            raise ClientMethodMismatchError(
                method_name="reset",
                expected_client="QdrantClient",
                alt_method="areset",
                alt_client="AsyncQdrantClient",
            )

        collections_response = self.client.get_collections()

        for collection in collections_response.collections:
            self.client.delete_collection(collection_name=collection.name)

    async def areset(self) -> None:
        """Reset the vector database by deleting all collections and data asynchronously.

        Raises:
            ConnectionError: If unable to connect to Qdrant server.
        """
        if not _is_async_client(self.client):
            raise ClientMethodMismatchError(
                method_name="areset",
                expected_client="AsyncQdrantClient",
                alt_method="reset",
                alt_client="QdrantClient",
            )

        collections_response = await self.client.get_collections()

        for collection in collections_response.collections:
            await self.client.delete_collection(collection_name=collection.name)
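An end-to-end sync sketch (the in-memory Qdrant instance and the constant-vector embedder are illustrative assumptions; the 384-dim cosine default matches DEFAULT_VECTOR_PARAMS below):

# Sketch: sync flow against an in-memory Qdrant instance.
from qdrant_client import QdrantClient as RawQdrantClient

raw = RawQdrantClient(location=":memory:")
client = QdrantClient(client=raw, embedding_function=lambda text: [0.1] * 384)
client.create_collection(collection_name="demo")  # 384-dim cosine by default
client.add_documents(
    collection_name="demo",
    documents=[{"content": "hello world", "metadata": {"lang": "en"}}],
)
hits = client.search(collection_name="demo", query="hello", limit=5)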
@@ -1,54 +0,0 @@
"""Qdrant configuration model."""

from dataclasses import field
from typing import Literal, cast
from pydantic.dataclasses import dataclass as pyd_dataclass

from crewai.rag.config.base import BaseRagConfig
from crewai.rag.qdrant.types import QdrantClientParams, QdrantEmbeddingFunctionWrapper
from crewai.rag.qdrant.constants import DEFAULT_EMBEDDING_MODEL, DEFAULT_STORAGE_PATH


def _default_options() -> QdrantClientParams:
    """Create default Qdrant client options.

    Returns:
        Default options with file-based storage.
    """
    return QdrantClientParams(path=DEFAULT_STORAGE_PATH)


def _default_embedding_function() -> QdrantEmbeddingFunctionWrapper:
    """Create default Qdrant embedding function.

    Returns:
        Default embedding function using fastembed with all-MiniLM-L6-v2.
    """
    from fastembed import TextEmbedding

    model = TextEmbedding(model_name=DEFAULT_EMBEDDING_MODEL)

    def embed_fn(text: str) -> list[float]:
        """Embed a single text string.

        Args:
            text: Text to embed.

        Returns:
            Embedding vector as list of floats.
        """
        embeddings = list(model.embed([text]))
        return embeddings[0].tolist() if embeddings else []

    return cast(QdrantEmbeddingFunctionWrapper, embed_fn)


@pyd_dataclass(frozen=True)
class QdrantConfig(BaseRagConfig):
    """Configuration for Qdrant client."""

    provider: Literal["qdrant"] = field(default="qdrant", init=False)
    options: QdrantClientParams = field(default_factory=_default_options)
    embedding_function: QdrantEmbeddingFunctionWrapper = field(
        default_factory=_default_embedding_function
    )
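A sketch of overriding the defaults (the dict literal and the toy embedder are assumptions; QdrantClientParams is a TypedDict, so a plain dict is accepted, and the lambda passes validation because the wrapper's pydantic schema accepts anything):

cfg = QdrantConfig(
    options={"path": "/tmp/qdrant-demo"},          # file-backed local storage
    embedding_function=lambda text: [0.0] * 384,   # replaces the fastembed default
)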
@@ -1,12 +0,0 @@
"""Constants for Qdrant implementation."""

import os
from typing import Final

from qdrant_client.models import Distance, VectorParams

from crewai.utilities.paths import db_storage_path

DEFAULT_VECTOR_PARAMS: Final = VectorParams(size=384, distance=Distance.COSINE)
DEFAULT_EMBEDDING_MODEL: Final[str] = "sentence-transformers/all-MiniLM-L6-v2"
DEFAULT_STORAGE_PATH: Final[str] = os.path.join(db_storage_path(), "qdrant")
@@ -1,21 +0,0 @@
"""Factory functions for creating Qdrant clients from configuration."""

from qdrant_client import QdrantClient as SyncQdrantClientBase
from crewai.rag.qdrant.client import QdrantClient
from crewai.rag.qdrant.config import QdrantConfig


def create_client(config: QdrantConfig) -> QdrantClient:
    """Create a Qdrant client from configuration.

    Args:
        config: The Qdrant configuration.

    Returns:
        A configured QdrantClient instance.
    """
    qdrant_client = SyncQdrantClientBase(**config.options)
    return QdrantClient(
        client=qdrant_client, embedding_function=config.embedding_function
    )
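Usage sketch for the factory above (the path value is illustrative):

client = create_client(QdrantConfig(options={"path": "/tmp/qdrant-demo"}))
# equivalent to wrapping SyncQdrantClientBase(path="/tmp/qdrant-demo") with
# the config's embedding function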
@@ -1,155 +0,0 @@
"""Type definitions specific to Qdrant implementation."""

from collections.abc import Awaitable, Callable
from typing import Annotated, Any, Protocol, TypeAlias
from typing_extensions import NotRequired, TypedDict

import numpy as np
from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from qdrant_client import AsyncQdrantClient, QdrantClient as SyncQdrantClient
from qdrant_client.models import (
    FieldCondition,
    Filter,
    HasIdCondition,
    HasVectorCondition,
    HnswConfigDiff,
    InitFrom,
    IsEmptyCondition,
    IsNullCondition,
    NestedCondition,
    OptimizersConfigDiff,
    QuantizationConfig,
    ShardingMethod,
    SparseVectorsConfig,
    VectorsConfig,
    WalConfigDiff,
)

from crewai.rag.core.base_client import BaseCollectionParams

QdrantClientType = SyncQdrantClient | AsyncQdrantClient

QueryEmbedding: TypeAlias = list[float] | np.ndarray[Any, np.dtype[np.floating[Any]]]

BasicConditions = FieldCondition | IsEmptyCondition | IsNullCondition
StructuralConditions = HasIdCondition | HasVectorCondition | NestedCondition
FilterCondition = BasicConditions | StructuralConditions | Filter

MetadataFilterValue = bool | int | str
MetadataFilter = dict[str, MetadataFilterValue]


class EmbeddingFunction(Protocol):
    """Protocol for embedding functions that convert text to vectors."""

    def __call__(self, text: str) -> QueryEmbedding:
        """Convert text to embedding vector.

        Args:
            text: Input text to embed.

        Returns:
            Embedding vector as list of floats or numpy array.
        """
        ...


class QdrantEmbeddingFunctionWrapper(EmbeddingFunction):
    """Base class for Qdrant EmbeddingFunction to work with Pydantic validation."""

    @classmethod
    def __get_pydantic_core_schema__(
        cls, _source_type: Any, _handler: GetCoreSchemaHandler
    ) -> CoreSchema:
        """Generate Pydantic core schema for Qdrant EmbeddingFunction.

        This allows Pydantic to handle Qdrant's EmbeddingFunction type
        without requiring arbitrary_types_allowed=True.
        """
        return core_schema.any_schema()


class AsyncEmbeddingFunction(Protocol):
    """Protocol for async embedding functions that convert text to vectors."""

    async def __call__(self, text: str) -> QueryEmbedding:
        """Convert text to embedding vector asynchronously.

        Args:
            text: Input text to embed.

        Returns:
            Embedding vector as list of floats or numpy array.
        """
        ...


class QdrantClientParams(TypedDict, total=False):
    """Parameters for QdrantClient initialization.

    Notes:
        Need to implement in factory or remove.
    """

    location: str | None
    url: str | None
    port: int
    grpc_port: int
    prefer_grpc: bool
    https: bool | None
    api_key: str | None
    prefix: str | None
    timeout: int | None
    host: str | None
    path: str | None
    force_disable_check_same_thread: bool
    grpc_options: dict[str, Any] | None
    auth_token_provider: Callable[[], str] | Callable[[], Awaitable[str]] | None
    cloud_inference: bool
    local_inference_batch_size: int | None
    check_compatibility: bool


class CommonCreateFields(TypedDict, total=False):
    """Fields shared between high-level and direct create_collection params."""

    vectors_config: VectorsConfig
    sparse_vectors_config: SparseVectorsConfig
    shard_number: Annotated[int, "Number of shards (default: 1)"]
    sharding_method: ShardingMethod
    replication_factor: Annotated[int, "Number of replicas per shard (default: 1)"]
    write_consistency_factor: Annotated[int, "Await N replicas on write (default: 1)"]
    on_disk_payload: Annotated[bool, "Store payload on disk instead of RAM"]
    hnsw_config: HnswConfigDiff
    optimizers_config: OptimizersConfigDiff
    wal_config: WalConfigDiff
    quantization_config: QuantizationConfig
    init_from: InitFrom | str
    timeout: Annotated[int, "Operation timeout in seconds"]


class QdrantCollectionCreateParams(
    BaseCollectionParams, CommonCreateFields, total=False
):
    """High-level parameters for creating a Qdrant collection."""

    pass


class CreateCollectionParams(CommonCreateFields, total=False):
    """Parameters for qdrant_client.create_collection."""

    collection_name: str


class PreparedSearchParams(TypedDict):
    """Type definition for prepared Qdrant search parameters."""

    collection_name: str
    query: list[float]
    limit: Annotated[int, "Max results to return"]
    with_payload: Annotated[bool, "Include payload in results"]
    with_vectors: Annotated[bool, "Include vectors in results"]
    score_threshold: NotRequired[Annotated[float, "Min similarity score (0-1)"]]
    query_filter: NotRequired[Filter]
@@ -1,228 +0,0 @@
"""Utility functions for Qdrant operations."""

import asyncio
from typing import TypeGuard
from uuid import uuid4

from qdrant_client import AsyncQdrantClient, QdrantClient as SyncQdrantClient
from qdrant_client.models import (
    FieldCondition,
    Filter,
    MatchValue,
    PointStruct,
    QueryResponse,
)

from crewai.rag.qdrant.constants import DEFAULT_VECTOR_PARAMS
from crewai.rag.qdrant.types import (
    AsyncEmbeddingFunction,
    CreateCollectionParams,
    EmbeddingFunction,
    FilterCondition,
    MetadataFilter,
    PreparedSearchParams,
    QdrantClientType,
    QdrantCollectionCreateParams,
    QueryEmbedding,
)
from crewai.rag.types import SearchResult, BaseRecord


def _ensure_list_embedding(embedding: QueryEmbedding) -> list[float]:
    """Convert embedding to list[float] format if needed.

    Args:
        embedding: Embedding vector as list or numpy array.

    Returns:
        Embedding as list[float].
    """
    if not isinstance(embedding, list):
        return embedding.tolist()
    return embedding


def _is_sync_client(client: QdrantClientType) -> TypeGuard[SyncQdrantClient]:
    """Type guard to check if the client is a synchronous QdrantClient.

    Args:
        client: The client to check.

    Returns:
        True if the client is a QdrantClient, False otherwise.
    """
    return isinstance(client, SyncQdrantClient)


def _is_async_client(client: QdrantClientType) -> TypeGuard[AsyncQdrantClient]:
    """Type guard to check if the client is an asynchronous AsyncQdrantClient.

    Args:
        client: The client to check.

    Returns:
        True if the client is an AsyncQdrantClient, False otherwise.
    """
    return isinstance(client, AsyncQdrantClient)


def _is_async_embedding_function(
    func: EmbeddingFunction | AsyncEmbeddingFunction,
) -> TypeGuard[AsyncEmbeddingFunction]:
    """Type guard to check if the embedding function is async.

    Args:
        func: The embedding function to check.

    Returns:
        True if the function is async, False otherwise.
    """
    return asyncio.iscoroutinefunction(func)


def _get_collection_params(
    kwargs: QdrantCollectionCreateParams,
) -> CreateCollectionParams:
    """Extract collection creation parameters from kwargs."""
    params: CreateCollectionParams = {
        "collection_name": kwargs["collection_name"],
        "vectors_config": kwargs.get("vectors_config", DEFAULT_VECTOR_PARAMS),
    }

    if "sparse_vectors_config" in kwargs:
        params["sparse_vectors_config"] = kwargs["sparse_vectors_config"]
    if "shard_number" in kwargs:
        params["shard_number"] = kwargs["shard_number"]
    if "sharding_method" in kwargs:
        params["sharding_method"] = kwargs["sharding_method"]
    if "replication_factor" in kwargs:
        params["replication_factor"] = kwargs["replication_factor"]
    if "write_consistency_factor" in kwargs:
        params["write_consistency_factor"] = kwargs["write_consistency_factor"]
    if "on_disk_payload" in kwargs:
        params["on_disk_payload"] = kwargs["on_disk_payload"]
    if "hnsw_config" in kwargs:
        params["hnsw_config"] = kwargs["hnsw_config"]
    if "optimizers_config" in kwargs:
        params["optimizers_config"] = kwargs["optimizers_config"]
    if "wal_config" in kwargs:
        params["wal_config"] = kwargs["wal_config"]
    if "quantization_config" in kwargs:
        params["quantization_config"] = kwargs["quantization_config"]
    if "init_from" in kwargs:
        params["init_from"] = kwargs["init_from"]
    if "timeout" in kwargs:
        params["timeout"] = kwargs["timeout"]

    return params


def _prepare_search_params(
    collection_name: str,
    query_embedding: QueryEmbedding,
    limit: int,
    score_threshold: float | None,
    metadata_filter: MetadataFilter | None,
) -> PreparedSearchParams:
    """Prepare search parameters for Qdrant query_points.

    Args:
        collection_name: Name of the collection to search.
        query_embedding: Embedding vector for the query.
        limit: Maximum number of results.
        score_threshold: Optional minimum similarity score.
        metadata_filter: Optional metadata filters.

    Returns:
        Dictionary of parameters for query_points method.
    """
    query_vector = _ensure_list_embedding(query_embedding)

    search_kwargs: PreparedSearchParams = {
        "collection_name": collection_name,
        "query": query_vector,
        "limit": limit,
        "with_payload": True,
        "with_vectors": False,
    }

    if score_threshold is not None:
        search_kwargs["score_threshold"] = score_threshold

    if metadata_filter:
        filter_conditions: list[FilterCondition] = []
        for key, value in metadata_filter.items():
            filter_conditions.append(
                FieldCondition(key=key, match=MatchValue(value=value))
            )

        search_kwargs["query_filter"] = Filter(must=filter_conditions)

    return search_kwargs
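A filter-construction sketch grounded in the function above (collection name and embedding are placeholders):

# Sketch: {"lang": "en"} becomes
# Filter(must=[FieldCondition(key="lang", match=MatchValue(value="en"))]).
params = _prepare_search_params(
    collection_name="demo",
    query_embedding=[0.0] * 384,
    limit=5,
    score_threshold=None,
    metadata_filter={"lang": "en"},
)
assert "query_filter" in params and params["with_vectors"] is False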
def _normalize_qdrant_score(score: float) -> float:
    """Normalize Qdrant cosine similarity score to [0, 1] range.

    Converts from Qdrant's [-1, 1] cosine similarity range to [0, 1] range for standardization across clients.

    Args:
        score: Raw cosine similarity score from Qdrant [-1, 1].

    Returns:
        Normalized score in [0, 1] range where 1 is most similar.
    """
    normalized = (score + 1.0) / 2.0
    return max(0.0, min(1.0, normalized))
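Worked examples of the mapping (these values are exact in binary floating point):

assert _normalize_qdrant_score(-1.0) == 0.0   # opposite vectors
assert _normalize_qdrant_score(0.0) == 0.5    # orthogonal vectors
assert _normalize_qdrant_score(0.5) == 0.75
assert _normalize_qdrant_score(1.0) == 1.0    # identical direction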
def _process_search_results(response: QueryResponse) -> list[SearchResult]:
    """Process Qdrant search response into SearchResult format.

    Args:
        response: Response from Qdrant query_points method.

    Returns:
        List of SearchResult dictionaries.
    """
    results: list[SearchResult] = []
    for point in response.points:
        payload = point.payload or {}
        score = _normalize_qdrant_score(score=point.score)
        result: SearchResult = {
            "id": str(point.id),
            "content": payload.get("content", ""),
            "metadata": {k: v for k, v in payload.items() if k != "content"},
            "score": score,
        }
        results.append(result)

    return results


def _create_point_from_document(
    doc: BaseRecord, embedding: QueryEmbedding
) -> PointStruct:
    """Create a PointStruct from a document and its embedding.

    Args:
        doc: Document dictionary containing content, metadata, and optional doc_id.
        embedding: The embedding vector for the document content.

    Returns:
        PointStruct ready to be upserted to Qdrant.
    """
    doc_id = doc.get("doc_id", str(uuid4()))
    vector = _ensure_list_embedding(embedding)

    metadata = doc.get("metadata", {})
    if isinstance(metadata, list):
        metadata = metadata[0] if metadata else {}
    elif not isinstance(metadata, dict):
        metadata = dict(metadata) if metadata else {}

    return PointStruct(
        id=doc_id,
        vector=vector,
        payload={"content": doc["content"], **metadata},
    )
@@ -1,9 +1,9 @@
"""Type definitions for RAG (Retrieval-Augmented Generation) systems."""

from collections.abc import Callable, Mapping
-from typing import TypeAlias, Any
+from typing import TypeAlias, TypedDict, Any

-from typing_extensions import Required, TypedDict
+from typing_extensions import Required


class BaseRecord(TypedDict, total=False):
@@ -1,622 +0,0 @@
from __future__ import annotations

import asyncio
import contextvars
import functools
import hashlib
import inspect
import os
import re
import sys
import threading
import time
import weakref
from pathlib import Path
from collections.abc import Callable, Coroutine
from typing import Any, TypeVar, ParamSpec, Concatenate, TypedDict

import portalocker
from portalocker import constants
from typing_extensions import NotRequired, Self

P = ParamSpec("P")
R = TypeVar("R")
T = TypeVar("T")

_STATE: dict[str, int] = {"pid": os.getpid()}


def _reset_after_fork() -> None:
    """Reset in-process state in the child after a fork.

    Resets all locks and thread-local storage after a process fork
    to prevent lock contamination across processes.
    """
    global _sync_rlocks, _async_locks_by_loop, _tls, _task_depths_var, _STATE
    _sync_rlocks = {}
    _async_locks_by_loop = weakref.WeakKeyDictionary()
    _tls = threading.local()
    # Reset task-local depths for async
    _task_depths_var = contextvars.ContextVar("locked_task_depths", default=None)
    _STATE["pid"] = os.getpid()


def _ensure_same_process() -> None:
    """Ensure we're in the same process, reset if forked.

    Checks if the current PID matches the stored PID and resets
    state if a fork has occurred.
    """
    if _STATE["pid"] != os.getpid():
        _reset_after_fork()


# Automatically reset in a forked child on POSIX
_register_at_fork = getattr(os, "register_at_fork", None)
if _register_at_fork is not None:
    _register_at_fork(after_in_child=_reset_after_fork)


class LockConfig(TypedDict):
    """Configuration for portalocker locks.

    Attributes:
        mode: File open mode.
        timeout: Optional lock timeout.
        check_interval: Optional check interval.
        fail_when_locked: Whether to fail if already locked.
        flags: Portalocker lock flags.
    """

    mode: str
    timeout: NotRequired[float]
    check_interval: NotRequired[float]
    fail_when_locked: bool
    flags: portalocker.LockFlags


def _get_platform_lock_flags() -> portalocker.LockFlags:
    """Get platform-appropriate lock flags.

    Returns:
        LockFlags.EXCLUSIVE for exclusive file locking.
    """
    # Use EXCLUSIVE flag only - let portalocker handle blocking/non-blocking internally
    return constants.LockFlags.EXCLUSIVE


def _get_lock_config() -> LockConfig:
    """Get lock configuration appropriate for the platform.

    Returns:
        LockConfig dict with mode, flags, and fail_when_locked settings.
    """
    config: LockConfig = {
        "mode": "a+",
        "fail_when_locked": False,
        "flags": _get_platform_lock_flags(),
    }
    return config


LOCK_CONFIG: LockConfig = _get_lock_config()
LOCK_STALE_SECONDS = 120


def _default_lock_dir() -> Path:
    """Get or create the default lock directory.

    Returns:
        Path to ~/.crewai/locks directory, created if necessary.
    """
    lock_dir = Path.home() / ".crewai" / "locks"
    lock_dir.mkdir(parents=True, exist_ok=True)
    # Best-effort: restrict perms on POSIX
    try:
        if os.name == "posix":
            lock_dir.chmod(0o700)
    except Exception:
        pass

    # Clean up old lock files
    _cleanup_stale_locks(lock_dir)
    return lock_dir


def _cleanup_stale_locks(lock_dir: Path, max_age_seconds: int = 86400) -> None:
    """Remove lock files older than max_age_seconds.

    Args:
        lock_dir: Directory containing lock files.
        max_age_seconds: Maximum age before considering a lock stale (default 24 hours).
    """
    try:
        current_time = time.time()
        for lock_file in lock_dir.glob("*.lock"):
            try:
                # Check if file is old and not currently locked
                file_age = current_time - lock_file.stat().st_mtime
                if file_age > max_age_seconds:
                    # Try to acquire exclusive lock - if successful, file is not in use
                    try:
                        with portalocker.Lock(
                            str(lock_file),
                            mode="a+",
                            timeout=0.01,  # Very short timeout
                            fail_when_locked=True,
                            flags=constants.LockFlags.EXCLUSIVE,
                        ):
                            pass  # We got the lock, file is not in use
                        # Safe to remove
                        lock_file.unlink(missing_ok=True)
                    except (portalocker.LockException, OSError):
                        # File is locked or can't be accessed, skip it
                        pass
            except (OSError, IOError):
                # Skip files we can't stat or process
                pass
    except Exception:
        # Cleanup is best-effort, don't fail on errors
        pass


def _hash_str(value: str) -> str:
    """Generate a short hash of a string.

    Args:
        value: String to hash.

    Returns:
        First 10 characters of SHA256 hash.
    """
    return hashlib.sha256(value.encode()).hexdigest()[:10]


def _qualname_for(func: Callable[..., Any], owner: object | None = None) -> str:
    """Get qualified name for a function.

    Args:
        func: Function to get qualified name for.
        owner: Optional owner object for the function.

    Returns:
        Fully qualified name including module and class.
    """
    target = inspect.unwrap(func)

    if inspect.ismethod(func) and getattr(func, "__self__", None) is not None:
        owner_obj = func.__self__
        cls = owner_obj if inspect.isclass(owner_obj) else owner_obj.__class__
        return f"{target.__module__}.{cls.__qualname__}.{getattr(target, '__name__', '<?>')}"

    if owner is not None:
        cls = owner if inspect.isclass(owner) else owner.__class__
        return f"{target.__module__}.{cls.__qualname__}.{getattr(target, '__name__', '<?>')}"

    qn = getattr(target, "__qualname__", None)
    if qn is not None:
        return f"{getattr(target, '__module__', target.__class__.__module__)}.{qn}"

    if isinstance(target, functools.partial):
        f = inspect.unwrap(target.func)
        return f"{getattr(f, '__module__', 'builtins')}.{getattr(f, '__qualname__', getattr(f, '__name__', '<?>'))}"

    cls = target.__class__
    return f"{cls.__module__}.{cls.__qualname__}.__call__"


def _get_lock_context(
    instance: Any | None,
    func: Callable[..., Any],
    kwargs: dict[str, Any],
) -> tuple[Path, str | None]:
    """Extract lock context from function call.

    Args:
        instance: Instance the function is called on.
        func: Function being called.
        kwargs: Keyword arguments passed to function.

    Returns:
        Tuple of (lock_file_path, collection_name).
    """
    collection_name = (
        str(kwargs.get("collection_name")) if "collection_name" in kwargs else None
    )
    lock_dir = _default_lock_dir()
    base = _qualname_for(func, owner=instance)
    safe_base = re.sub(r"[^\w.\-]+", "_", base)
    suffix = f"_{_hash_str(collection_name)}" if collection_name else ""
    path = lock_dir / f"{safe_base}{suffix}.lock"
    return path, collection_name
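A sketch of the resulting lock path (note the call has the side effect of creating ~/.crewai/locks; the callable passed here is arbitrary for illustration):

# For mypkg.MyClient.add_documents with collection_name="notes", the lock file is
# ~/.crewai/locks/mypkg.MyClient.add_documents_<first-10-hex-of-sha256("notes")>.lock
path, name = _get_lock_context(None, _hash_str, {"collection_name": "notes"})
assert name == "notes" and path.suffix == ".lock"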
def _write_lock_metadata(lock_file_path: Path) -> None:
|
||||
"""Write metadata to lock file for staleness detection.
|
||||
|
||||
Args:
|
||||
lock_file_path: Path to the lock file.
|
||||
"""
|
||||
try:
|
||||
with open(lock_file_path, "w") as f:
|
||||
f.write(f"{os.getpid()}\n{time.time()}\n")
|
||||
f.flush()
|
||||
os.fsync(f.fileno()) # Ensure data is written to disk
|
||||
|
||||
# Set restrictive permissions on lock file (Unix only)
|
||||
if sys.platform not in ("win32", "cygwin"):
|
||||
try:
|
||||
lock_file_path.chmod(0o600)
|
||||
except Exception:
|
||||
pass
|
||||
except Exception:
|
||||
# Best effort - don't fail if we can't write metadata
|
||||
pass
|
||||
|
def _check_lock_staleness(lock_file_path: Path) -> bool:
    """Check if a lock file is stale.

    Args:
        lock_file_path: Path to the lock file.

    Returns:
        True if lock is stale, False otherwise.
    """
    try:
        if not lock_file_path.exists():
            return False

        with open(lock_file_path) as f:
            lines = f.readlines()
            if len(lines) < 2:
                return True  # unreadable metadata

        pid = int(lines[0].strip())
        ts = float(lines[1].strip())

        # If the process is alive, do NOT treat as stale based on time alone.
        if sys.platform not in ("win32", "cygwin"):
            try:
                os.kill(pid, 0)
                return False  # alive -> not stale
            except (OSError, ProcessLookupError):
                pass  # dead process -> proceed to time check

        # Process dead: time window can be small; consider stale now
        return (time.time() - ts) > 1.0  # essentially "dead means stale"

    except Exception:
        return True
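
The staleness check reads back the two-line metadata written by `_write_lock_metadata`: the owning PID on the first line and a UNIX timestamp on the second. A minimal sketch of that round trip, assuming a throwaway temporary directory (Unix only, matching the platform guard above):

    import os
    import tempfile
    import time
    from pathlib import Path

    lock = Path(tempfile.mkdtemp()) / "demo.lock"
    lock.write_text(f"{os.getpid()}\n{time.time()}\n")  # same two-line format

    pid_line, ts_line = lock.read_text().splitlines()
    os.kill(int(pid_line), 0)  # signal 0 only probes liveness; raises if PID is gone
    print(time.time() - float(ts_line))  # age of the lock in seconds
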
_sync_rlocks: dict[Path, threading.RLock] = {}
_sync_rlocks_guard = threading.Lock()
_tls = threading.local()


def _get_sync_rlock(path: Path) -> threading.RLock:
    """Get or create a reentrant lock for a path.

    Args:
        path: Path to get lock for.

    Returns:
        Threading RLock for the given path.
    """
    with _sync_rlocks_guard:
        lk = _sync_rlocks.get(path)
        if lk is None:
            lk = threading.RLock()
            _sync_rlocks[path] = lk
        return lk

class _SyncDepthManager:
    """Context manager for sync depth tracking.

    Tracks reentrancy depth for synchronous locks to determine
    when to acquire/release file locks.
    """

    def __init__(self, path: Path):
        """Initialize depth manager.

        Args:
            path: Path to track depth for.
        """
        self.path = path
        self.depth = 0

    def __enter__(self) -> int:
        """Enter context and increment depth.

        Returns:
            Current depth after increment.
        """
        depths = getattr(_tls, "depths", None)
        if depths is None:
            depths = {}
            _tls.depths = depths
        self.depth = depths.get(self.path, 0) + 1
        depths[self.path] = self.depth
        return self.depth

    def __exit__(self, *args: Any) -> None:
        """Exit context and decrement depth.

        Args:
            *args: Exception information if any.
        """
        depths = getattr(_tls, "depths", {})
        v = depths.get(self.path, 1) - 1
        if v <= 0:
            depths.pop(self.path, None)
        else:
            depths[self.path] = v

def _safe_to_delete(path: Path) -> bool:
    """Check if a lock file can be safely deleted.

    Args:
        path: Path to the lock file.

    Returns:
        True if file can be deleted safely, False otherwise.
    """
    try:
        with portalocker.Lock(
            str(path),
            mode="a+",
            timeout=0.01,  # very short, non-blocking-ish
            fail_when_locked=True,  # fail if someone holds it
            flags=constants.LockFlags.EXCLUSIVE,
        ):
            return True
    except Exception:
        return False

def with_lock(func: Callable[Concatenate[T, P], R]) -> Callable[Concatenate[T, P], R]:
    """Decorator for file-based cross-process locking.

    Args:
        func: Function to wrap with locking.

    Returns:
        Wrapped function with locking behavior.
    """

    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        _ensure_same_process()

        path, _ = _get_lock_context(self, func, kwargs)
        local_lock = _get_sync_rlock(path)

        prune_after = False
        with local_lock:
            with _SyncDepthManager(path) as depth:
                if depth == 1:
                    # stale handling
                    if _check_lock_staleness(path) and _safe_to_delete(path):
                        try:
                            path.unlink(missing_ok=True)
                        except Exception:
                            pass

                    # acquire file lock
                    lock_config = LockConfig(
                        mode=LOCK_CONFIG["mode"],
                        fail_when_locked=LOCK_CONFIG["fail_when_locked"],
                        flags=LOCK_CONFIG["flags"],
                    )
                    with portalocker.Lock(str(path), **lock_config) as _fh:
                        _write_lock_metadata(path)
                        result = func(self, *args, **kwargs)
                    try:
                        path.unlink(missing_ok=True)
                    except Exception:
                        pass

                    prune_after = True
                else:
                    result = func(self, *args, **kwargs)

        # <-- NOW it's safe to remove the entry
        if prune_after:
            with _sync_rlocks_guard:
                _sync_rlocks.pop(path, None)

        return result

    return wrapper
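
A minimal usage sketch for the decorator above; `SqliteStorage` is a hypothetical class used only for illustration:

    class SqliteStorage:  # hypothetical storage class
        @with_lock
        def save(self, record: dict, collection_name: str = "default") -> None:
            # Cross-process: guarded by a portalocker file lock keyed to
            # SqliteStorage.save + collection_name.
            # Same-thread re-entry into this same method only bumps the depth
            # counter instead of re-acquiring the file lock, so it cannot deadlock.
            ...

    storage = SqliteStorage()
    storage.save({"id": 1}, collection_name="memories")

Note the three layers at work: the per-path `RLock` serializes threads in this process, the depth counter makes re-entry cheap, and the portalocker file lock excludes other processes.
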
# Use weak references to avoid keeping event loops alive
_async_locks_by_loop: weakref.WeakKeyDictionary[
    asyncio.AbstractEventLoop, dict[Path, asyncio.Lock]
] = weakref.WeakKeyDictionary()
_async_locks_guard = threading.Lock()
_task_depths_var: contextvars.ContextVar[dict[Path, int] | None] = (
    contextvars.ContextVar("locked_task_depths", default=None)
)


def _get_async_lock(path: Path) -> asyncio.Lock:
    """Get or create an async lock for the current event loop.

    Args:
        path: Path to get lock for.

    Returns:
        Asyncio Lock for the given path in current event loop.
    """
    loop = asyncio.get_running_loop()

    with _async_locks_guard:
        # Get locks dict for this event loop
        loop_locks = _async_locks_by_loop.get(loop)
        if loop_locks is None:
            loop_locks = {}
            _async_locks_by_loop[loop] = loop_locks

        # Get or create lock for this path
        lk = loop_locks.get(path)
        if lk is None:
            # Create lock in the context of the running loop
            lk = asyncio.Lock()
            loop_locks[path] = lk
        return lk
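
Keying the registry by event loop matters because an `asyncio.Lock` may only be awaited from the loop it is bound to; the `WeakKeyDictionary` additionally lets a finished loop (and its locks) be garbage-collected. A small sketch of why sharing one lock across loops fails, assuming a recent Python (3.10+) where the binding check raises:

    import asyncio

    lock = asyncio.Lock()

    async def use_lock() -> None:
        async with lock:
            pass

    asyncio.run(use_lock())       # first run binds the lock to this loop
    try:
        asyncio.run(use_lock())   # second run() = a new loop -> RuntimeError
    except RuntimeError as e:
        print(e)                  # "... is bound to a different event loop"
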
class _AsyncDepthManager:
    """Context manager for async task-local depth tracking.

    Tracks reentrancy depth for async locks to determine
    when to acquire/release file locks.
    """

    def __init__(self, path: Path):
        """Initialize async depth manager.

        Args:
            path: Path to track depth for.
        """
        self.path = path
        self.depths: dict[Path, int] | None = None
        self.is_reentrant = False

    def __enter__(self) -> Self:
        """Enter context and track async task depth.

        Returns:
            Self for context management.
        """
        d = _task_depths_var.get()
        if d is None:
            d = {}
            _task_depths_var.set(d)
        self.depths = d

        cur_depth = self.depths.get(self.path, 0)
        if cur_depth > 0:
            self.is_reentrant = True
            self.depths[self.path] = cur_depth + 1
        else:
            self.depths[self.path] = 1
        return self

    def __exit__(self, *args: Any) -> None:
        """Exit context and update task depth.

        Args:
            *args: Exception information if any.
        """
        if self.depths is not None:
            new_depth = self.depths.get(self.path, 1) - 1
            if new_depth <= 0:
                self.depths.pop(self.path, None)
            else:
                self.depths[self.path] = new_depth

async def _safe_to_delete_async(path: Path) -> bool:
    """Check if a lock file can be safely deleted (async).

    Args:
        path: Path to the lock file.

    Returns:
        True if file can be deleted safely, False otherwise.
    """

    def _try_lock() -> bool:
        try:
            with portalocker.Lock(
                str(path),
                mode="a+",
                timeout=0.01,  # very short, effectively non-blocking
                fail_when_locked=True,  # fail if another process holds it
                flags=constants.LockFlags.EXCLUSIVE,
            ):
                return True
        except Exception:
            return False

    return await asyncio.to_thread(_try_lock)

def async_with_lock(
    func: Callable[Concatenate[T, P], Coroutine[Any, Any, R]],
) -> Callable[Concatenate[T, P], Coroutine[Any, Any, R]]:
    """Async decorator for file-based cross-process locking.

    Args:
        func: Async function to wrap with locking.

    Returns:
        Wrapped async function with locking behavior.
    """

    @functools.wraps(func)
    async def wrapper(self: T, *args: P.args, **kwargs: P.kwargs) -> R:
        _ensure_same_process()

        path, _ = _get_lock_context(self, func, kwargs)

        with _AsyncDepthManager(path) as depth_mgr:
            if depth_mgr.is_reentrant:
                # Re-entrant within the same task: skip file lock
                return await func(self, *args, **kwargs)

            # Safer stale handling: only unlink if we can lock it first
            if _check_lock_staleness(path) and await _safe_to_delete_async(path):
                try:
                    await asyncio.to_thread(lambda: path.unlink(missing_ok=True))
                except Exception:
                    pass

            # Acquire per-loop async lock to serialize within this loop
            async_lock = _get_async_lock(path)
            await async_lock.acquire()
            try:
                # Acquire cross-process file lock in a thread
                lock_config = LockConfig(
                    mode=LOCK_CONFIG["mode"],
                    fail_when_locked=LOCK_CONFIG["fail_when_locked"],
                    flags=LOCK_CONFIG["flags"],
                )
                file_lock = portalocker.Lock(str(path), **lock_config)

                await asyncio.to_thread(file_lock.acquire)
                try:
                    # Write/refresh metadata while lock is held
                    await asyncio.to_thread(lambda: _write_lock_metadata(path))

                    result = await func(self, *args, **kwargs)
                finally:
                    # Release file lock before unlink to avoid inode race
                    try:
                        await asyncio.to_thread(file_lock.release)
                    finally:
                        # Now it's safe to unlink the path
                        try:
                            await asyncio.to_thread(
                                lambda: path.unlink(missing_ok=True)
                            )
                        except Exception:
                            pass

                return result
            finally:
                async_lock.release()

                with _async_locks_guard:
                    loop = asyncio.get_running_loop()
                    loop_locks = _async_locks_by_loop.get(loop)
                    if loop_locks is not None:
                        loop_locks.pop(path, None)

    return wrapper
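
Usage mirrors the sync decorator; the blocking portalocker calls are pushed onto `asyncio.to_thread` so the event loop never stalls while waiting on the file lock. A sketch with a hypothetical async storage class:

    import asyncio

    class AsyncStorage:  # hypothetical class for illustration
        @async_with_lock
        async def asave(self, record: dict, collection_name: str = "default") -> None:
            ...  # serialized per event loop AND across processes

    async def main() -> None:
        storage = AsyncStorage()
        # Concurrent tasks on one loop are serialized by the per-loop asyncio.Lock;
        # other processes are excluded by the portalocker file lock.
        await asyncio.gather(
            storage.asave({"id": 1}, collection_name="memories"),
            storage.asave({"id": 2}, collection_name="memories"),
        )

    asyncio.run(main())
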
@@ -4,7 +4,6 @@ import json
import logging
import threading
import uuid
import warnings
from concurrent.futures import Future
from copy import copy
from hashlib import md5

@@ -158,13 +157,8 @@ class Task(BaseModel):
        default=None,
        description="Function or string description of a guardrail to validate task output before proceeding to next task",
    )
    max_retries: Optional[int] = Field(
        default=None,
        description="[DEPRECATED] Maximum number of retries when guardrail fails. Use guardrail_max_retries instead. Will be removed in v1.0.0"
    )
    guardrail_max_retries: int = Field(
        default=3,
        description="Maximum number of retries when guardrail fails"
    max_retries: int = Field(
        default=3, description="Maximum number of retries when guardrail fails"
    )
    retry_count: int = Field(default=0, description="Current number of retries")
    start_time: Optional[datetime.datetime] = Field(

@@ -360,18 +354,6 @@ class Task(BaseModel):
        )
        return self

    @model_validator(mode="after")
    def handle_max_retries_deprecation(self):
        if self.max_retries is not None:
            warnings.warn(
                "The 'max_retries' parameter is deprecated and will be removed in CrewAI v1.0.0. "
                "Please use 'guardrail_max_retries' instead.",
                DeprecationWarning,
                stacklevel=2
            )
            self.guardrail_max_retries = self.max_retries
        return self

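This hunk removes the deprecation shim that rewrote the old `max_retries` value onto `guardrail_max_retries` at validation time. While the shim was in place, the observable behavior was roughly the following (a sketch, assuming the pre-revert field definitions above):

    import warnings
    from crewai import Task

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        task = Task(
            description="Produce a JSON report",
            expected_output="A valid JSON object",
            max_retries=5,  # deprecated spelling
        )

    print(task.guardrail_max_retries)  # 5 - carried over by the validator
    print(any(issubclass(w.category, DeprecationWarning) for w in caught))  # True
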
    def execute_sync(
        self,
        agent: Optional[BaseAgent] = None,

@@ -451,7 +433,7 @@

        pydantic_output, json_output = self._export_output(result)
        task_output = TaskOutput(
            name=self.name or self.description,
            name=self.name,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,

@@ -468,9 +450,9 @@
                retry_count=self.retry_count,
            )
            if not guardrail_result.success:
                if self.retry_count >= self.guardrail_max_retries:
                if self.retry_count >= self.max_retries:
                    raise Exception(
                        f"Task failed guardrail validation after {self.guardrail_max_retries} retries. "
                        f"Task failed guardrail validation after {self.max_retries} retries. "
                        f"Last error: {guardrail_result.error}"
                    )

@@ -579,8 +561,8 @@
        should_inject = self.allow_crewai_trigger_context

        if should_inject and self.agent:
            crew = getattr(self.agent, "crew", None)
            if crew and hasattr(crew, "_inputs") and crew._inputs:
            crew = getattr(self.agent, 'crew', None)
            if crew and hasattr(crew, '_inputs') and crew._inputs:
                trigger_payload = crew._inputs.get("crewai_trigger_payload")
                if trigger_payload is not None:
                    description += f"\n\nTrigger Payload: {trigger_payload}"

@@ -798,9 +780,7 @@ Follow these guidelines:
        if self.create_directory and not directory.exists():
            directory.mkdir(parents=True, exist_ok=True)
        elif not self.create_directory and not directory.exists():
            raise RuntimeError(
                f"Directory {directory} does not exist and create_directory is False"
            )
            raise RuntimeError(f"Directory {directory} does not exist and create_directory is False")

        with resolved_path.open("w", encoding="utf-8") as file:
            if isinstance(result, dict):

@@ -16,12 +16,6 @@ if TYPE_CHECKING:
    from crewai.tools.base_tool import BaseTool


class ToolUsageLimitExceeded(Exception):
    """Exception raised when a tool has reached its maximum usage limit."""

    pass


class CrewStructuredTool:
    """A structured tool that can operate on any number of inputs.

@@ -233,25 +227,17 @@ class CrewStructuredTool:
        """
        parsed_args = self._parse_args(input)

        if self.has_reached_max_usage_count():
            raise ToolUsageLimitExceeded(
                f"Tool '{self.name}' has reached its maximum usage limit of {self.max_usage_count}. You should not use the {self.name} tool again."
            )

        self._increment_usage_count()

        try:
            if inspect.iscoroutinefunction(self.func):
                return await self.func(**parsed_args, **kwargs)
            else:
                # Run sync functions in a thread pool
                import asyncio
        if inspect.iscoroutinefunction(self.func):
            return await self.func(**parsed_args, **kwargs)
        else:
            # Run sync functions in a thread pool
            import asyncio

                return await asyncio.get_event_loop().run_in_executor(
                    None, lambda: self.func(**parsed_args, **kwargs)
                )
        except Exception:
            raise
            return await asyncio.get_event_loop().run_in_executor(
                None, lambda: self.func(**parsed_args, **kwargs)
            )

    def _run(self, *args, **kwargs) -> Any:
        """Legacy method for compatibility."""

@@ -266,22 +252,12 @@ class CrewStructuredTool:
        """Main method for tool execution."""
        parsed_args = self._parse_args(input)

        if self.has_reached_max_usage_count():
            raise ToolUsageLimitExceeded(
                f"Tool '{self.name}' has reached its maximum usage limit of {self.max_usage_count}. You should not use the {self.name} tool again."
            )

        self._increment_usage_count()

        if inspect.iscoroutinefunction(self.func):
            result = asyncio.run(self.func(**parsed_args, **kwargs))
            return result

        try:
            result = self.func(**parsed_args, **kwargs)
        except Exception:
            raise

        result = self.func(**parsed_args, **kwargs)

        if asyncio.iscoroutine(result):

@@ -289,13 +265,6 @@ class CrewStructuredTool:

        return result

    def has_reached_max_usage_count(self) -> bool:
        """Check if the tool has reached its maximum usage count."""
        return (
            self.max_usage_count is not None
            and self.current_usage_count >= self.max_usage_count
        )

    def _increment_usage_count(self) -> None:
        """Increment the usage count."""
        self.current_usage_count += 1

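Both the sync and async execution paths funnel through the same pair of helpers: `has_reached_max_usage_count` gates execution and `_increment_usage_count` bumps the counter. A self-contained sketch of that logic (a minimal stand-in, not the CrewAI API):

    class CountedTool:  # minimal stand-in mirroring the logic above
        def __init__(self, max_usage_count: int | None = None):
            self.name = "adder"
            self.max_usage_count = max_usage_count
            self.current_usage_count = 0

        def has_reached_max_usage_count(self) -> bool:
            return (
                self.max_usage_count is not None
                and self.current_usage_count >= self.max_usage_count
            )

        def invoke(self, a: int, b: int) -> int:
            if self.has_reached_max_usage_count():
                raise RuntimeError(
                    f"Tool '{self.name}' has reached its maximum usage limit "
                    f"of {self.max_usage_count}."
                )
            self.current_usage_count += 1
            return a + b

    t = CountedTool(max_usage_count=2)
    t.invoke(1, 2)   # usage 1/2
    t.invoke(3, 4)   # usage 2/2
    # t.invoke(5, 6) would now raise
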
@@ -178,11 +178,9 @@ class ToolUsage:

        if self.agent.fingerprint:
            event_data.update(self.agent.fingerprint)
        if self.task:
            event_data["task_name"] = self.task.name or self.task.description
            event_data["task_id"] = str(self.task.id)
        crewai_event_bus.emit(self, ToolUsageStartedEvent(**event_data))

        crewai_event_bus.emit(self, ToolUsageStartedEvent(**event_data))

        started_at = time.time()
        from_cache = False
        result = None  # type: ignore

@@ -313,15 +311,12 @@
        if self.agent and hasattr(self.agent, "tools_results"):
            self.agent.tools_results.append(data)

        if available_tool and hasattr(available_tool, "current_usage_count"):
        if available_tool and hasattr(available_tool, 'current_usage_count'):
            available_tool.current_usage_count += 1
            if (
                hasattr(available_tool, "max_usage_count")
                and available_tool.max_usage_count is not None
            ):
            if hasattr(available_tool, 'max_usage_count') and available_tool.max_usage_count is not None:
                self._printer.print(
                    content=f"Tool '{available_tool.name}' usage: {available_tool.current_usage_count}/{available_tool.max_usage_count}",
                    color="blue",
                    color="blue"
                )

        return result

@@ -355,20 +350,20 @@
            calling.arguments == last_tool_usage.arguments
        )
        return False

    def _check_usage_limit(self, tool: Any, tool_name: str) -> str | None:
        """Check if tool has reached its usage limit.

        Args:
            tool: The tool to check
            tool_name: The name of the tool (used for error message)

        Returns:
            Error message if limit reached, None otherwise
        """
        if (
            hasattr(tool, "max_usage_count")
            and tool.max_usage_count is not None
            hasattr(tool, 'max_usage_count')
            and tool.max_usage_count is not None
            and tool.current_usage_count >= tool.max_usage_count
        ):
            return f"Tool '{tool_name}' has reached its usage limit of {tool.max_usage_count} times and cannot be used anymore."

@@ -610,9 +605,6 @@
                "output": result,
            }
        )
        if self.task:
            event_data["task_id"] = str(self.task.id)
            event_data["task_name"] = self.task.name or self.task.description
        crewai_event_bus.emit(self, ToolUsageFinishedEvent(**event_data))

    def _prepare_event_data(

@@ -11,9 +11,7 @@ class BaseEvent(BaseModel):
    timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    type: str
    source_fingerprint: Optional[str] = None  # UUID string of the source entity
    source_type: Optional[str] = (
        None  # "agent", "task", "crew", "memory", "entity_memory", "short_term_memory", "long_term_memory", "external_memory"
    )
    source_type: Optional[str] = None  # "agent", "task", "crew", "memory", "entity_memory", "short_term_memory", "long_term_memory", "external_memory"
    fingerprint_metadata: Optional[Dict[str, Any]] = None  # Any relevant metadata

    def to_json(self, exclude: set[str] | None = None):

@@ -27,20 +25,3 @@ class BaseEvent(BaseModel):
            dict: A JSON-serializable dictionary.
        """
        return to_serializable(self, exclude=exclude)

    def _set_task_params(self, data: Dict[str, Any]):
        if "from_task" in data and (task := data["from_task"]):
            self.task_id = task.id
            self.task_name = task.name or task.description
            self.from_task = None

    def _set_agent_params(self, data: Dict[str, Any]):
        task = data.get("from_task", None)
        agent = task.agent if task else data.get("from_agent", None)

        if not agent:
            return

        self.agent_id = agent.id
        self.agent_role = agent.role
        self.from_agent = None

@@ -41,12 +41,6 @@ class TraceBatchManager:
|
||||
"""Single responsibility: Manage batches and event buffering"""
|
||||
|
||||
is_current_batch_ephemeral: bool = False
|
||||
trace_batch_id: Optional[str] = None
|
||||
current_batch: Optional[TraceBatch] = None
|
||||
event_buffer: List[TraceEvent] = []
|
||||
execution_start_times: Dict[str, datetime] = {}
|
||||
batch_owner_type: Optional[str] = None
|
||||
batch_owner_id: Optional[str] = None
|
||||
|
||||
def __init__(self):
|
||||
try:
|
||||
@@ -54,6 +48,11 @@ class TraceBatchManager:
|
||||
except AuthError:
|
||||
self.plus_api = PlusAPI(api_key="")
|
||||
|
||||
self.trace_batch_id: Optional[str] = None # Backend ID
|
||||
self.current_batch: Optional[TraceBatch] = None
|
||||
self.event_buffer: List[TraceEvent] = []
|
||||
self.execution_start_times: Dict[str, datetime] = {}
|
||||
|
||||
def initialize_batch(
|
||||
self,
|
||||
user_context: Dict[str, str],
|
||||
@@ -179,15 +178,13 @@ class TraceBatchManager:
|
||||
if not self.current_batch:
|
||||
return None
|
||||
|
||||
self.current_batch.events = self.event_buffer.copy()
|
||||
if self.event_buffer:
|
||||
self._send_events_to_backend()
|
||||
self._finalize_backend_batch()
|
||||
|
||||
finalized_batch = self.current_batch
|
||||
self.current_batch.events = self.event_buffer.copy()
|
||||
|
||||
self.batch_owner_type = None
|
||||
self.batch_owner_id = None
|
||||
finalized_batch = self.current_batch
|
||||
|
||||
self.current_batch = None
|
||||
self.event_buffer.clear()
|
||||
|
||||
@@ -75,18 +75,10 @@ class TraceCollectionListener(BaseEventListener):
|
||||
Trace collection listener that orchestrates trace collection
|
||||
"""
|
||||
|
||||
complex_events = [
|
||||
"task_started",
|
||||
"task_completed",
|
||||
"llm_call_started",
|
||||
"llm_call_completed",
|
||||
"agent_execution_started",
|
||||
"agent_execution_completed",
|
||||
]
|
||||
complex_events = ["task_started", "llm_call_started", "llm_call_completed"]
|
||||
|
||||
_instance = None
|
||||
_initialized = False
|
||||
_listeners_setup = False
|
||||
|
||||
def __new__(cls, batch_manager=None):
|
||||
if cls._instance is None:
|
||||
@@ -124,15 +116,10 @@ class TraceCollectionListener(BaseEventListener):
|
||||
def setup_listeners(self, crewai_event_bus):
|
||||
"""Setup event listeners - delegates to specific handlers"""
|
||||
|
||||
if self._listeners_setup:
|
||||
return
|
||||
|
||||
self._register_flow_event_handlers(crewai_event_bus)
|
||||
self._register_context_event_handlers(crewai_event_bus)
|
||||
self._register_action_event_handlers(crewai_event_bus)
|
||||
|
||||
self._listeners_setup = True
|
||||
|
||||
def _register_flow_event_handlers(self, event_bus):
|
||||
"""Register handlers for flow events"""
|
||||
|
||||
@@ -161,8 +148,7 @@ class TraceCollectionListener(BaseEventListener):
|
||||
@event_bus.on(FlowFinishedEvent)
|
||||
def on_flow_finished(source, event):
|
||||
self._handle_trace_event("flow_finished", source, event)
|
||||
if self.batch_manager.batch_owner_type == "flow":
|
||||
self.batch_manager.finalize_batch()
|
||||
self.batch_manager.finalize_batch()
|
||||
|
||||
@event_bus.on(FlowPlotEvent)
|
||||
def on_flow_plot(source, event):
|
||||
@@ -180,8 +166,7 @@ class TraceCollectionListener(BaseEventListener):
|
||||
@event_bus.on(CrewKickoffCompletedEvent)
|
||||
def on_crew_completed(source, event):
|
||||
self._handle_trace_event("crew_kickoff_completed", source, event)
|
||||
if self.batch_manager.batch_owner_type == "crew":
|
||||
self.batch_manager.finalize_batch()
|
||||
self.batch_manager.finalize_batch()
|
||||
|
||||
@event_bus.on(CrewKickoffFailedEvent)
|
||||
def on_crew_failed(source, event):
|
||||
@@ -233,7 +218,7 @@ class TraceCollectionListener(BaseEventListener):
|
||||
self._handle_trace_event("llm_guardrail_completed", source, event)
|
||||
|
||||
def _register_action_event_handlers(self, event_bus):
|
||||
"""Register handlers for action events (LLM calls, tool usage)"""
|
||||
"""Register handlers for action events (LLM calls, tool usage, memory)"""
|
||||
|
||||
@event_bus.on(LLMCallStartedEvent)
|
||||
def on_llm_call_started(source, event):
|
||||
@@ -304,9 +289,6 @@ class TraceCollectionListener(BaseEventListener):
|
||||
"crewai_version": get_crewai_version(),
|
||||
}
|
||||
|
||||
self.batch_manager.batch_owner_type = "crew"
|
||||
self.batch_manager.batch_owner_id = getattr(source, "id", str(uuid.uuid4()))
|
||||
|
||||
self._initialize_batch(user_context, execution_metadata)
|
||||
|
||||
def _initialize_flow_batch(self, source: Any, event: Any):
|
||||
@@ -319,9 +301,6 @@ class TraceCollectionListener(BaseEventListener):
|
||||
"execution_type": "flow",
|
||||
}
|
||||
|
||||
self.batch_manager.batch_owner_type = "flow"
|
||||
self.batch_manager.batch_owner_id = getattr(source, "id", str(uuid.uuid4()))
|
||||
|
||||
self._initialize_batch(user_context, execution_metadata)
|
||||
|
||||
def _initialize_batch(
|
||||
@@ -379,44 +358,12 @@ class TraceCollectionListener(BaseEventListener):
|
||||
return {
|
||||
"task_description": event.task.description,
|
||||
"expected_output": event.task.expected_output,
|
||||
"task_name": event.task.name or event.task.description,
|
||||
"task_name": event.task.name,
|
||||
"context": event.context,
|
||||
"agent_role": source.agent.role,
|
||||
"task_id": str(event.task.id),
|
||||
}
|
||||
elif event_type == "task_completed":
|
||||
return {
|
||||
"task_description": event.task.description if event.task else None,
|
||||
"task_name": event.task.name or event.task.description
|
||||
if event.task
|
||||
else None,
|
||||
"task_id": str(event.task.id) if event.task else None,
|
||||
"output_raw": event.output.raw if event.output else None,
|
||||
"output_format": str(event.output.output_format)
|
||||
if event.output
|
||||
else None,
|
||||
"agent_role": event.output.agent if event.output else None,
|
||||
}
|
||||
elif event_type == "agent_execution_started":
|
||||
return {
|
||||
"agent_role": event.agent.role,
|
||||
"agent_goal": event.agent.goal,
|
||||
"agent_backstory": event.agent.backstory,
|
||||
}
|
||||
elif event_type == "agent_execution_completed":
|
||||
return {
|
||||
"agent_role": event.agent.role,
|
||||
"agent_goal": event.agent.goal,
|
||||
"agent_backstory": event.agent.backstory,
|
||||
"agent": source.agent.role,
|
||||
}
|
||||
elif event_type == "llm_call_started":
|
||||
event_data = self._safe_serialize_to_dict(event)
|
||||
event_data["task_name"] = (
|
||||
event.task_name or event.task_description
|
||||
if hasattr(event, "task_name") and event.task_name
|
||||
else None
|
||||
)
|
||||
return event_data
|
||||
return self._safe_serialize_to_dict(event)
|
||||
elif event_type == "llm_call_completed":
|
||||
return self._safe_serialize_to_dict(event)
|
||||
else:
|
||||
|
||||
@@ -13,14 +13,26 @@ class LLMEventBase(BaseEvent):
|
||||
agent_id: Optional[str] = None
|
||||
agent_role: Optional[str] = None
|
||||
|
||||
from_task: Optional[Any] = None
|
||||
from_agent: Optional[Any] = None
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_agent_params(data)
|
||||
self._set_task_params(data)
|
||||
|
||||
def _set_agent_params(self, data: Dict[str, Any]):
|
||||
task = data.get("from_task", None)
|
||||
agent = task.agent if task else data.get("from_agent", None)
|
||||
|
||||
if not agent:
|
||||
return
|
||||
|
||||
self.agent_id = agent.id
|
||||
self.agent_role = agent.role
|
||||
|
||||
def _set_task_params(self, data: Dict[str, Any]):
|
||||
if "from_task" in data and (task := data["from_task"]):
|
||||
self.task_id = task.id
|
||||
self.task_name = task.name
|
||||
|
||||
|
||||
class LLMCallType(Enum):
|
||||
"""Type of LLM call being made"""
|
||||
|
||||
@@ -3,24 +3,7 @@ from typing import Any, Dict, Optional
|
||||
from crewai.utilities.events.base_events import BaseEvent
|
||||
|
||||
|
||||
class MemoryBaseEvent(BaseEvent):
|
||||
"""Base event for memory operations"""
|
||||
|
||||
type: str
|
||||
task_id: Optional[str] = None
|
||||
task_name: Optional[str] = None
|
||||
from_task: Optional[Any] = None
|
||||
from_agent: Optional[Any] = None
|
||||
agent_role: Optional[str] = None
|
||||
agent_id: Optional[str] = None
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_agent_params(data)
|
||||
self._set_task_params(data)
|
||||
|
||||
|
||||
class MemoryQueryStartedEvent(MemoryBaseEvent):
|
||||
class MemoryQueryStartedEvent(BaseEvent):
|
||||
"""Event emitted when a memory query is started"""
|
||||
|
||||
type: str = "memory_query_started"
|
||||
@@ -29,7 +12,7 @@ class MemoryQueryStartedEvent(MemoryBaseEvent):
|
||||
score_threshold: Optional[float] = None
|
||||
|
||||
|
||||
class MemoryQueryCompletedEvent(MemoryBaseEvent):
|
||||
class MemoryQueryCompletedEvent(BaseEvent):
|
||||
"""Event emitted when a memory query is completed successfully"""
|
||||
|
||||
type: str = "memory_query_completed"
|
||||
@@ -40,7 +23,7 @@ class MemoryQueryCompletedEvent(MemoryBaseEvent):
|
||||
query_time_ms: float
|
||||
|
||||
|
||||
class MemoryQueryFailedEvent(MemoryBaseEvent):
|
||||
class MemoryQueryFailedEvent(BaseEvent):
|
||||
"""Event emitted when a memory query fails"""
|
||||
|
||||
type: str = "memory_query_failed"
|
||||
@@ -50,7 +33,7 @@ class MemoryQueryFailedEvent(MemoryBaseEvent):
|
||||
error: str
|
||||
|
||||
|
||||
class MemorySaveStartedEvent(MemoryBaseEvent):
|
||||
class MemorySaveStartedEvent(BaseEvent):
|
||||
"""Event emitted when a memory save operation is started"""
|
||||
|
||||
type: str = "memory_save_started"
|
||||
@@ -59,7 +42,7 @@ class MemorySaveStartedEvent(MemoryBaseEvent):
|
||||
agent_role: Optional[str] = None
|
||||
|
||||
|
||||
class MemorySaveCompletedEvent(MemoryBaseEvent):
|
||||
class MemorySaveCompletedEvent(BaseEvent):
|
||||
"""Event emitted when a memory save operation is completed successfully"""
|
||||
|
||||
type: str = "memory_save_completed"
|
||||
@@ -69,7 +52,7 @@ class MemorySaveCompletedEvent(MemoryBaseEvent):
|
||||
save_time_ms: float
|
||||
|
||||
|
||||
class MemorySaveFailedEvent(MemoryBaseEvent):
|
||||
class MemorySaveFailedEvent(BaseEvent):
|
||||
"""Event emitted when a memory save operation fails"""
|
||||
|
||||
type: str = "memory_save_failed"
|
||||
@@ -79,14 +62,14 @@ class MemorySaveFailedEvent(MemoryBaseEvent):
|
||||
error: str
|
||||
|
||||
|
||||
class MemoryRetrievalStartedEvent(MemoryBaseEvent):
|
||||
class MemoryRetrievalStartedEvent(BaseEvent):
|
||||
"""Event emitted when memory retrieval for a task prompt starts"""
|
||||
|
||||
type: str = "memory_retrieval_started"
|
||||
task_id: Optional[str] = None
|
||||
|
||||
|
||||
class MemoryRetrievalCompletedEvent(MemoryBaseEvent):
|
||||
class MemoryRetrievalCompletedEvent(BaseEvent):
|
||||
"""Event emitted when memory retrieval for a task prompt completes successfully"""
|
||||
|
||||
type: str = "memory_retrieval_completed"
|
||||
|
||||
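The `MemoryBaseEvent` removed here had hoisted the shared task/agent fields and the `_set_*_params` enrichment out of every memory event; with the revert, each event derives from `BaseEvent` directly again. A minimal sketch of the factoring that the base class provided (stand-in names, mirroring the hunk above):

    from typing import Any, Optional
    from pydantic import BaseModel

    class BaseEventSketch(BaseModel):  # stand-in for crewai's BaseEvent
        type: str

    class MemoryBaseEventSketch(BaseEventSketch):
        task_id: Optional[str] = None
        agent_role: Optional[str] = None
        from_task: Optional[Any] = None

        def __init__(self, **data):
            super().__init__(**data)
            if (task := data.get("from_task")) is not None:
                self.task_id = str(task.id)  # shared enrichment, written once
                self.from_task = None

    class MemoryQueryStarted(MemoryBaseEventSketch):
        type: str = "memory_query_started"
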
@@ -1,34 +1,16 @@
|
||||
from crewai.utilities.events.base_events import BaseEvent
|
||||
from typing import Any, Optional
|
||||
|
||||
|
||||
class ReasoningEvent(BaseEvent):
|
||||
"""Base event for reasoning events."""
|
||||
|
||||
type: str
|
||||
attempt: int = 1
|
||||
agent_role: str
|
||||
task_id: str
|
||||
task_name: Optional[str] = None
|
||||
from_task: Optional[Any] = None
|
||||
agent_id: Optional[str] = None
|
||||
from_agent: Optional[Any] = None
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_task_params(data)
|
||||
self._set_agent_params(data)
|
||||
|
||||
|
||||
class AgentReasoningStartedEvent(ReasoningEvent):
|
||||
class AgentReasoningStartedEvent(BaseEvent):
|
||||
"""Event emitted when an agent starts reasoning about a task."""
|
||||
|
||||
type: str = "agent_reasoning_started"
|
||||
agent_role: str
|
||||
task_id: str
|
||||
attempt: int = 1 # The current reasoning/refinement attempt
|
||||
|
||||
|
||||
class AgentReasoningCompletedEvent(ReasoningEvent):
|
||||
class AgentReasoningCompletedEvent(BaseEvent):
|
||||
"""Event emitted when an agent finishes its reasoning process."""
|
||||
|
||||
type: str = "agent_reasoning_completed"
|
||||
@@ -36,12 +18,14 @@ class AgentReasoningCompletedEvent(ReasoningEvent):
|
||||
task_id: str
|
||||
plan: str
|
||||
ready: bool
|
||||
attempt: int = 1
|
||||
|
||||
|
||||
class AgentReasoningFailedEvent(ReasoningEvent):
|
||||
class AgentReasoningFailedEvent(BaseEvent):
|
||||
"""Event emitted when the reasoning process fails."""
|
||||
|
||||
type: str = "agent_reasoning_failed"
|
||||
agent_role: str
|
||||
task_id: str
|
||||
error: str
|
||||
attempt: int = 1
|
||||
@@ -9,24 +9,17 @@ class ToolUsageEvent(BaseEvent):
|
||||
|
||||
agent_key: Optional[str] = None
|
||||
agent_role: Optional[str] = None
|
||||
agent_id: Optional[str] = None
|
||||
tool_name: str
|
||||
tool_args: Dict[str, Any] | str
|
||||
tool_class: Optional[str] = None
|
||||
run_attempts: int | None = None
|
||||
delegations: int | None = None
|
||||
agent: Optional[Any] = None
|
||||
task_name: Optional[str] = None
|
||||
task_id: Optional[str] = None
|
||||
from_task: Optional[Any] = None
|
||||
from_agent: Optional[Any] = None
|
||||
|
||||
model_config = {"arbitrary_types_allowed": True}
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_agent_params(data)
|
||||
self._set_task_params(data)
|
||||
# Set fingerprint data from the agent
|
||||
if self.agent and hasattr(self.agent, "fingerprint") and self.agent.fingerprint:
|
||||
self.source_fingerprint = self.agent.fingerprint.uuid_str
|
||||
|
||||
@@ -1,32 +0,0 @@
|
||||
"""Import utilities for optional dependencies."""
|
||||
|
||||
import importlib
|
||||
from types import ModuleType
|
||||
|
||||
|
||||
class OptionalDependencyError(ImportError):
|
||||
"""Exception raised when an optional dependency is not installed."""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
def require(name: str, *, purpose: str) -> ModuleType:
|
||||
"""Import a module, raising a helpful error if it's not installed.
|
||||
|
||||
Args:
|
||||
name: The module name to import.
|
||||
purpose: Description of what requires this dependency.
|
||||
|
||||
Returns:
|
||||
The imported module.
|
||||
|
||||
Raises:
|
||||
OptionalDependencyError: If the module is not installed.
|
||||
"""
|
||||
try:
|
||||
return importlib.import_module(name)
|
||||
except ImportError as exc:
|
||||
raise OptionalDependencyError(
|
||||
f"{purpose} requires the optional dependency '{name}'.\n"
|
||||
f"Install it with: uv add {name}"
|
||||
) from exc
|
||||
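
`require` is the standard guard for optional extras: call it at the top of a code path instead of importing eagerly, so the error names the missing package and how to install it. A usage sketch (the module name and function are just examples):

    def embed_texts(texts: list[str]) -> list[list[float]]:
        # Fails fast with an install hint if the extra isn't present.
        np = require("numpy", purpose="Vector embedding support")
        return [np.zeros(8).tolist() for _ in texts]
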
@@ -18,20 +18,17 @@ from crewai.utilities.events.reasoning_events import (
|
||||
|
||||
class ReasoningPlan(BaseModel):
|
||||
"""Model representing a reasoning plan for a task."""
|
||||
|
||||
plan: str = Field(description="The detailed reasoning plan for the task.")
|
||||
ready: bool = Field(description="Whether the agent is ready to execute the task.")
|
||||
|
||||
|
||||
class AgentReasoningOutput(BaseModel):
|
||||
"""Model representing the output of the agent reasoning process."""
|
||||
|
||||
plan: ReasoningPlan = Field(description="The reasoning plan for the task.")
|
||||
|
||||
|
||||
class ReasoningFunction(BaseModel):
|
||||
"""Model for function calling with reasoning."""
|
||||
|
||||
plan: str = Field(description="The detailed reasoning plan for the task.")
|
||||
ready: bool = Field(description="Whether the agent is ready to execute the task.")
|
||||
|
||||
@@ -41,7 +38,6 @@ class AgentReasoning:
|
||||
Handles the agent reasoning process, enabling an agent to reflect and create a plan
|
||||
before executing a task.
|
||||
"""
|
||||
|
||||
def __init__(self, task: Task, agent: Agent):
|
||||
if not task or not agent:
|
||||
raise ValueError("Both task and agent must be provided.")
|
||||
@@ -67,7 +63,6 @@ class AgentReasoning:
|
||||
agent_role=self.agent.role,
|
||||
task_id=str(self.task.id),
|
||||
attempt=1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -87,7 +82,6 @@ class AgentReasoning:
|
||||
plan=output.plan.plan,
|
||||
ready=output.plan.ready,
|
||||
attempt=1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -104,7 +98,6 @@ class AgentReasoning:
|
||||
task_id=str(self.task.id),
|
||||
error=str(e),
|
||||
attempt=1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -142,16 +135,14 @@ class AgentReasoning:
|
||||
system_prompt = self.i18n.retrieve("reasoning", "initial_plan").format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory(),
|
||||
backstory=self.__get_agent_backstory()
|
||||
)
|
||||
|
||||
response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": reasoning_prompt},
|
||||
],
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
{"role": "user", "content": reasoning_prompt}
|
||||
]
|
||||
)
|
||||
|
||||
return self.__parse_reasoning_response(str(response))
|
||||
@@ -179,7 +170,6 @@ class AgentReasoning:
|
||||
agent_role=self.agent.role,
|
||||
task_id=str(self.task.id),
|
||||
attempt=attempt + 1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -193,16 +183,14 @@ class AgentReasoning:
|
||||
system_prompt = self.i18n.retrieve("reasoning", "refine_plan").format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory(),
|
||||
backstory=self.__get_agent_backstory()
|
||||
)
|
||||
|
||||
response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": refine_prompt},
|
||||
],
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
{"role": "user", "content": refine_prompt}
|
||||
]
|
||||
)
|
||||
plan, ready = self.__parse_reasoning_response(str(response))
|
||||
|
||||
@@ -239,23 +227,23 @@ class AgentReasoning:
|
||||
"properties": {
|
||||
"plan": {
|
||||
"type": "string",
|
||||
"description": "The detailed reasoning plan for the task.",
|
||||
"description": "The detailed reasoning plan for the task."
|
||||
},
|
||||
"ready": {
|
||||
"type": "boolean",
|
||||
"description": "Whether the agent is ready to execute the task.",
|
||||
},
|
||||
"description": "Whether the agent is ready to execute the task."
|
||||
}
|
||||
},
|
||||
"required": ["plan", "ready"],
|
||||
},
|
||||
},
|
||||
"required": ["plan", "ready"]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
try:
|
||||
system_prompt = self.i18n.retrieve("reasoning", prompt_type).format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory(),
|
||||
backstory=self.__get_agent_backstory()
|
||||
)
|
||||
|
||||
# Prepare a simple callable that just returns the tool arguments as JSON
|
||||
@@ -266,12 +254,10 @@ class AgentReasoning:
|
||||
response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": prompt},
|
||||
{"role": "user", "content": prompt}
|
||||
],
|
||||
tools=[function_schema],
|
||||
available_functions={"create_reasoning_plan": _create_reasoning_plan},
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
)
|
||||
|
||||
self.logger.debug(f"Function calling response: {response[:100]}...")
|
||||
@@ -284,43 +270,30 @@ class AgentReasoning:
|
||||
pass
|
||||
|
||||
response_str = str(response)
|
||||
return (
|
||||
response_str,
|
||||
"READY: I am ready to execute the task." in response_str,
|
||||
)
|
||||
return response_str, "READY: I am ready to execute the task." in response_str
|
||||
|
||||
except Exception as e:
|
||||
self.logger.warning(
|
||||
f"Error during function calling: {str(e)}. Falling back to text parsing."
|
||||
)
|
||||
self.logger.warning(f"Error during function calling: {str(e)}. Falling back to text parsing.")
|
||||
|
||||
try:
|
||||
system_prompt = self.i18n.retrieve("reasoning", prompt_type).format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory(),
|
||||
backstory=self.__get_agent_backstory()
|
||||
)
|
||||
|
||||
fallback_response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": prompt},
|
||||
],
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
{"role": "user", "content": prompt}
|
||||
]
|
||||
)
|
||||
|
||||
fallback_str = str(fallback_response)
|
||||
return (
|
||||
fallback_str,
|
||||
"READY: I am ready to execute the task." in fallback_str,
|
||||
)
|
||||
return fallback_str, "READY: I am ready to execute the task." in fallback_str
|
||||
except Exception as inner_e:
|
||||
self.logger.error(f"Error during fallback text parsing: {str(inner_e)}")
|
||||
return (
|
||||
"Failed to generate a plan due to an error.",
|
||||
True,
|
||||
) # Default to ready to avoid getting stuck
|
||||
return "Failed to generate a plan due to an error.", True # Default to ready to avoid getting stuck
|
||||
|
||||
def __get_agent_backstory(self) -> str:
|
||||
"""
|
||||
@@ -346,7 +319,7 @@ class AgentReasoning:
|
||||
backstory=self.__get_agent_backstory(),
|
||||
description=self.task.description,
|
||||
expected_output=self.task.expected_output,
|
||||
tools=available_tools,
|
||||
tools=available_tools
|
||||
)
|
||||
|
||||
def __format_available_tools(self) -> str:
|
||||
@@ -357,7 +330,7 @@ class AgentReasoning:
|
||||
str: Comma-separated list of tool names.
|
||||
"""
|
||||
try:
|
||||
return ", ".join([tool.name for tool in (self.task.tools or [])])
|
||||
return ', '.join([tool.name for tool in (self.task.tools or [])])
|
||||
except (AttributeError, TypeError):
|
||||
return "No tools available"
|
||||
|
||||
@@ -375,7 +348,7 @@ class AgentReasoning:
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory(),
|
||||
current_plan=current_plan,
|
||||
current_plan=current_plan
|
||||
)
|
||||
|
||||
def __parse_reasoning_response(self, response: str) -> Tuple[str, bool]:
|
||||
|
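When function calling is unavailable or returns unparseable output, the reasoning flow above falls back to scanning the raw completion for the literal `READY: I am ready to execute the task.` marker; the tests below exercise exactly that contract. A compact sketch of the text-parsing fallback:

    def parse_reasoning_response(response: str) -> tuple[str, bool]:
        # The plan text is returned as-is; readiness is keyed off the marker.
        return response, "READY: I am ready to execute the task." in response

    plan, ready = parse_reasoning_response(
        "I'll use the power rule.\n\nREADY: I am ready to execute the task."
    )
    print(ready)  # True
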
||||
@@ -15,37 +15,37 @@ def mock_llm_responses():
|
||||
"ready": "I'll solve this simple math problem.\n\nREADY: I am ready to execute the task.\n\n",
|
||||
"not_ready": "I need to think about derivatives.\n\nNOT READY: I need to refine my plan because I'm not sure about the derivative rules.",
|
||||
"ready_after_refine": "I'll use the power rule for derivatives where d/dx(x^n) = n*x^(n-1).\n\nREADY: I am ready to execute the task.",
|
||||
"execution": "4",
|
||||
"execution": "4"
|
||||
}
|
||||
|
||||
|
||||
def test_agent_with_reasoning(mock_llm_responses):
|
||||
"""Test agent with reasoning."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
backstory="I am a test agent created to verify the reasoning feature works correctly.",
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
verbose=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Simple math task: What's 2+2?",
|
||||
expected_output="The answer should be a number.",
|
||||
agent=agent,
|
||||
agent=agent
|
||||
)
|
||||
|
||||
|
||||
agent.llm.call = lambda messages, *args, **kwargs: (
|
||||
mock_llm_responses["ready"]
|
||||
if any("create a detailed plan" in msg.get("content", "") for msg in messages)
|
||||
else mock_llm_responses["execution"]
|
||||
)
|
||||
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
|
||||
assert result == mock_llm_responses["execution"]
|
||||
assert "Reasoning Plan:" in task.description
|
||||
|
||||
@@ -53,7 +53,7 @@ def test_agent_with_reasoning(mock_llm_responses):
|
||||
def test_agent_with_reasoning_not_ready_initially(mock_llm_responses):
|
||||
"""Test agent with reasoning that requires refinement."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
@@ -61,21 +61,19 @@ def test_agent_with_reasoning_not_ready_initially(mock_llm_responses):
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
max_reasoning_attempts=2,
|
||||
verbose=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Complex math task: What's the derivative of x²?",
|
||||
expected_output="The answer should be a mathematical expression.",
|
||||
agent=agent,
|
||||
agent=agent
|
||||
)
|
||||
|
||||
|
||||
call_count = [0]
|
||||
|
||||
|
||||
def mock_llm_call(messages, *args, **kwargs):
|
||||
if any(
|
||||
"create a detailed plan" in msg.get("content", "") for msg in messages
|
||||
) or any("refine your plan" in msg.get("content", "") for msg in messages):
|
||||
if any("create a detailed plan" in msg.get("content", "") for msg in messages) or any("refine your plan" in msg.get("content", "") for msg in messages):
|
||||
call_count[0] += 1
|
||||
if call_count[0] == 1:
|
||||
return mock_llm_responses["not_ready"]
|
||||
@@ -83,11 +81,11 @@ def test_agent_with_reasoning_not_ready_initially(mock_llm_responses):
|
||||
return mock_llm_responses["ready_after_refine"]
|
||||
else:
|
||||
return "2x"
|
||||
|
||||
|
||||
agent.llm.call = mock_llm_call
|
||||
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
|
||||
assert result == "2x"
|
||||
assert call_count[0] == 2 # Should have made 2 reasoning calls
|
||||
assert "Reasoning Plan:" in task.description
|
||||
@@ -96,7 +94,7 @@ def test_agent_with_reasoning_not_ready_initially(mock_llm_responses):
|
||||
def test_agent_with_reasoning_max_attempts_reached():
|
||||
"""Test agent with reasoning that reaches max attempts without being ready."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
@@ -104,53 +102,52 @@ def test_agent_with_reasoning_max_attempts_reached():
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
max_reasoning_attempts=2,
|
||||
verbose=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Complex math task: Solve the Riemann hypothesis.",
|
||||
expected_output="A proof or disproof of the hypothesis.",
|
||||
agent=agent,
|
||||
agent=agent
|
||||
)
|
||||
|
||||
|
||||
call_count = [0]
|
||||
|
||||
|
||||
def mock_llm_call(messages, *args, **kwargs):
|
||||
if any(
|
||||
"create a detailed plan" in msg.get("content", "") for msg in messages
|
||||
) or any("refine your plan" in msg.get("content", "") for msg in messages):
|
||||
if any("create a detailed plan" in msg.get("content", "") for msg in messages) or any("refine your plan" in msg.get("content", "") for msg in messages):
|
||||
call_count[0] += 1
|
||||
return f"Attempt {call_count[0]}: I need more time to think.\n\nNOT READY: I need to refine my plan further."
|
||||
else:
|
||||
return "This is an unsolved problem in mathematics."
|
||||
|
||||
|
||||
agent.llm.call = mock_llm_call
|
||||
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
|
||||
assert result == "This is an unsolved problem in mathematics."
|
||||
assert (
|
||||
call_count[0] == 2
|
||||
) # Should have made exactly 2 reasoning calls (max_attempts)
|
||||
assert call_count[0] == 2 # Should have made exactly 2 reasoning calls (max_attempts)
|
||||
assert "Reasoning Plan:" in task.description
|
||||
|
||||
|
||||
def test_agent_reasoning_input_validation():
|
||||
"""Test input validation in AgentReasoning."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
backstory="I am a test agent created to verify the reasoning feature works correctly.",
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
reasoning=True
|
||||
)
|
||||
|
||||
|
||||
with pytest.raises(ValueError, match="Both task and agent must be provided"):
|
||||
AgentReasoning(task=None, agent=agent)
|
||||
|
||||
task = Task(description="Simple task", expected_output="Simple output")
|
||||
|
||||
task = Task(
|
||||
description="Simple task",
|
||||
expected_output="Simple output"
|
||||
)
|
||||
with pytest.raises(ValueError, match="Both task and agent must be provided"):
|
||||
AgentReasoning(task=task, agent=None)
|
||||
|
||||
@@ -158,33 +155,33 @@ def test_agent_reasoning_input_validation():
|
||||
def test_agent_reasoning_error_handling():
|
||||
"""Test error handling during the reasoning process."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
backstory="I am a test agent created to verify the reasoning feature works correctly.",
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
reasoning=True
|
||||
)
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Task that will cause an error",
|
||||
expected_output="Output that will never be generated",
|
||||
agent=agent,
|
||||
agent=agent
|
||||
)
|
||||
|
||||
|
||||
call_count = [0]
|
||||
|
||||
|
||||
def mock_llm_call_error(*args, **kwargs):
|
||||
call_count[0] += 1
|
||||
if call_count[0] <= 2: # First calls are for reasoning
|
||||
raise Exception("LLM error during reasoning")
|
||||
return "Fallback execution result" # Return a value for task execution
|
||||
|
||||
|
||||
agent.llm.call = mock_llm_call_error
|
||||
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
|
||||
assert result == "Fallback execution result"
|
||||
assert call_count[0] > 2 # Ensure we called the mock multiple times
|
||||
|
||||
@@ -192,36 +189,37 @@ def test_agent_reasoning_error_handling():
|
||||
def test_agent_with_function_calling():
|
||||
"""Test agent with reasoning using function calling."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
backstory="I am a test agent created to verify the reasoning feature works correctly.",
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
verbose=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Simple math task: What's 2+2?",
|
||||
expected_output="The answer should be a number.",
|
||||
agent=agent,
|
||||
agent=agent
|
||||
)
|
||||
|
||||
|
||||
agent.llm.supports_function_calling = lambda: True
|
||||
|
||||
|
||||
def mock_function_call(messages, *args, **kwargs):
|
||||
if "tools" in kwargs:
|
||||
return json.dumps(
|
||||
{"plan": "I'll solve this simple math problem: 2+2=4.", "ready": True}
|
||||
)
|
||||
return json.dumps({
|
||||
"plan": "I'll solve this simple math problem: 2+2=4.",
|
||||
"ready": True
|
||||
})
|
||||
else:
|
||||
return "4"
|
||||
|
||||
|
||||
agent.llm.call = mock_function_call
|
||||
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
|
||||
assert result == "4"
|
||||
assert "Reasoning Plan:" in task.description
|
||||
assert "I'll solve this simple math problem: 2+2=4." in task.description
|
||||
@@ -230,34 +228,34 @@ def test_agent_with_function_calling():
|
||||
def test_agent_with_function_calling_fallback():
|
||||
"""Test agent with reasoning using function calling that falls back to text parsing."""
|
||||
llm = LLM("gpt-3.5-turbo")
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="To test the reasoning feature",
|
||||
backstory="I am a test agent created to verify the reasoning feature works correctly.",
|
||||
llm=llm,
|
||||
reasoning=True,
|
||||
verbose=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
|
||||
task = Task(
|
||||
description="Simple math task: What's 2+2?",
|
||||
expected_output="The answer should be a number.",
|
||||
agent=agent,
|
||||
agent=agent
|
||||
)
|
||||
|
||||
|
||||
agent.llm.supports_function_calling = lambda: True
|
||||
|
||||
|
||||
def mock_function_call(messages, *args, **kwargs):
|
||||
if "tools" in kwargs:
|
||||
return "Invalid JSON that will trigger fallback. READY: I am ready to execute the task."
|
||||
else:
|
||||
return "4"
|
||||
|
||||
|
||||
agent.llm.call = mock_function_call
|
||||
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
|
||||
assert result == "4"
|
||||
assert "Reasoning Plan:" in task.description
|
||||
assert "Invalid JSON that will trigger fallback" in task.description
|
||||
@@ -23,7 +23,6 @@ from crewai.utilities.events import crewai_event_bus
|
||||
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent
|
||||
from crewai.process import Process
|
||||
|
||||
|
||||
def test_agent_llm_creation_with_env_vars():
|
||||
# Store original environment variables
|
||||
original_api_key = os.environ.get("OPENAI_API_KEY")
|
||||
@@ -236,7 +235,7 @@ def test_logging_tool_usage():
|
||||
)
|
||||
|
||||
assert agent.llm.model == "gpt-4o-mini"
|
||||
assert agent.tools_handler.last_used_tool is None
|
||||
assert agent.tools_handler.last_used_tool == {}
|
||||
task = Task(
|
||||
description="What is 3 times 4?",
|
||||
agent=agent,
|
||||
@@ -594,17 +593,42 @@ def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
|
||||
)
|
||||
|
||||
captured = capsys.readouterr()
|
||||
output = (
|
||||
captured.out.replace("\n", " ")
|
||||
.replace(" ", " ")
|
||||
.strip()
|
||||
.replace("╭", "")
|
||||
.replace("╮", "")
|
||||
.replace("╯", "")
|
||||
.replace("╰", "")
|
||||
.replace("│", "")
|
||||
.replace("─", "")
|
||||
.replace("[", "")
|
||||
.replace("]", "")
|
||||
.replace("bold", "")
|
||||
.replace("blue", "")
|
||||
.replace("yellow", "")
|
||||
.replace("green", "")
|
||||
.replace("red", "")
|
||||
.replace("dim", "")
|
||||
.replace("🤖", "")
|
||||
.replace("🔧", "")
|
||||
.replace("✅", "")
|
||||
.replace("\x1b[93m", "")
|
||||
.replace("\x1b[00m", "")
|
||||
.replace("\\", "")
|
||||
.replace('"', "")
|
||||
.replace("'", "")
|
||||
)
|
||||
|
||||
# More flexible check, look for either the repeated usage message or verification that max iterations was reached
|
||||
output_lower = captured.out.lower()
|
||||
|
||||
has_repeated_usage_message = "tried reusing the same input" in output_lower
|
||||
has_max_iterations = "maximum iterations reached" in output_lower
|
||||
has_final_answer = "final answer" in output_lower or "42" in captured.out
|
||||
# Look for the message in the normalized output, handling the apostrophe difference
|
||||
expected_message = (
|
||||
"I tried reusing the same input, I must stop using this action input"
|
||||
)
|
||||
|
||||
assert (
|
||||
has_repeated_usage_message or (has_max_iterations and has_final_answer)
|
||||
), f"Expected repeated tool usage handling or proper max iteration handling. Output was: {captured.out[:500]}..."
|
||||
expected_message in output
|
||||
), f"Expected message not found in output. Output was: {output}"
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -759,10 +783,10 @@ def test_agent_without_max_rpm_respects_crew_rpm(capsys):
|
||||
|
||||
with patch.object(RPMController, "_wait_for_next_minute") as moveon:
|
||||
moveon.return_value = True
|
||||
result = crew.kickoff()
|
||||
# Verify the crew executed and RPM limit was triggered
|
||||
assert result is not None
|
||||
assert moveon.called
|
||||
crew.kickoff()
|
||||
captured = capsys.readouterr()
|
||||
assert "get_final_answer" in captured.out
|
||||
assert "Max RPM reached, waiting for next minute to start." in captured.out
|
||||
moveon.assert_called_once()
|
||||
|
||||
|
||||
@@ -1189,13 +1213,17 @@ Thought:<|eot_id|>
def test_task_allow_crewai_trigger_context():
    from crewai import Crew

    agent = Agent(role="test role", goal="test goal", backstory="test backstory")
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory"
    )

    task = Task(
        description="Analyze the data",
        expected_output="Analysis report",
        agent=agent,
        allow_crewai_trigger_context=True,
        allow_crewai_trigger_context=True
    )
    crew = Crew(agents=[agent], tasks=[task])
    crew.kickoff({"crewai_trigger_payload": "Important context data"})
@@ -1210,13 +1238,17 @@ def test_task_allow_crewai_trigger_context():
def test_task_without_allow_crewai_trigger_context():
    from crewai import Crew

    agent = Agent(role="test role", goal="test goal", backstory="test backstory")
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory"
    )

    task = Task(
        description="Analyze the data",
        expected_output="Analysis report",
        agent=agent,
        allow_crewai_trigger_context=False,
        allow_crewai_trigger_context=False
    )

    crew = Crew(agents=[agent], tasks=[task])
@@ -1233,18 +1265,23 @@ def test_task_without_allow_crewai_trigger_context():
def test_task_allow_crewai_trigger_context_no_payload():
    from crewai import Crew

    agent = Agent(role="test role", goal="test goal", backstory="test backstory")
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory"
    )

    task = Task(
        description="Analyze the data",
        expected_output="Analysis report",
        agent=agent,
        allow_crewai_trigger_context=True,
        allow_crewai_trigger_context=True
    )

    crew = Crew(agents=[agent], tasks=[task])
    crew.kickoff({"other_input": "other data"})

    prompt = task.prompt()

    assert "Analyze the data" in prompt
@@ -1256,9 +1293,7 @@ def test_do_not_allow_crewai_trigger_context_for_first_task_hierarchical():
    from crewai import Crew

    agent1 = Agent(role="First Agent", goal="First goal", backstory="First backstory")
    agent2 = Agent(
        role="Second Agent", goal="Second goal", backstory="Second backstory"
    )
    agent2 = Agent(role="Second Agent", goal="Second goal", backstory="Second backstory")

    first_task = Task(
        description="Process initial data",
@@ -1266,11 +1301,12 @@ def test_do_not_allow_crewai_trigger_context_for_first_task_hierarchical():
        agent=agent1,
    )

    crew = Crew(
        agents=[agent1, agent2],
        tasks=[first_task],
        process=Process.hierarchical,
        manager_llm="gpt-4o",
        manager_llm="gpt-4o"
    )

    crew.kickoff({"crewai_trigger_payload": "Initial context data"})
@@ -1285,9 +1321,7 @@ def test_first_task_auto_inject_trigger():
    from crewai import Crew

    agent1 = Agent(role="First Agent", goal="First goal", backstory="First backstory")
    agent2 = Agent(
        role="Second Agent", goal="Second goal", backstory="Second backstory"
    )
    agent2 = Agent(role="Second Agent", goal="Second goal", backstory="Second backstory")

    first_task = Task(
        description="Process initial data",
@@ -1301,7 +1335,10 @@ def test_first_task_auto_inject_trigger():
        agent=agent2,
    )

    crew = Crew(agents=[agent1, agent2], tasks=[first_task, second_task])
    crew = Crew(
        agents=[agent1, agent2],
        tasks=[first_task, second_task]
    )
    crew.kickoff({"crewai_trigger_payload": "Initial context data"})

    first_prompt = first_task.prompt()
@@ -1312,31 +1349,31 @@ def test_first_task_auto_inject_trigger():
    assert "Process secondary data" in second_prompt
    assert "Trigger Payload:" not in second_prompt


@pytest.mark.vcr(filter_headers=["authorization"])
def test_ensure_first_task_allow_crewai_trigger_context_is_false_does_not_inject():
    from crewai import Crew

    agent1 = Agent(role="First Agent", goal="First goal", backstory="First backstory")
    agent2 = Agent(
        role="Second Agent", goal="Second goal", backstory="Second backstory"
    )
    agent2 = Agent(role="Second Agent", goal="Second goal", backstory="Second backstory")

    first_task = Task(
        description="Process initial data",
        expected_output="Initial analysis",
        agent=agent1,
        allow_crewai_trigger_context=False,
        allow_crewai_trigger_context=False
    )

    second_task = Task(
        description="Process secondary data",
        expected_output="Secondary analysis",
        agent=agent2,
        allow_crewai_trigger_context=True,
        allow_crewai_trigger_context=True
    )

    crew = Crew(agents=[agent1, agent2], tasks=[first_task, second_task])
    crew = Crew(
        agents=[agent1, agent2],
        tasks=[first_task, second_task]
    )
    crew.kickoff({"crewai_trigger_payload": "Context data"})

    first_prompt = first_task.prompt()
@@ -1346,6 +1383,7 @@ def test_ensure_first_task_allow_crewai_trigger_context_is_false_does_not_inject
    assert "Trigger Payload: Context data" in second_prompt


@patch("crewai.agent.CrewTrainingHandler")
def test_agent_training_handler(crew_training_handler):
    task_prompt = "What is 1 + 1?"
@@ -2309,13 +2347,12 @@ def mock_get_auth_token():

@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
    # Mock embedchain initialization to prevent race conditions in parallel CI execution
    with patch("embedchain.client.Client.setup"):
        from crewai_tools import (
            SerperDevTool,
            FileReadTool,
            EnterpriseActionTool,
        )
        from crewai_tools import (
            SerperDevTool,
            XMLSearchTool,
            CSVSearchTool,
            EnterpriseActionTool,
        )

        mock_get_response = MagicMock()
        mock_get_response.status_code = 200
@@ -2331,9 +2368,10 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
                },
                {
                    "module": "crewai_tools",
                    "name": "FileReadTool",
                    "init_params": {"file_path": "test.txt"},
                    "name": "XMLSearchTool",
                    "init_params": {"summarize": "true"},
                },
{"module": "crewai_tools", "name": "CSVSearchTool", "init_params": {}},
|
||||
# using a tools that returns a list of BaseTools
|
||||
{
|
||||
"module": "crewai_tools",
|
||||
@@ -2358,22 +2396,23 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
        assert agent.role == "test role"
        assert agent.goal == "test goal"
        assert agent.backstory == "test backstory"
        assert len(agent.tools) == 3
        assert len(agent.tools) == 4

        assert isinstance(agent.tools[0], SerperDevTool)
        assert agent.tools[0].n_results == 30
        assert isinstance(agent.tools[1], FileReadTool)
        assert agent.tools[1].file_path == "test.txt"
        assert isinstance(agent.tools[1], XMLSearchTool)
        assert agent.tools[1].summarize

        assert isinstance(agent.tools[2], EnterpriseActionTool)
        assert agent.tools[2].name == "test_name"
        assert isinstance(agent.tools[2], CSVSearchTool)
        assert not agent.tools[2].summarize

        assert isinstance(agent.tools[3], EnterpriseActionTool)
        assert agent.tools[3].name == "test_name"


@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth_token):
    # Mock embedchain initialization to prevent race conditions in parallel CI execution
    with patch("embedchain.client.Client.setup"):
        from crewai_tools import SerperDevTool
        from crewai_tools import SerperDevTool

        mock_get_response = MagicMock()
        mock_get_response.status_code = 200
@@ -508,162 +508,4 @@ interactions:
      - req_80eed127ea0361c637657470cf9b647e
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
      are a seasoned manager with a knack for getting the best out of your team.\nYou
      are also known for your ability to delegate work to the right people, and to
      ask the right questions to get the best out of your team.\nEven though you don''t
      perform tasks by yourself, you have a lot of experience in the field, which
      allows you to properly evaluate the work of your team members.\nYour personal
      goal is: Manage the team to complete the task in the best way possible.\nYou
      ONLY have access to the following tools, and should NEVER make up tools that
      are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
      {''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
      {''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
      {''description'': ''The role/name of the coworker to delegate to'', ''type'':
      ''str''}}\nTool Description: Delegate a specific task to one of the following
      coworkers: Researcher\nThe input to this tool should be the coworker, the task
      you want them to do, and ALL necessary context to execute the task, they know
      nothing about the task, so share absolutely everything you know, don''t reference
      things but instead explain them.\nTool Name: Ask question to coworker\nTool
      Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
      ''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
      ''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
      ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
      of the following coworkers: Researcher\nThe input to this tool should be the
      coworker, the question you have for them, and ALL necessary context to ask the
      question properly, they know nothing about the question, so share absolutely
      everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
      Use the following format in your response:\n\n```\nThought: you should always
      think about what to do\nAction: the action to take, only one name of [Delegate
      work to coworker, Ask question to coworker], just the name, exactly as it''s
      written.\nAction Input: the input to the action, just a simple JSON object,
      enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
      result of the action\n```\n\nOnce all necessary information is gathered, return
      the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
      the final answer to the original input question\n```"}, {"role": "user", "content":
      "\nCurrent Task: Ask the researched to say hi!\n\nThis is the expected criteria
      for your final answer: Howdy!\nyou MUST return the actual complete content as
      the final answer, not a summary.\n\nBegin! This is VERY important to you, use
      the tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
      {"role": "assistant", "content": "Thought: To complete the task, I need to ask
      the researcher to say \"Howdy!\" I will use the \"Ask question to coworker\"
      tool to instruct the researcher accordingly.\n\nAction: Ask question to coworker\nAction
      Input: {\"question\": \"Can you please say hi?\", \"context\": \"The expected
      greeting is: Howdy!\", \"coworker\": \"Researcher\"}\nObservation: Howdy!"}],
      "model": "gpt-4o", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '3318'
      content-type:
      - application/json
      cookie:
      - _cfuvid=g371YzJ.yOdjD9dcZxJ8VI4huWlRJL2j8lbKDhE0qV8-1743463280779-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.93.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.93.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-read-timeout:
      - '600.0'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAAwAAAP//jFJdj9MwEHzPr1j83KL0Ky15A6SKQ7oHdCAh4BS5ziYxdWxjbyjl1P9+
        stNrcnAn3Ysle3bGM7t7lwAwWbIcmGg4idaq6fvN7nrb/LrObm72/su3d8dP9uux83+3/mr5kU0C
        w+x+oqAH1mthWquQpNE9LBxywqA6W6+yWZbNs0UEWlOiCrTa0nRppvN0vpymm2manYmNkQI9y+F7
        AgBwF89gUZf4h+WQTh5eWvSe18jySxEAc0aFF8a9l564JjYZQGE0oY6uPzemqxvK4Qq0OcA+HNQg
        VFJzBVz7AzqAH3ob72/jPYcP5lAeX40lHVad5yGR7pQaAVxrQzx0JIa5PSOni31lauvMzv9DZZXU
        0jeFQ+6NDlY9GcsiekoAbmObukfJmXWmtVSQ2WP8bp3Oez02DGZAZ6szSIa4GrFm68kTekWJxKXy
        o0YzwUWD5UAdpsK7UpoRkIxS/+/mKe0+udT1S+QHQAi0hGVhHZZSPE48lDkMe/tc2aXL0TDz6H5L
        gQVJdGESJVa8U/1KMX/0hG1RSV2js072e1XZYpO+WWWrxULsWHJK7gEAAP//AwAfm3tPYAMAAA==
    headers:
      CF-RAY:
      - 974f08878cb7239e-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Mon, 25 Aug 2025 23:57:43 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=veVLwO9uj9FvIBC_v55vfVlSdU8NHY.wdBapmpmtHn4-1756166263-1.0.1.1-LJ9bNXe6v06jg_mjVZp8vfvLT.9Hf8xUHOf2FempuntDnL5ogQXRuJvIJipz1trGr96_3WUCWlsexhQlkdtveEH6NbFMrm__Y61khA_IyPM;
        path=/; expires=Tue, 26-Aug-25 00:27:43 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=miqmxIsNDZtXvrtcyhpeSI3TjT_zcUas9Enn6gGtIsI-1756166263603-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '472'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '550'
      x-ratelimit-limit-project-requests:
      - '10000'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '30000000'
      x-ratelimit-remaining-project-requests:
      - '9999'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '29999222'
      x-ratelimit-reset-project-requests:
      - 6ms
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 1ms
      x-request-id:
      - req_4f627ee303b9429db117e9699b00656a
    status:
      code: 200
      message: OK
version: 1

@@ -498,204 +498,4 @@ interactions:
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
      are a seasoned manager with a knack for getting the best out of your team.\nYou
      are also known for your ability to delegate work to the right people, and to
      ask the right questions to get the best out of your team.\nEven though you don''t
      perform tasks by yourself, you have a lot of experience in the field, which
      allows you to properly evaluate the work of your team members.\nYour personal
      goal is: Manage the team to complete the task in the best way possible.\nYou
      ONLY have access to the following tools, and should NEVER make up tools that
      are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
      {''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
      {''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
      {''description'': ''The role/name of the coworker to delegate to'', ''type'':
      ''str''}}\nTool Description: Delegate a specific task to one of the following
      coworkers: First Agent\nThe input to this tool should be the coworker, the task
      you want them to do, and ALL necessary context to execute the task, they know
      nothing about the task, so share absolutely everything you know, don''t reference
      things but instead explain them.\nTool Name: Ask question to coworker\nTool
      Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
      ''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
      ''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
      ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
      of the following coworkers: First Agent\nThe input to this tool should be the
      coworker, the question you have for them, and ALL necessary context to ask the
      question properly, they know nothing about the question, so share absolutely
      everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
      Use the following format in your response:\n\n```\nThought: you should always
      think about what to do\nAction: the action to take, only one name of [Delegate
      work to coworker, Ask question to coworker], just the name, exactly as it''s
      written.\nAction Input: the input to the action, just a simple JSON object,
      enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
      result of the action\n```\n\nOnce all necessary information is gathered, return
      the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
      the final answer to the original input question\n```"}, {"role": "user", "content":
      "\nCurrent Task: Process initial data\n\nThis is the expected criteria for your
      final answer: Initial analysis\nyou MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:"}, {"role": "assistant", "content": "Thought: I need to delegate
      the task of processing the initial data to the First Agent to ensure we have
      a thorough and accurate analysis. I will provide them with all the necessary
      details to complete this task effectively.\n\nAction: Delegate work to coworker\nAction
      Input: {\"task\": \"Process initial data\", \"context\": \"The task involves
      analyzing the initial data set we have received. This includes cleaning the
      data, categorizing it for analysis, identifying any trends or patterns, and
      summarizing the findings. The goal is to have a clear understanding of what
      the data indicates and any initial insights that can be drawn from it.\", \"coworker\":
      \"First Agent\"}\nObservation: To process the initial data set effectively,
      we will follow several key steps to ensure we clean, categorize, analyze, and
      summarize our findings comprehensively.\n\n1. **Data Cleaning**: \n   - First,
      we will review the dataset for missing, inconsistent, or erroneous entries.
      \n   - We will handle missing values by either imputing them based on surrounding
      data or removing records where necessary.\n   - Additionally, we will standardize
      categorical variables to ensure consistency (e.g., ensuring all location names
      are spelled the same way).\n\n2. **Data Categorization**: \n   - Next, we will
      categorize the data into relevant segments that will aid our analysis. \n   -
      This involves grouping data points based on common characteristics, such as
      demographics, time periods, or any key performance indicators (KPIs) we are
      focusing on.\n\n3. **Trend and Pattern Identification**: \n   - With the cleaned
      and categorized data, we will perform a detailed analysis to identify trends
      and patterns.\n   - This will involve using statistical tools and visualizations
      to uncover relationships within the data. We will look at time series analysis,
      correlation coefficients, and significant outliers that may require further
      investigation.\n\n4. **Summarizing Findings**: \n   - Finally, we will compile
      a summary of our findings, including both qualitative insights and quantitative
      metrics.\n   - This summary should encapsulate the key trends identified, any
      notable patterns, and implications of these findings.\n   - We will also document
      any limitations of the data and suggest areas for further research if necessary.\n\nBy
      completing these steps, we will not only have a clear understanding of what
      the data indicates but also provide actionable insights that can guide our next
      steps. This comprehensive analysis will serve as a solid foundation for any
      additional exploration or decision-making initiatives related to our project."}],
      "model": "gpt-4o", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '5593'
      content-type:
      - application/json
      cookie:
      - _cfuvid=YeODa6MF5ug3OZUV6ob1dSrBKCM8BXbKkS77TIihYoE-1755550362828-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.93.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.93.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-read-timeout:
      - '600.0'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAAwAAAP//jFXbbtxGDH33VxB6KRCsDV8T129pmgBBH5qiblO0DuzxDCWxHnFUklpn
        HeTfC472ljYF+rLAisPLOTwkPx0ANJSaK2hiHywOYz58dTn2H18/9T+d/KT8+Otr/uXFu6ffLs/k
        +rvfY7Nwj3L/J0bbeB3FMowZjQrP5igYDD3qyYuL56enJ+enF9UwlITZ3brRDs/L4enx6fnh8eXh
        8fO1Y18oojZX8McBAMCn+uslcsKPzRUcLzZfBlQNHTZX20cAjZTsX5qgSmqBrVnsjLGwIdeq7+7u
        7vi6L1PX2xW8BS6P8OA/1iO0xCFDYH1EueE39d/L+u8KrguMUiKq1qfEZBQypGABFA2wbTEaLTGv
        FvCI8Eg5w8QJxcIDguISJWSIQkYxZFDDUcEKIOskCAGcS8EeWWmJEDjklZIewQ3f8MkRPHv2ved6
        lTEwcffs2RW8X6dRC2JwvwJKyEbtiriDwAliEfGquIOBVIm7BRDHwk4Ssi2gCKBIYSyTArIJocIj
        WU9ccTo+RTuC654UiJclL1Eh9qV4OECyHgVoGCcLrgMY0PqSFNoim6SwDHlC9WSCQ1n6Jy+jagdB
        MBZJegRvJvFoQxGsHA6TmoPjFCTRE0IMhl2RSuAyCIX7jHOmiaktMpCtjpyv0x1fa5enWp2z9iNH
        hOg0YlpsQc5M3iMU6QLTEyYgtgIjihEjGyh2A7J5/4NBGyJlsmD7raoZOynT6BA3Ee+DYoLCoH0Q
        TBD7ICEaCqlRVMj0gJBwKJ2EsfcvRcBoQE9OJekCkPvA0YN6vYIZl8FhlBaIlbreFJKER67ozxz9
        tSCnqoJ3wQyF4e2sDopbKt6T9VtSHbIzsYAyCbQlTrqWV0+tuVT35fWAKzBPoTXHOOdQWFLwllnF
        FjJYKXl+siSdQl43Qo/gB1xtxVLzEMc8JZyhK1YpbrjdyTnPMlObEqEu4LGnjKDUcYXGBmWyTCi6
        bQAXw1Rl0k7mw6ZRJiOetXLubP08DUMQenJob4gTcafO0HWPEEstzE0+tbspSBLaOlwBtPqvZm3U
        AhgV/nLATsYSd30KuXCnlNDNbBv7gCYUdT1ptfRYlii140Pweax0L7ZcL2bmi0/ydhdlGmiexLVd
        p65DNWjn4QJBxSCxh0R1N3gvnIaXqUepCiueUnG9oropSGBDbwZYX8R35/6yKuwy3MzRN+rLIK9F
        pot5B47EPE+9T6kLDSO56+EQHjbbat2cUYrfGBj7oOi13d3d7W9zwXbS4MeEp5z3DIG5rKH7Hfmw
        tnzeXo5culHKvf7DtWmJSftbwaCF/UqolbGp1s8HAB/qhZq+ODrNKGUY7dbKA9Z0Jycn53PAZncU
        d+bTi43VioW853f2/HLxlZC3CS1Q1r0z18QQe0w7391NDFOismc42AP+73q+FnsGT9z9n/A7Q4w4
        GqbbUTBR/BLz7pmgd/S/nm2JrgU3irKkiLdGKN6MhG2Y8nzQG12p4XDbEncoo9B81dvx9vL424vn
        F2dn8b45+HzwNwAAAP//AwDhfkSS3ggAAA==
    headers:
      CF-RAY:
      - 97544b3fd9c66894-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 26 Aug 2025 15:17:10 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=AK6x7s00CdjvAhZqoKc.oyU2huXbBJAB_qi1o9cIHkk-1756221430-1.0.1.1-s9cWi1kLPHCBoqRe8BhCYWgaKEG.LQvm0b0NNJkJrpuMMIAUz9sSqijPatK.t2wknR3Qo65.PTew2trnDH5_.mL1l4JewiW1VndksvCWngY;
        path=/; expires=Tue, 26-Aug-25 15:47:10 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=3NkIk1Ua5GwknkJHax_bb1dBUHU9Yobu11sjZ9yu7Rg-1756221430892-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '5563'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '5651'
      x-ratelimit-limit-project-requests:
      - '10000'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '30000000'
      x-ratelimit-remaining-project-requests:
      - '9999'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '29998658'
      x-ratelimit-reset-project-requests:
      - 6ms
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 2ms
      x-request-id:
      - req_8ee5ddbc01374cf487da8763d7dee507
    status:
      code: 200
      message: OK
version: 1

@@ -1158,201 +1158,4 @@ interactions:
      - req_04364e8bad0fc80241f59550b9320360
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
      are a seasoned manager with a knack for getting the best out of your team.\nYou
      are also known for your ability to delegate work to the right people, and to
      ask the right questions to get the best out of your team.\nEven though you don''t
      perform tasks by yourself, you have a lot of experience in the field, which
      allows you to properly evaluate the work of your team members.\nYour personal
      goal is: Manage the team to complete the task in the best way possible.\nYou
      ONLY have access to the following tools, and should NEVER make up tools that
      are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
      {''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
      {''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
      {''description'': ''The role/name of the coworker to delegate to'', ''type'':
      ''str''}}\nTool Description: Delegate a specific task to one of the following
      coworkers: Senior Writer, Researcher, CEO\nThe input to this tool should be
      the coworker, the task you want them to do, and ALL necessary context to execute
      the task, they know nothing about the task, so share absolutely everything you
      know, don''t reference things but instead explain them.\nTool Name: Ask question
      to coworker\nTool Arguments: {''question'': {''description'': ''The question
      to ask'', ''type'': ''str''}, ''context'': {''description'': ''The context for
      the question'', ''type'': ''str''}, ''coworker'': {''description'': ''The role/name
      of the coworker to ask'', ''type'': ''str''}}\nTool Description: Ask a specific
      question to one of the following coworkers: Senior Writer, Researcher, CEO\nThe
      input to this tool should be the coworker, the question you have for them, and
      ALL necessary context to ask the question properly, they know nothing about
      the question, so share absolutely everything you know, don''t reference things
      but instead explain them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
      you should always think about what to do\nAction: the action to take, only one
      name of [Delegate work to coworker, Ask question to coworker], just the name,
      exactly as it''s written.\nAction Input: the input to the action, just a simple
      JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
      the result of the action\n```\n\nOnce all necessary information is gathered,
      return the following format:\n\n```\nThought: I now know the final answer\nFinal
      Answer: the final answer to the original input question\n```"}, {"role": "user",
      "content": "\nCurrent Task: Write one amazing paragraph about AI.\n\nThis is
      the expected criteria for your final answer: A single paragraph with 4 sentences.\nyou
      MUST return the actual complete content as the final answer, not a summary.\n\nThis
      is the context you''re working with:\n\"Artificial Intelligence (AI) stands
      at the forefront of technological advancement, utilizing machine learning, neural
      networks, and natural language processing to replicate human cognitive functions.
      The transformative power of AI spans across industries, driving revolutionary
      changes from predictive diagnostics in healthcare to algorithmic trading in
      finance. By rapidly analyzing large datasets, AI reveals hidden patterns, enhances
      efficiency, and sparks unprecedented innovation. As we embrace this technology,
      it is crucial to prioritize ethical considerations and responsible development
      to fully leverage its potential while addressing associated risks.\"\n\nBegin!
      This is VERY important to you, use the tools available and give your best Final
      Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
      "Thought: To craft an amazing paragraph about AI as described in the context,
      I should delegate this task to the Senior Writer and include all the detailed
      context provided.\n\nAction: Delegate work to coworker\nAction Input: {\"task\":
      \"Write one amazing paragraph about AI\", \"context\": \"Artificial Intelligence
      (AI) is a transformative technology that leverages machine learning, neural
      networks, and natural language processing to perform tasks that traditionally
      require human intelligence, such as visual perception, speech recognition, and
      decision-making. Its potential to revolutionize industries\u2014from healthcare
      with predictive diagnostics to finance with algorithmic trading\u2014underscores
      its profound impact on our daily lives and the global economy. With the ability
      to analyze vast amounts of data quickly and accurately, AI can unveil hidden
      patterns, drive efficiencies, and foster innovation at an unprecedented scale.
      As we move forward, the ethical and responsible development of AI will be paramount
      to harness its benefits while mitigating risks.\", \"coworker\": \"Senior Writer\"}\n\nObservation:
      The Senior Writer has received the task and context.\nObservation: Artificial
      Intelligence (AI) is a transformative technology that leverages machine learning,
      neural networks, and natural language processing to perform tasks that traditionally
      require human intelligence, such as visual perception, speech recognition, and
      decision-making. Its potential to revolutionize industries\u2014from healthcare
      with predictive diagnostics to finance with algorithmic trading\u2014underscores
      its profound impact on our daily lives and the global economy. With the ability
      to analyze vast amounts of data quickly and accurately, AI can unveil hidden
      patterns, drive efficiencies, and foster innovation at an unprecedented scale.
      As we move forward, the ethical and responsible development of AI will be paramount
      to harness its benefits while mitigating risks."}], "model": "gpt-4o", "stop":
      ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '5729'
      content-type:
      - application/json
      cookie:
      - _cfuvid=Q23zZGhbuNaTNh.RPoM_1O4jWXLFM.KtSgSytn2NO.Q-1744492727869-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.93.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.93.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-read-timeout:
      - '600.0'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAAwAAAP//jFTbbhtHDH33VxB6KSDIgmxZduM3wUEaIUARBEWBNjGs8Qx3l9XscDLD
        Vb0K8hH5wn5JMStflDYF+rLAisPLOTwkPx8BjDiObmDkWzTfpXR6fd2/W76/7rf4rbnbafr0xx/d
        L7/89GOxm7fv3o4mJUIe7snbU9SZly4lMpa4N/tMaFSynl9drs7n89XsajB0EimVsCbZ6VJOF7PF
        8nR2fTpbPQa2wp7K6AY+HAEAfB6+BWIM9DC6gdnk6aWjUrCh0c2zE8AoSyovIyyFi2G00eTF6CUa
        xQH17e0tf1yXvmltD28gyh62pbAW1hxxC1jKlhTgo96G+9vh/w2sA/VMPdQtQaxCtcWElq5jgzXH
        AFiAHhLljjpA6DC3HKGhSBmNAmiLBolyJbnD6KmMYC2QSRMZ7dAARzAH1cTe4Cm6xZK5IUUzyQU4
        QhUrmY4PKpaWIOGeesg+Z4pemcqkYBGx3zGeweq4UUkD/BrxhyHUBbLkW8yD+1CWCqAFFKyXviwG
        yk3rsGSNgKZQZ9nRrW4onlWBrkbD7pK88AU0tIzHGwgckkp3P6ZiWnYK7HJpDIXaaKIWyt1DQk/u
        cHxxKaHLZ/DOJBnK5IzBkCYtGUH1QIwFMYA8VfFBV+zgIkYoWn+7ayN6S9kjlCmB1jKEaCyQpxLO
        4eOv63h883bx5uZ99Ou3eRnXv62uH9bvri5/XuOP63ffNCK2Hakjx+tTb4G8lK3yPwR4QJK2mPZU
        iuvg7ZdDEnzWuUIl6jJm18/3F3d06qN3QRBYh95DlsMOqr4MsQQQnVJsbAaZaspg4prXZNfG2Yzg
        M1VKwY40VEnVQ1SVxjj3p9WPr18vrZkGNCe/T7ry/xk4Ww8CNgP2qWL1AHcdGulKQt4l6QnsdSfn
        cwnGqwnc9Wiahni2bmlIKu4+Uc4l2RTulDZrTTGy1YGkDCIR7ClTlJKhYinHr7ReCIZUiQ02KTnG
        7vkuEEMSjlaO1JTROJbWIQ+ZK9JDU2WHoVnnZhTJBmFNw8QcqlxcwWIuoy/H1oT1nzX2+vTy4y+v
        P91zo5a5TbQk2bZlXh8ClJbnxtbS+/UR+flZzFGaLstd+QtpVHPk0jpTLBKLcIslYjRYvxwAfByu
        Rv/NIRilLHXJOZM6pLs4P78YEo5ezvOL+WJ5/mg2MUyvIs+Xq8l3Uroz5TaHZuQ9tRRe87xcKexZ
        5MBw9BX4f9fzvdwH8Byb/5P+xeA9JaPgnro7tOebc5vLdP+X23Oji4NHhfKOPTnLjKUVger5kI4z
        tmzQ0HVJUuqelHpBuaXebL24ml1czC/9cnT05eivAAAA//8DAAjkz2RxBgAA
    headers:
      CF-RAY:
      - 974f089aacfe239e-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Mon, 25 Aug 2025 23:57:47 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=x3Z22TL7MiNrhhoqoBAkoerRf2slLEFNFLerhsr8uWg-1756166267-1.0.1.1-YsxFr.4IpdTvhEkob8OJGHggHZj5mRzRdK0Ta9PllabKVSUEp9sdbUxQpYatb5wW12dk9nglKUcebewHsL7RgYB0Y0BPnFRoyX0r5kn14HI;
        path=/; expires=Tue, 26-Aug-25 00:27:47 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=H4uvKyyXorF4TlJkoLerJYI2PABtQ6M4T.XxqO7FHCQ-1756166267474-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '1314'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '1380'
      x-ratelimit-limit-project-requests:
      - '10000'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '30000000'
      x-ratelimit-remaining-project-requests:
      - '9999'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '29998623'
      x-ratelimit-reset-project-requests:
      - 6ms
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 2ms
      x-request-id:
      - req_47e6588839144756828145be5a5a3b63
    status:
      code: 200
      message: OK
version: 1

@@ -716,249 +716,4 @@ interactions:
      - req_04659f093830c14dbc5342c119533861
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
      are a seasoned manager with a knack for getting the best out of your team.\nYou
      are also known for your ability to delegate work to the right people, and to
      ask the right questions to get the best out of your team.\nEven though you don''t
      perform tasks by yourself, you have a lot of experience in the field, which
      allows you to properly evaluate the work of your team members.\nYour personal
      goal is: Manage the team to complete the task in the best way possible.\nYou
      ONLY have access to the following tools, and should NEVER make up tools that
      are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
      {''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
      {''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
      {''description'': ''The role/name of the coworker to delegate to'', ''type'':
      ''str''}}\nTool Description: Delegate a specific task to one of the following
      coworkers: Researcher, Senior Writer\nThe input to this tool should be the coworker,
      the task you want them to do, and ALL necessary context to execute the task,
      they know nothing about the task, so share absolutely everything you know, don''t
      reference things but instead explain them.\nTool Name: Ask question to coworker\nTool
      Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
      ''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
      ''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
      ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
      of the following coworkers: Researcher, Senior Writer\nThe input to this tool
      should be the coworker, the question you have for them, and ALL necessary context
      to ask the question properly, they know nothing about the question, so share
      absolutely everything you know, don''t reference things but instead explain
      them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
      you should always think about what to do\nAction: the action to take, only one
      name of [Delegate work to coworker, Ask question to coworker], just the name,
      exactly as it''s written.\nAction Input: the input to the action, just a simple
      JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
      the result of the action\n```\n\nOnce all necessary information is gathered,
      return the following format:\n\n```\nThought: I now know the final answer\nFinal
      Answer: the final answer to the original input question\n```"}, {"role": "user",
      "content": "\nCurrent Task: Come up with a list of 5 interesting ideas to explore
      for an article, then write one amazing paragraph highlight for each idea that
      showcases how good an article about this topic could be. Return the list of
      ideas with their paragraph and your notes.\n\nThis is the expected criteria
      for your final answer: 5 bullet points with a paragraph for each idea.\nyou
      MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
      This is VERY important to you, use the tools available and give your best Final
      Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
      "```\nThought: To come up with interesting article ideas and write compelling
      highlights, I should first generate a list of potential topics. The Researcher
      can help with gathering current trends or interesting topics that might be engaging.
      Then, I can delegate writing the paragraph highlights to the Senior Writer.\n\nAction:
      Ask question to coworker\nAction Input: {\"question\": \"What are some current
      trending topics or subjects that are interesting to explore for an article?\",
      \"context\": \"I need to generate a list of 5 interesting ideas to explore for
      an article, each paired with an amazing paragraph highlight. Current trends
      or unusual insights could be a good source of inspiration for these ideas.\",
      \"coworker\": \"Researcher\"}\nObservation: **1. The Rise of Autonomous AI Agents
      in Daily Life** \nAs artificial intelligence technology progresses, the integration
      of autonomous AI agents into everyday life becomes increasingly prominent. These
      agents, capable of making decisions without human intervention, are reshaping
      industries from healthcare to finance. Exploring case studies where autonomous
      AI has successfully decreased operational costs or improved efficiency can reveal
      not only the benefits but also the ethical implications of delegating decision-making
      to machines. This topic offers an exciting opportunity to dive into the AI landscape,
      showcasing current developments such as AI assistants and autonomous vehicles.\n\n**2.
      Ethical Implications of Generative AI in Creative Industries** \nThe surge
      of generative AI tools in creative fields, such as art, music, and writing,
      has sparked a heated debate about authorship and originality. This article could
      investigate how these tools are being used by artists and creators, examining
      both the potential for innovation and the risk of devaluing traditional art
      forms. Highlighting perspectives from creators, legal experts, and ethicists
      could provide a comprehensive overview of the challenges faced, including copyright
      concerns and the emotional impact on human artists. This discussion is vital
      as the creative landscape evolves alongside technological advancements, making
      it ripe for exploration.\n\n**3. AI in Climate Change Mitigation: Current Solutions
      and Future Potential** \nAs the world grapples with climate change, AI technology
      is increasingly being harnessed to develop innovative solutions for sustainability.
      From predictive analytics that optimize energy consumption to machine learning
      algorithms that improve carbon capture methods, AI''s potential in environmental
      science is vast. This topic invites an exploration of existing AI applications
      in climate initiatives, with a focus on groundbreaking research and initiatives
      aimed at reducing humanity''s carbon footprint. Highlighting successful projects
      and technology partnerships can illustrate the positive impact AI can have on
      global climate efforts, inspiring further exploration and investment in this
      area.\n\n**4. The Future of Work: How AI is Reshaping Employment Landscapes** \nThe
      discussions around AI''s impact on the workforce are both urgent and complex,
      as advances in automation and machine learning continue to transform the job
      market. This article could delve into the current trends of AI-driven job displacement
      alongside opportunities for upskilling and the creation of new job roles. By
      examining case studies of companies that integrate AI effectively and the resulting
      workforce adaptations, readers can gain valuable insights into preparing for
      a future where humans and AI collaborate. This exploration highlights the importance
      of policies that promote workforce resilience in the face of change.\n\n**5.
      Decentralized AI: Exploring the Role of Blockchain in AI Development** \nAs
      blockchain technology sweeps through various sectors, its application in AI
      development presents a fascinating topic worth examining. Decentralized AI could
      address issues of data privacy, security, and democratization in AI models by
      allowing users to retain ownership of data while benefiting from AI''s capabilities.
      This article could analyze how decentralized networks are disrupting traditional
      AI development models, featuring innovative projects that harness the synergy
      between blockchain and AI. Highlighting potential pitfalls and the future landscape
      of decentralized AI could stimulate discussion among technologists, entrepreneurs,
      and policymakers alike.\n\nThese topics not only reflect current trends but
      also probe deeper into ethical and practical considerations, making them timely
      and relevant for contemporary audiences."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '7865'
      content-type:
      - application/json
      cookie:
      - _cfuvid=icM8L9.yRk22iqPJZhDQeVHgfWZOnl1YiBFMJmVO8_8-1743465529821-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.93.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.93.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-read-timeout:
      - '600.0'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAAwAAAP//bFfbbhy5EX3fryjoZQFjJNiO7az1pvUlK8ABAsdBAqweXENWd5fFZhEs
        do9G+/NBkZyLtPsiaGZYzbqcc+r0Hz8BXLC/uIYLN2FxcwqXH37Zfnz/6N/e/qBZCf3+/X/e3H+5
        j/8rMYwXG4uQ7Q9y5RB15WROgQpLbD+7TFjInvrq72/fvXr39t379/WHWTwFCxtTuXwjl69fvn5z
        +fKXy5fveuAk7EgvruH3nwAA/qh/LcXo6eHiGl5uDt/MpIojXVwfDwFcZAn2zQWqshaM5WJz+tFJ
        LBRr1t+/f7+L3yZZxqlcwy1E2cGEKwFCYC0gA3AslEkLxxGKJHYKGD2UiThDwoxjxjTBxOMUeJyK
        QpmwwExU7BC4zIUyIwySYaRIGeuzKI442j+YC7tAwJ5Qr+7iXfzMEQPcRN1Rvoa7+OoKXrz4NhF8
        ZSXL6WYpEmWWReHmFm5GikWBI3xEDnv4wgO9eAFwFwHgRusFAzvGUIsJgUeKjqCQm6IEGfeQsoyZ
        VEk3NWk7N1qiEu0+fHIfHu4rArRS3nvcQ+CBYEtOZrKfbPLKcQz12TNHiuUKvk2k1OM34DDhNtSC
        Zry3VnhyrCxRYcdlkqXAtMwY2wxWipbPBjATZNIJk8Vw9IuWzKQwZJlhIgxlcnaoCAwcMTq6gk8P
        KUi2AIdKoGXxFrKbKNOz+iZU0MU5Uh2WEPaWlpVDHiRR6woGcKJFQTLwnLKs5IEGazNFtweHETKt
        hAGiFJAY9rWxW4o0cFHYLgUwqNRvqUzsbDxzCuzqBWpt8RRobHA5tOayt6oIzOgmjqTWV9YGTpBh
        oGwQBXpwXEMlJclliVz2FuZ5pTY8u/rmFgJGrw4TbUAn2bk6OHBLzhQLeFopSJrrzHVxE2BDwYFa
        jQ9nLVxpMkA3LL827H7q9d0+q+8fnQ5rzYMjfDDBsI+3x6kekWwE0CWPFTDjk8giEioB3CF+YApe
        N8eEMZcNzIuy29R0d7n2ZtOGnTDfkwc08BTy4GmLhQC3BkFcyiRZJ041UjKPxk8u+974A4GdLMED
        x9XUYrQHTLKzJiv1DA2UW7LuLoam7b7Gam9hTV6yboAecOZo57ZSpjqnJKZZxmHTEY5R1kbPLkaQ
        We8bZlYMS4VIRs8drJiLBc56Bb8dlMrOJMqayFnPOn9OWRj2AtBDomx0tYsqUGvCrVgDPnuTS5P+
        TBNFtfbLSnll2llCVQQnDIHiaJegI78xiQiLr0CTtM+WDziJjnI8CizQLD1/nhM641FXhN63PgHP
        6hY1egArrFysYu3y2xFxRDnQKsHKxSBxVEv/qIQVpehX04y5qVTnGxfInKh2n6qW1PZXjP/NMN4B
        HHi2yX+YMI4E/+SKBJZ4DR86of4tYWkMsDI/L2XJBP86jPdcuC3/neTgwXZMCtR0EVy/xNVLNpUB
        Jy3nZ/rbADdhjibwvkpA4/QRRiuBHrOyCnXRghxxyw3nnw0ZKZPnChXAiGFfbBfWZSep8MyPBMbK
        cW+D1GVOFZ8noYJAmCuqMYySuUxzj+8KCg7zVqIthtqUmcokxuKb25/1jAEcgeLKWaLNCANoVV2q
        w0ctTwSR48qFuiIe52bApAdua93ULJ0pkwlJbzFHLlw7pJvWfFvkblGD4phliX6bqUEkkxJm05vo
        zwMBeTZ1KZDJL65Ow0DMZf+zHmoeRErKbFvyCUFPi8jIZm6r0+NseWMukapCaV08HIKJp+XfpEO5
        Tq2T6Oa2nqo+x6oIsrV11iumYZDKd46auG7MYclloie47zWa1NkMrGWlaSFh5cSbg2fpAJcB/iv5
        /hp+k10Ve4Wvxx3+aU5B9vVBXw40fSr8J4bbHdb2BoqTLnSy3A+SHTWlNe20hRFL09fqTh82dR80
        jtdZ2+qaT1X9Ca3mFzku1U+UjFFNSOt9P2QLs+2O8perwFM4X7WHjVoyRV/3383tpc+8UqxP8qwp
        YBOeM3E6re/qcSTDkvSeQ6hM6krZZK7hOtKuPs9MsF7Br/uzffLE/MhQe4LRPjQidt9XtyoNQ9sM
        YX/aMqRLqMA867XHVBp1NpAJvRkQQ9iIHMG2UbV5HLX549qQlClhQ5dkI1WDSfNjlR4N5wZWCQG3
        BjzqbT4H4hPjXTEuudhsrbwkwRxZr86cqJRzmGRSDl07GoRsPdXGVG2tUH5rUP5IjmLJGPiRLKvr
        M0dpcV+lOdlfg7h7N1nlHC37jyf/dK7t29O5MyrrjihZttleSWDFzOaolFxbyeYcz6SqX3Fm0ayv
        Wq0awoDqOOLpzcUKL9MJDVfwvKoOXPTe3gaAVZeGE48FIWVe0e03ls+SueybKfA0i7NpPJ4nVV/x
        tHqcEGTXTU82WYZMtl1Adl22jjfsJg5Hk1zBYYunMr2+Ldg64qPjfcq2upMem+nyT+qKVGzkzX95
        1ryk8twgPWtjy34DA2FZcnvPOO7KoxBXVPXNWlGg+7YAt1R2RPF8yg3Mz+3XcaclLgOGcPI+nRAn
        21K93V+OSwvPSzDWnhkhnMVKPDobNU23aEqZIi25W7r/AwAA//+MWMsOwiAQvPcrDGcPvlr1YwxB
        WCopAuFx8NB/N0ur4CvxPMuwC7vJzuQRuV3ZkGWDVgM2fdbFtWb2IFNgKNlN0roCmDF2Hn5U66cZ
        GZ/6XNveeXsOb0eJVEaFC8U9xRrU4iFaRzI6NovFKfsA6UXaExxgF2m0A+Tr1rt2PxGSYj0UuNsd
        ZzTayHQBNutVt/xCSQU2pg6VmUA44xcQ5WxxHlgSylZAUxX+mc837ql4Zfp/6AvAObgIgj4Wsrrm
        EuYBm/RX2POhc8Ik4L7OgUYFHj9DgGRJT7YJCbcQ4UqlMj34vKRgiHT0sDq2Xbvd8jNpxuYOAAD/
        /wMAWvKreUQSAAA=
    headers:
      CF-RAY:
      - 974efac6faf5cf15-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Mon, 25 Aug 2025 23:48:25 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=yJXo3QqFAYYxljNSZBr47v.aphUMHM_ulx8v95ohSS4-1756165705-1.0.1.1-hhSjgTF5yD4vPuetC1LZGfC4KDF6RXEOSk8B4k4kOCq0XyfGNZkzt8kMn.F..aqu9T7UeYwKJprb.ZhscrKkokkqPou4V3aYXTAqIb2GZvE;
        path=/; expires=Tue, 26-Aug-25 00:18:25 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=yh2RjFXBfB0RrOxRjxqkpRIka8SH8Bv0F2PRUGTNLkY-1756165705944-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '6170'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '6192'
      x-ratelimit-limit-project-requests:
      - '10000'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '30000000'
      x-ratelimit-remaining-project-requests:
      - '9999'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '29998089'
      x-ratelimit-reset-project-requests:
      - 6ms
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 3ms
      x-request-id:
      - req_f470345c656b4bb7abb0ab09372eabef
    status:
      code: 200
      message: OK
version: 1

@@ -1384,351 +1384,4 @@ interactions:
      - req_7fdc097dc5895a4d2d953e8b1a2479d1
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
      are a seasoned manager with a knack for getting the best out of your team.\nYou
      are also known for your ability to delegate work to the right people, and to
      ask the right questions to get the best out of your team.\nEven though you don''t
      perform tasks by yourself, you have a lot of experience in the field, which
      allows you to properly evaluate the work of your team members.\nYour personal
      goal is: Manage the team to complete the task in the best way possible.\nYou
      ONLY have access to the following tools, and should NEVER make up tools that
      are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
      {''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
      {''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
      {''description'': ''The role/name of the coworker to delegate to'', ''type'':
      ''str''}}\nTool Description: Delegate a specific task to one of the following
      coworkers: Researcher, Senior Writer\nThe input to this tool should be the coworker,
      the task you want them to do, and ALL necessary context to execute the task,
      they know nothing about the task, so share absolutely everything you know, don''t
      reference things but instead explain them.\nTool Name: Ask question to coworker\nTool
      Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
      ''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
      ''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
      ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
      of the following coworkers: Researcher, Senior Writer\nThe input to this tool
      should be the coworker, the question you have for them, and ALL necessary context
      to ask the question properly, they know nothing about the question, so share
      absolutely everything you know, don''t reference things but instead explain
      them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
      you should always think about what to do\nAction: the action to take, only one
      name of [Delegate work to coworker, Ask question to coworker], just the name,
      exactly as it''s written.\nAction Input: the input to the action, just a simple
      JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
      the result of the action\n```\n\nOnce all necessary information is gathered,
      return the following format:\n\n```\nThought: I now know the final answer\nFinal
      Answer: the final answer to the original input question\n```"}, {"role": "user",
      "content": "\nCurrent Task: Come up with a list of 5 interesting ideas to explore
      for an article, then write one amazing paragraph highlight for each idea that
      showcases how good an article about this topic could be. Return the list of
      ideas with their paragraph and your notes.\n\nThis is the expected criteria
      for your final answer: 5 bullet points with a paragraph for each idea.\nyou
      MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
      This is VERY important to you, use the tools available and give your best Final
      Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
      "Thought: The first step is to brainstorm and research interesting ideas for
      the article. Since the task involves both research and writing, I will need
      to delegate the brainstorming of ideas to the Researcher and the writing of
      the paragraphs to the Senior Writer. \n\nAction: Delegate work to coworker\nAction
      Input: {\"task\": \"Come up with a list of 5 interesting ideas to explore for
      an article.\", \"context\": \"We need a list of 5 interesting ideas that would
      make compelling articles. The topics should be relevant, timely, and engaging
      for a broad audience.\", \"coworker\": \"Researcher\"}\nObservation: 1. **The
      Rise of AI in Healthcare: Transforming Patient Care and Diagnostics**\n \"Imagine
      a world where routine check-ups and complex diagnostics are seamlessly conducted
      by intelligent systems. AI in healthcare is rapidly transforming this vision
      into reality, enhancing patient care through early and accurate disease detection,
      personalized treatment plans, and continuous health monitoring. This revolution
      is not just a futuristic concept; it''s already happening with AI-driven technologies
      enabling doctors to make more informed decisions and offering patients a more
      proactive approach to their health. The ethical and privacy concerns, however,
      pose new challenges that the industry must navigate to ensure responsible advancement.\"\n\n2.
      **AI Agents in the Workplace: Boosting Productivity and Shaping the Future of
      Jobs**\n \"The modern workplace is being redefined by the integration of AI
      agents, intelligent systems designed to handle repetitive tasks, analyze vast
      amounts of data, and assist in complex decision-making processes. These innovations
      are boosting productivity, allowing employees to focus on creativity and strategic
      thinking. The rise of AI in the workspace heralds a future where human and machine
      collaboration leads to unparalleled efficiencies and novel job opportunities.
      However, this shift also brings with it challenges related to job displacement
      and the urgent need for reskilling the workforce to thrive in an AI-augmented
      environment.\"\n\n3. **Ethical AI: Ensuring Fairness, Accountability, and Transparency**\n \"As
      AI technology becomes more pervasive, the quest for ethical AI has become critical.
      Ensuring fairness, accountability, and transparency in AI systems is essential
      to prevent biases and promote trust. This involves creating robust frameworks
      and guidelines that govern AI development and deployment. Real-world case studies
      highlight both the triumphs and the tribulations of tech companies as they navigate
      the complex moral landscape that AI presents. As society grapples with these
      challenges, the push for ethical AI continues to drive innovation and policy-making
      in the tech industry.\"\n\n4. **AI-Powered Startups: Innovators Changing the
      Tech Landscape**\n \"In the bustling world of tech startups, AI-powered companies
      are emerging as the vanguards of innovation, disrupting traditional business
      models and reshaping entire industries. From revolutionizing financial services
      to pioneering advances in healthcare and beyond, these startups are at the forefront
      of technological progress. By leveraging machine learning and AI, they are solving
      complex problems and bringing novel solutions to market at an unprecedented
      pace. Investors are taking notice, pouring capital into these ventures, and
      fueling a new wave of growth and competition in the AI ecosystem.\"\n\n5. **The
      Future of AI in Entertainment: From Movies to Video Games**\n \"The entertainment
      industry is undergoing a seismic shift driven by AI technology. In movies, AI
      is revolutionizing special effects, enabling filmmakers to create stunning,
      hyper-realistic worlds. In video games, AI is enhancing player experiences by
      creating adaptive, intelligent characters and dynamic storylines that respond
|
||||
to player choices. This transformative power of AI promises to push the boundaries
|
||||
of creativity, offering audiences more immersive and interactive entertainment
|
||||
experiences. As the technology continues to evolve, the future of entertainment
|
||||
looks set to be more exciting and unpredictable than ever before.\""}, {"role":
|
||||
"assistant", "content": "Thought: I''ll proceed to delegate the writing of one
|
||||
amazing paragraph highlight for each of the identified ideas to the Senior Writer,
|
||||
ensuring each topic is adequately detailed to showcase its potential.\n\nAction:
|
||||
Delegate work to coworker\nAction Input: {\"task\": \"Write one amazing paragraph
|
||||
highlight for each of these 5 ideas that showcases how good an article about
|
||||
these topics could be.\", \"context\": \"Here are the 5 ideas:\\n1. The Rise
|
||||
of AI in Healthcare: Transforming Patient Care and Diagnostics - Explore how
|
||||
AI is revolutionizing healthcare through enhanced diagnostic tools, personalized
|
||||
treatment plans, and post-operative care. Discuss current achievements, potential
|
||||
future applications, and ethical considerations.\\n2. AI Agents in the Workplace:
|
||||
Boosting Productivity and Shaping the Future of Jobs - Examine the integration
|
||||
of AI agents in various industries. Highlight how these intelligent agents are
|
||||
improving efficiency, automating mundane tasks, and assisting in decision-making
|
||||
processes. Address potential job displacement and the need for upskilling the
|
||||
workforce.\\n3. Ethical AI: Ensuring Fairness, Accountability, and Transparency
|
||||
- Delve into the ethical implications of AI deployment. Analyze the frameworks
|
||||
and guidelines being proposed to ensure AI systems are fair, transparent, and
|
||||
accountable. Include case studies of ethical dilemmas faced by companies using
|
||||
AI.\\n4. AI-Powered Startups: Innovators Changing the Tech Landscape - Provide
|
||||
an overview of how AI-driven startups are disrupting traditional business models
|
||||
and industries. Profile some of the most innovative startups, their business
|
||||
models, and the unique challenges they face. Discuss the role of venture capital
|
||||
in fostering AI innovation.\\n5. The Future of AI in Entertainment: From Movies
|
||||
to Video Games - Investigate how AI is transforming the entertainment industry,
|
||||
from scriptwriting and special effects in movies to creating more immersive
|
||||
and intelligent experiences in video games. Offer predictions about future trends
|
||||
and possible impacts on content creation and consumption.\", \"coworker\": \"Senior
|
||||
Writer\"}\nObservation:\nObservation: 1. **The Rise of AI in Healthcare: Transforming
|
||||
Patient Care and Diagnostics**\n AI is undeniably revolutionizing the healthcare
|
||||
sector, introducing transformative changes in patient care and diagnostics.
|
||||
With the advent of advanced diagnostic tools powered by AI, healthcare professionals
|
||||
can now detect and monitor diseases with unprecedented accuracy and speed. Personalized
|
||||
treatment plans, which were once a distant dream, are becoming a reality as
|
||||
AI algorithms analyze vast amounts of patient data, tailoring treatments that
|
||||
are specific to individual needs. Post-operative care has also seen significant
|
||||
improvements, with AI-driven monitoring systems ensuring that patients receive
|
||||
timely and effective care. Current achievements like IBM Watson''s oncology
|
||||
platform or Google''s DeepMind in eye care are just the beginning, with future
|
||||
applications promising to further enhance surgical robotics, predictive analytics,
|
||||
and remote patient monitoring. However, alongside these advancements, ethical
|
||||
considerations such as patient privacy, data security, and decision-making transparency
|
||||
must be rigorously addressed to fully harness AI\u2019s potential in healthcare.\n\n2.
|
||||
**AI Agents in the Workplace: Boosting Productivity and Shaping the Future of
|
||||
Jobs**\n The integration of AI agents in the workplace marks a pivotal shift
|
||||
in how businesses operate and compete. These intelligent agents are revolutionizing
|
||||
various industries by automating mundane tasks, optimizing workflows, and providing
|
||||
actionable insights from data analysis. For instance, AI chatbots in customer
|
||||
service can handle routine inquiries, freeing human agents to tackle more complex
|
||||
issues. In manufacturing, AI systems predict equipment failures, reducing downtime
|
||||
and maintaining peak productivity. By enhancing operational efficiency, AI agents
|
||||
not only boost productivity but also reshape the future of jobs. While the risk
|
||||
of job displacement is a valid concern, it concurrently heralds opportunities
|
||||
for upskilling and creating new roles that harness human creativity and strategic
|
||||
thinking. Preparing the workforce for these changes is crucial, fostering a
|
||||
symbiotic relationship between AI and human labor to drive innovation and economic
|
||||
growth.\n\n3. **Ethical AI: Ensuring Fairness, Accountability, and Transparency**\n As
|
||||
AI technologies become increasingly pervasive, ensuring their ethical deployment
|
||||
is paramount. Ethical AI demands the formulation and implementation of robust
|
||||
frameworks that enshrine fairness, accountability, and transparency. Such frameworks
|
||||
are essential in mitigating biases embedded within AI systems, which can perpetuate
|
||||
and even exacerbate inequality. Case studies, such as the controversy surrounding
|
||||
facial recognition systems and their racial biases, highlight the pressing need
|
||||
for ethical oversight. Transparency in how AI models make decisions, coupled
|
||||
with accountability measures for their outcomes, establishes trust and integrity
|
||||
in AI applications. Legislative frameworks from the EU\u2019s GDPR to the IEEE\u2019s
|
||||
global initiatives for ethical AI provide the scaffolding for these principles.
|
||||
Companies are now faced with the challenge of integrating these ethical considerations
|
||||
into their development processes, ensuring that AI advancements benefit all
|
||||
of society equitably.\n\n4. **AI-Powered Startups: Innovators Changing the Tech
|
||||
Landscape**\n AI-powered startups are at the forefront of a technological
|
||||
renaissance, disrupting traditional business models and reshaping industries.
|
||||
These startups leverage AI to offer innovative solutions, whether it''s automating
|
||||
financial trading with precision algorithms or enhancing cybersecurity through
|
||||
predictive analytics. High-profile examples like OpenAI and UiPath illustrate
|
||||
the transformative potential of AI-driven innovation. These companies face unique
|
||||
challenges, from securing sufficient funding to navigating the complexities
|
||||
of large-scale implementation. Venture capital plays a critical role in this
|
||||
ecosystem, providing the necessary resources and support to turn bold ideas
|
||||
into impactful realities. The synergy between AI innovation and entrepreneurial
|
||||
spirit is fostering a dynamic landscape where the boundaries of technology and
|
||||
commerce are continually expanding.\n\n5. **The Future of AI in Entertainment:
|
||||
From Movies to Video Games**\n The entertainment industry is undergoing a
|
||||
metamorphic change with the integration of AI technologies, paving the way for
|
||||
more immersive and engaging experiences. In movies, AI streamlines the scriptwriting
|
||||
process, enhances special effects, and even predicts box office success. Machine
|
||||
learning models analyze vast datasets to create compelling narratives and optimize
|
||||
marketing strategies. Video games have also been revolutionized by AI, with
|
||||
non-player characters (NPCs) displaying more lifelike behaviors and adaptive
|
||||
learning algorithms personalizing gaming experiences. Future trends indicate
|
||||
a seamless blend of AI with augmented reality (AR) and virtual reality (VR),
|
||||
creating interactive environments where the line between virtual and real worlds
|
||||
blur. These advancements are not only transforming content creation but also
|
||||
revolutionizing how audiences consume and interact with entertainment, predicting
|
||||
a future where AI is an indispensable tool in the creative arsenal."}], "model":
|
||||
"gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '14961'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=Dmt0PIbI1kDJNEjEUHh50Vv4Y9ZcWX6w2Uku_CIohuk-1751391377646-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA3RXwW7cSA69z1cQvgQI2o0km80EvjmOM9OzE8CYeDdYbC5UiZIYl4oastSdzlzm
|
||||
N/b39ksWrJLanWD3YMOWqoqvyMf3qD9+ALjg9uIKLsKAOYxTvLx53bz/2wva/WP/9p+v8PrHt7n/
|
||||
5dme2y9B728uNr5Dms8U8rprG2ScImWWVF8HJczkpz7/8a+vnr969eLV6/JilJaib+unfPlSLl88
|
||||
e/Hy8tnry2evlo2DcCC7uIJ//QAA8Ef57RBTS18uruDZZn0ykhn2dHF1WgRwoRL9yQWasWVM+WLz
|
||||
+DJIypQK6vtB5n7IV7CDJAd48F95IOg4YQRMdiDdwg4G3BP0mAdSassKpd9n9n+4JTTA1EJQ7DK1
|
||||
EESVbJLUcuphQsVecRoMssBIlMt++jJRyOi5MpCuPMtoD9tP6VN6V8Jfl/BXn9LzLTx9ej8Q/MZG
|
||||
vvh6B5zgZ8KYh4BKV3CvmKwTHT3kHWamlOEGlQqyt4x9Essc7OnTTwmgnGAwp5YSYxOPoLSXODsc
|
||||
/upnOJ7hFACMQhbdAKes0s6hLFljYuY9QRgw9WSObFoAhBVA+whgCx85D+V8bPe+Sjr/C1Og84WQ
|
||||
RaLBJIeS8+YI17vNOaRJpSMzloTRIGAqJWwpU8gl6CiJsyi0bIRGBgcPPKdJKVBLyWuFIcyK4Vg2
|
||||
2ETUbuGO1PxQ/uq1dgqPDnOKmGwDh4HDAA4KJAUC9POdYtAq4bgBx9ZQkFIKBCWMnI+A5knH2Ity
|
||||
HkanDMbjV4I9WgYcZU65MGFNXosZN5CRo2hN94LEIA+YSxybKHBXkgWcWt5zO2OERNTaFu7E8qVM
|
||||
pEuBfMfgZI0mYEQJjPvk+x0+j5PKnkqETc3V9e6yVd5TWnPpOOxomUYDSjZXYI5mQW3gyfVomUeK
|
||||
Na/UdRROELZwM6v6DTEMTEtEiPxAsHvzHj5iNklPzNMrUfqjZz47z0AUfhLpIz0xeEs0vefUOt/o
|
||||
uNzOfz7PVlusoZ5T4tQvt+nmPPuaaYocls6bVEa2cguBblbvcKA0OBvBZu05YASVRpy7G5iUWq53
|
||||
KfWrT/2SSqNkOlXvMWFb+FkOtCfdAEZJvXFLjs9opf2ScspDiRYk+RpdINocBmfPevKkvMdw3BSC
|
||||
eGPOyvlYQbQU2DvicsSHU4tOqJTCEUZPTEOg3IvKbF6dtlUyc5779WM8woCayJys//nz395/rpWM
|
||||
0fP82H1Fpl64Ll3v4LovFeRU0v5R9GGKGOgK3oi3sitSEY3M+9IJqYUPA06rzLyrdZEOfpFmFSiX
|
||||
O06Z+pqGRfXwm0iHNRKMqA8GCBPvJWMEG7jLBbEcoJmN/UpkUJuhapKbFWXaeiirwWLkvhCzhnE2
|
||||
fa+Le1SW2SG0s2VlMpcmnLO4DKYexjm1mKqY2wZkyjzWrQ63i3JYCOPdxsUiMPj52ERHYdwP2aBT
|
||||
GWuFC8+MbQvvRH1Bds5sPB1uuo3UhITZsoykYKR7DlQUccDURgKVOXPy0920mGwDnRJ57GEeMa0X
|
||||
zgIZw0MkGEUJqp1/ATabybawSzBimjsMuXR+wbDKwdIZ4L44FcHskOOsHkxpsYxWDsmFoeozcsrI
|
||||
qbgk4YNn5MSSLbw5Lo3o7xcVc1l2OeHATunNGSeSZJAUj9A46b45C5o5V9FTsgEnqh5/ot1nadyW
|
||||
Bo71jbI9LM9d3AvFyo3YObbHyM6eFEjTBjiXv6ukef+QYmydapNonhNn50gnCvNkDxxjqXgZF6gy
|
||||
JtEBfGBZdH1twFqZuurUN5adv70r/sDpocjLndKEunZTYZlooBKz6szJmg2CzoExbqATy6TVo+w4
|
||||
NuwKB0qxys7AEzSUD+4SnuTULoAiNn6uQHEG4JRkX1u0SH2QJCMH6FUOeSgy8ReXidtF3K53V3C7
|
||||
Osc75HLXDVyH4AaIDceTmN2fidc6thQTzRSG5NZQus+t1nF4plzJ4xEm0j0a72lz7lLEetLYlqYo
|
||||
x7WoPqMVA97CI05oacTUWiWL6DjHx3uyd4ZvP6mTSuP62imO5DVYqknJBvXe6053xf9x13Oh3sIH
|
||||
l/yzk1yIXMBOQjxy5r6yp+Ey29DYUNtSW6yO01lnriOL68FEOlGeVwkkt3b6goG08Wec6Pe5jCtb
|
||||
uEEjsDy3RS5WE/Jc+Pzsk4La0S1SZa6TbofOLB8ApHfaSzqJQ7liKYDWRRX1Bgbuh+iKV06e3Ixq
|
||||
S1Bb+LvWq4TzddtvaLFK/PUOyieFwYgPdLJB20CQeYpLWr7LPIyE5gK1dgoryJydTu7GlrGJbIP3
|
||||
pXppS92LJfnmmuPzYWILv1LPFuu0dVa+ouV+v9u/F0/96e3db0VrB4Ld7e1tedhHaUpxOXM5wb5J
|
||||
wPVuMYwqURaw6yTWvJ/6fFJOgafoYn0j44TJO8TZ46Nxh2FNRCnjgDFS6osEnqy2dorR/5tGOFXg
|
||||
rNDSnqJUrZ9UQrHYzXdzoefobMqBhhJ17HIcPa5JYMrH4hqe72NRjJd1sLi8W4b/Dxk1z5Ndwa7K
|
||||
jajBjYvaqnr3FAb41ds14ESnj5zL9fPBlhNKMjCvPU2dyvIFcqYqZeSjhGxWrbZl03nKy0TV8mJE
|
||||
62Cxcq+OgbaMNo8TwjpinEBEHwexpyJmAtJ1pCcl3RPYMnKU3qUyk3J+Yudjhn+hptJKBZGPF15Y
|
||||
/7Yp1D//0hA9c9JwbEjXqRHy8F8AAAD//4xYS28bNxC++1cQOsWAbAR1HcS9BUXbXNIGAepbIHBX
|
||||
syuiFMkMSdU6+L8X35DcpWIXyJn74JAz34vhgF/Vtbfqo5kPN/BZYEZ60gC9qtT/CuQqL/xtPut0
|
||||
UMbaXOipmNlLd7jqSFFyzVes9NEOaVz6Fv2qsjPfctesIl38sepe2JFc5UBSUwWi5JXTpwaQBbJE
|
||||
yRQu9pOymme6iaOG5LqA8lv1SE6EwaiDgZYMVp/B+yObVN2ACDUwcAThFZDbdoIOv3SEgdAMax19
|
||||
5pFKg8QswkDmP7NTg7ctRJDZMsegxzRlW51j6x8Vz454Pves/D35usQUmBxlxknHYNgIwfVkvz87
|
||||
DYa2bVrQY1wubQCWa66ntEzEuQnmI0FYaC4kYFzWsAz0FLScvAzvfUsrVl1f8orfXCKG4MNh/6J+
|
||||
xzV+8if8LHn1aPbk1R/6SL0FoP6dNlLnll3w7EtNR0r66DkczFjFzopzL11Erx+2KujTop30WdBU
|
||||
1K9BtbFMBM521oI2CG4Y6rMpYqmgSGFY9KMFJlSQZhPSv+gbSNwCkdtmL2Nx70XQ0piqLRBOruMY
|
||||
1eCfAA9G3OiI92/VJ/hmR8qSZhHPC/x0gQKsQ6Si6UsMVwxP0Z9OM1eSwT+rSSk+imS3TWiiynI1
|
||||
M66mJGGipQc0YW+P1pRGzt55d4PZIcaVsB4TcVRv/vz8a7yusvpcds+krJlIcGWggz4Z4Ds2pvc6
|
||||
CHwstXawFpacBguzPr64ntqBiQk6DgHJKNJHRdwTkHuwhPqlLYpEyPOxREMtuHnz4cu1bOZkOGVt
|
||||
14XHL9fbVcajz1AltkvuZNi7QnnreKE1lvltnyusoS2UO4zDYDM3MLygzsLj1eWkPvCriWbdi3e9
|
||||
4bm0rxBLGppOGhDcnqsba9svx3Axd2vsIcNWjVMpqwSJ2snpxkAuio9FeNdserUwwI1ITtvbPoxl
|
||||
mnLUyIJdtrZb0M75mo8iBv5aV56X4Nf6ObAf4nevbibjTDzs4Aa8Q8gbkw8bWX2+UuqrBMz5IjPe
|
||||
IAYKaZf8PyS/++nd+4fywc2aaa/LD29/rqsJccO6cHf/cLd95ZO7PSHIi11KvRn1eKD9+u4aaeOC
|
||||
fLdw1RX+cj+vfbsUb9z8I59fF8aRQqL9rl13X/P6GBNC//97bDlo2fCmphG7ZIhxGXuadLYlj98U
|
||||
5txNxs3EEK8Syk9h9/7tw/27+7u7cdhcPV/9BwAA//8DAN7ucS6dGAAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974f08a7bda1239e-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:58:18 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=ivMv06nBtEBQA0RDYIILchv9UWNGtVY5W5EPjY_N5kM-1756166298-1.0.1.1-2Qisdm01V_T5uiCTx3ArSE5G4lPmSJXAC_FziI_DwOsWC_nN5OI4nlAAcT5w8A.DGB6SVkyNXoYzPqHQgSR2pUc8YECmxkJF6LXkXDhzC0A;
|
||||
path=/; expires=Tue, 26-Aug-25 00:28:18 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=xP6HSAlowr0MDcDirVgpB.BcvTTqAlUD_fl6h1bu7io-1756166298177-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '29910'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '29994'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29996336'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 7ms
|
||||
x-request-id:
|
||||
- req_ae1d0ce3d5584bbba38ba76c69a0e541
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
@@ -1524,362 +1524,4 @@ interactions:
- req_55957b9cd106057878eea339ef651998
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
are a seasoned manager with a knack for getting the best out of your team.\nYou
are also known for your ability to delegate work to the right people, and to
ask the right questions to get the best out of your team.\nEven though you don''t
perform tasks by yourself, you have a lot of experience in the field, which
allows you to properly evaluate the work of your team members.\nYour personal
goal is: Manage the team to complete the task in the best way possible.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Researcher, Senior Writer\nThe input to this tool should be the coworker,
the task you want them to do, and ALL necessary context to execute the task,
they know nothing about the task, so share absolutely everything you know, don''t
reference things but instead explain them.\nTool Name: Ask question to coworker\nTool
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
of the following coworkers: Researcher, Senior Writer\nThe input to this tool
should be the coworker, the question you have for them, and ALL necessary context
to ask the question properly, they know nothing about the question, so share
absolutely everything you know, don''t reference things but instead explain
them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [Delegate work to coworker, Ask question to coworker], just the name,
exactly as it''s written.\nAction Input: the input to the action, just a simple
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Come up with a list of 5 interesting ideas to explore
for an article, then write one amazing paragraph highlight for each idea that
showcases how good an article about this topic could be. Return the list of
ideas with their paragraph and your notes.\n\nThis is the expected criteria
for your final answer: 5 bullet points with a paragraph for each idea.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
"Thought: To come up with a list of 5 interesting article ideas and write a
compelling paragraph for each one, I should first generate the ideas. For this,
I will delegate the task to the Researcher and then use the Senior Writer to
create the paragraphs.\n\nAction: Delegate work to coworker\nAction Input: {\"task\":
\"Generate a list of 5 interesting ideas to explore for an article.\", \"context\":
\"We need to come up with 5 interesting and unique article ideas that will captivate
readers.\", \"coworker\": \"Researcher\"}\nObservation: 1. **The Rise of AI
in Healthcare: Revolutionizing Diagnosis and Treatment**\n - Artificial intelligence
is transforming the healthcare industry in unprecedented ways. From enhancing
diagnostic accuracy with AI-driven tools to creating personalized treatment
plans, AI is bridging gaps in medical care and bringing about a revolution.
Imagine a world where diseases are diagnosed at lightning speed, and treatments
are tailored to individual genetic makeups. This article will explore these
groundbreaking advancements, featuring real-world applications and expert insights,
revealing the future of healthcare driven by AI.\n\n2. **AI and Ethics: Navigating
the Moral Complexity of Machine Intelligence**\n - As AI technology advances,
ethical dilemmas emerge, challenging our understanding of morality and responsibility.
This article delves into the complex world of AI ethics, examining pressing
issues like algorithmic bias, privacy concerns, and the ethical responsibilities
of developers. With input from ethicists, technologists, and policymakers, it
offers a comprehensive look at the moral landscape of AI, guiding readers through
the critical questions that shape the future of machine intelligence and its
impact on society.\n\n3. **AI Agents in the Workplace: Enhancing Productivity
or Eroding Jobs?**\n - The introduction of AI agents in the workplace presents
a double-edged sword; while these technologies can vastly enhance productivity
by automating routine tasks, they also raise concerns about job displacement.
This article paints a balanced picture, drawing on data and expert forecasts
to explore both sides of the argument. Are we heading towards a future where
AI aids human creativity and efficiency, or are we on the brink of a job crisis?
The answer may be a nuanced blend of both.\n\n4. **From Science Fiction to Reality:
The Evolution of AI Agents in Popular Culture**\n - AI has long been a staple
of science fiction, captivating audiences with visions of intelligent machines.
This article traces the portrayal of AI agents in movies, books, and video games,
juxtaposing these fictional narratives against today\u2019s technological realities.
Interviews with creators from the entertainment industry and AI experts illuminate
how close\u2014or far\u2014we are from making these sci-fi dreams a reality,
providing a fascinating journey through the evolution of AI in popular culture.\n\n5.
**The Future of AI in Personalization: Creating Tailored Experiences in Everyday
Life**\n - Imagine a world where every experience, from shopping to entertainment,
is uniquely tailored to your preferences by intelligent systems. This article
explores the cutting-edge of AI-driven personalization, discussing current technologies
and future potentials. Featuring insights from industry leaders and technologists,
readers will discover how AI is revolutionizing the way we live, offering bespoke
experiences that were once the realm of fantasy but are swiftly becoming a part
of our daily lives."}, {"role": "assistant", "content": "Thought: Now that I
have the list of 5 interesting article ideas from the Researcher, the next step
is to ask the Senior Writer to create a compelling paragraph for each idea.
This will ensure that each topic is highlighted effectively.\n\nAction: Delegate
work to coworker\nAction Input: {\"task\": \"Write one amazing paragraph highlight
for each of these 5 article ideas: \\n\\n1. **The Rise of AI in Healthcare:
Revolutionizing Diagnosis and Treatment**\\n- This article could explore how
AI is making strides in the healthcare industry. From AI-driven diagnostic tools
to personalized treatment plans, the piece could highlight real-world applications
and the potential future impact. Interviews with healthcare professionals and
case studies could add depth and credibility.\\n\\n2. **AI and Ethics: Navigating
the Moral Complexity of Machine Intelligence**\\n- Delve into the ethical considerations
surrounding AI development and deployment. The article could examine issues
such as bias in AI algorithms, privacy concerns, and the ethical responsibility
of AI developers. Expert opinions from ethicists, technologists, and policymakers
can provide a well-rounded perspective.\\n\\n3. **AI Agents in the Workplace:
Enhancing Productivity or Eroding Jobs?**\\n- This piece can analyze the dual
impact of AI agents in the professional environment. While some argue that AI
can significantly boost productivity by automating mundane tasks, others fear
job displacement. The article can present both sides, backed by data and forecasts
about labor market trends.\\n\\n4. **From Science Fiction to Reality: The Evolution
of AI Agents in Popular Culture**\\n- An engaging piece that traces the depiction
of AI agents in movies, books, and video games, and compares these fictional
portrayals to the current state of AI technology. This can include interviews
with creators from the entertainment industry and AI experts to discuss how
close we are to realizing these sci-fi visions.\\n\\n5. **The Future of AI in
Personalization: Creating Tailored Experiences in Everyday Life**\\n- Discuss
how AI is enhancing personalization in various domains such as shopping, entertainment,
education, and personal finance. The focus can be on current technologies and
future possibilities, featuring insights from industry leaders and technologists
who are driving these innovations.\", \"context\": \"Create compelling paragraphs
that highlight the potential and intrigue of each article idea. These should
be engaging and showcase how fascinating each article could be.\", \"coworker\":
\"Senior Writer\"}\n\nObservation: The Senior Writer provided the following
compelling paragraphs for each article idea:\n\n1. **The Rise of AI in Healthcare:
Revolutionizing Diagnosis and Treatment**\n - Imagine a world where a doctor
can diagnose a condition with pinpoint accuracy, thanks to the help of artificial
intelligence. AI is already transforming healthcare with advanced diagnostic
tools and personalized treatment plans. This article would delve into real-world
cases where AI has made a significant impact, providing insights from healthcare
professionals who witness these technological advancements firsthand. The future
of medicine is not only in the hands of doctors and nurses but also in the code
and algorithms of AI systems.\n\n2. **AI and Ethics: Navigating the Moral Complexity
of Machine Intelligence**\n - The rapid development of artificial intelligence
brings with it not just technological advancements but also a slew of ethical
dilemmas. This article will explore the complex moral landscape AI inhabits,
questioning the bias that may be programmed into algorithms, the privacy concerns
surrounding data usage, and the ethical responsibilities of those who develop
AI technologies. Insightful perspectives from ethicists, technologists, and
policymakers will highlight how society can navigate these challenges to ensure
that AI benefits humanity as a whole.\n\n3. **AI Agents in the Workplace: Enhancing
Productivity or Eroding Jobs?**\n - As AI agents become more integrated into
the workforce, the debate over their impact intensifies. This article will examine
how AI can boost productivity by automating routine tasks, thereby freeing up
human workers for more creative and strategic roles. However, it will also address
the fear of job displacement and present data and forecasts about the future
labor market. Balanced viewpoints from industry analysts and workers affected
by AI implementation will offer a comprehensive understanding of the pros and
cons of AI in the workplace.\n\n4. **From Science Fiction to Reality: The Evolution
of AI Agents in Popular Culture**\n - From HAL 9000 to Jarvis, AI agents have
long captivated our imaginations in books, movies, and video games. This article
will take readers on a journey through time, highlighting how AI has been portrayed
in popular culture and comparing these fictional depictions to the current state
of AI technology. Conversations with creators from the entertainment industry
and AI experts will reveal whether these sci-fi visions are becoming our reality
and what the future holds for AI in culture.\n\n5. **The Future of AI in Personalization:
Creating Tailored Experiences in Everyday Life**\n - In an increasingly digital
world, AI is enhancing personalization across various domains, crafting tailored
experiences that cater to individual needs. This article will explore how AI
is revolutionizing everything from shopping and entertainment to education and
personal finance. Featuring insights from industry leaders and technologists,
the piece will showcase the current state of personalization technologies and
speculate on future advancements, painting a vivid picture of a world where
AI knows us better than we know ourselves.\nObservation: 1. **The Rise of AI
in Healthcare: Revolutionizing Diagnosis and Treatment**\n - Artificial Intelligence
is spearheading a transformative era in healthcare, fundamentally altering how
diseases are diagnosed and treated. AI-driven diagnostic tools are already outperforming
human experts in identifying ailments from medical images, while personalized
treatment plans are becoming a reality through predictive analytics that cater
to individual patient needs. This article will dive into real-world applications,
such as AI algorithms detecting early-stage cancers or predicting patient outcomes
with unprecedented accuracy. Featuring interviews with healthcare professionals
and insightful case studies, it will paint a vivid picture of how AI is not
only enhancing medical outcomes but also reshaping the entire healthcare landscape
for the future.\n\n2. **AI and Ethics: Navigating the Moral Complexity of Machine
Intelligence**\n - As Artificial Intelligence continues to evolve, it brings
with it a host of ethical dilemmas that society must grapple with. This article
will delve into the moral intricacies of AI, focusing on pressing issues such
as algorithmic bias, the preservation of privacy, and the ethical duties of
AI developers. By examining high-profile cases of biased AI and exploring the
perspectives of ethicists, technologists, and policymakers, the piece will offer
a comprehensive look at the ethical landscape of machine intelligence. It aims
to provoke thought and spark dialogue on how we can responsibly harness AI''s
power while safeguarding human values and rights.\n\n3. **AI Agents in the Workplace:
Enhancing Productivity or Eroding Jobs?**\n - The integration of AI agents
in the workplace presents a paradox of progress and peril. On one hand, AI can
drastically enhance productivity by automating repetitive tasks and offering
intelligent insights that drive innovation. On the other, there is a growing
concern about job displacement and the socioeconomic implications of a heavily
automated workforce. This article will explore this dual impact, presenting
data and forecasts on labor market trends and featuring opinions from industry
experts, economists, and affected workers. By striking a balance between optimism
and caution, it will provide a nuanced view of AI''s role in reshaping the future
of work.\n\n4. **From Science Fiction to Reality: The Evolution of AI Agents
in Popular Culture**\n - The portrayal of AI agents in popular culture has
long fascinated and terrified us, from the benevolent, human-like robots of
movies and books to the dystopian scenarios of video games. This article will
take readers on a journey through the evolution of AI in fiction, contrasting
these imaginative narratives with the current capabilities and limitations of
real-world AI. Interviews with filmmakers, authors, and game developers, alongside
insights from AI researchers, will shed light on how closely today''s technology
mirrors these sci-fi visions. The piece aims to captivate readers by exploring
the interplay between creativity and technological advancement, revealing the
thin line between fantasy and reality.\n\n5. **The Future of AI in Personalization:
Creating Tailored Experiences in Everyday Life**\n - AI is revolutionizing
the concept of personalization, offering users tailored experiences in areas
as diverse as shopping, entertainment, education, and personal finance. This
article will explore how AI algorithms are learning from user behaviors and
preferences to deliver highly customized content and recommendations, enhancing
user satisfaction and engagement. By highlighting current technologies and featuring
interviews with industry leaders and technologists, it will forecast future
possibilities and innovations. The piece will demonstrate how AI personalization
is not just a trend, but a paradigm shift in how we interact with technology
in our daily lives, promising a future where every experience can be uniquely
tailored to individual needs. \n\nThese paragraphs showcase the potential and
intrigue of each topic, demonstrating how engaging and insightful the articles
could be."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '16854'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//fFdLb9zIEb7vryjoYsCYEWyvvFnMJVAUCVGQ3TUcBzlEB9d0F8myml1M
dXNG3P3zQRXJGUnZ5CJoZtjV9fgexd++A7jgeLGDi9BhDf2Qtjc/4nhz9Y/93dXPN+/i3fheb8bv
m/rLk/6714uNnZD9Nwp1PXUZpB8SVZY8/xyUsJJFff+Hjz+8/+Hq44cr/6GXSMmOtUPdXsn2w7sP
V9t3P27f/bAc7IQDlYsd/Os7AIDf/K+lmCM9Xezg3Wb9pqdSsKWL3ekhgAuVZN9cYClcKuZ6sTn/
GCRXyp71169fH/KXTsa2qzu4hyxHeLQ/tSNoOGMCzOVI+pDv/NO1f9rBQ35/CW/ffukIPnMhkAau
74Ez/IUw1S6g0g4+00HSaO3gXzm38GfGNkvhApgjfLHm9JTr27cPGQC2cK2VGw6MCe5zpZS4pRwI
uEAZCLUjjBYHoSrm0oj2WPlAQIp2d3e6ewPNmCNadExpAkyV1I52coTIhbBQAVSCOOdE0XOq88Au
4fp+G5UPlNcHKgeoImk+hUkJ4wQy1oHUEvHgY48Z6GkgrcUS4ki5cjN50pwsnQKNSg89RQ6YgHts
qWzg2HEiGEiLZEz8Ky252BEYEub53j0F6ecWKGHiOkHt1MYHg1pIbwdmTFPlUKB2WCFgJYUqwDny
geOICQasbKEzUSyX8KWzoWjlkAiOnBJEC8S5il+0PYqmCDgMiQPaRMsGyhg6wGKDx9SKcu36ApEq
hWo5EmqatqViSxAwB9ICoqdEc3vKQsYapKcCR64djHlQCmS9s7GEMCqG6RLuCOvoU+RcSQ9Mx+XE
efAwqDRUClsbZ5xxLtx2tRkTBCwEpY6Rredc51IH5FwB4WDNgYFDHdURbWAxVBfIUkFymoByhzlY
DusET7nvxwqYijWsdDjYM8YiQ4DS8xQT5lgCDgSN6My00a68fMgP+YPRyjqaI9zWjkPZwc944Bbr
GvEnUUxw41rzZBCQBn7C0HGmF7w586r8T2qZFnAeqRg+jK4H8sbsrc9Ld9ma00mpdhFZTmj4SNT3
uECsSGCqE/RjqdCq4YT88O9Ci9KKLSun93I4V+WAganMYrKBRsJYrGjJBpri/3Mplu2KvRPwOMCe
sWw8pD1NenCgWrRB+YBh2swct6GsVYz1dCFEOlAS4+Al/GkCesKes/Oa225rwDKOBpcOafw6irDM
ip6GJLqOyIIM5Gwsp65xqZYfhS5Lknb+aGcHSRymHh9J1wKYwtItaRpSQDBvUeooF2NmEnkErC+K
OeNKGugXQPCzaV/CfQXk3oc9qBzkkaDO6u+JlAH10RQvSTuS9d0ocHT6GqwHyYX3aYIONVMx6r8p
MMiRdFGwgg21I2o86+EBk03M4qsRsTjOv19wft26KHL2Uv4p+jgkDLSD2xPTPqnE0XrpWFe4VfHw
f5V9+eMJ5GZFVmyrp7HbaF6EP67hZ4TYLwgDKkZ5mnEirQFtngopp0v4JYNkgg5z3FhEa0VUNENw
a5kVwYXnnOV+Ahyr9DNplQaq7NJcsTzO4X2uq5jNE6qrVi2scgcCzllmKHsyVofUjtSRou6OCK3K
0YIFMZ3NgHsZK3yTvbmdl+xOsuLf+CoUJIsRh/uzrFsb0NTqwOlUBEVvXSNqIPpvRs/gNyxxAfcX
7gcMdbP22VKLWNETaEQpYKnFAJZwLwo96iNVs7wc5/Y0J7GXgbNn5sbJOY6l6rTa7AaWMk5swqah
sKa8krlU5cfZOPeYfGB7qkeiDDJU7rn0fjig7yvPzEHlwJEAIY92LIL5zoyuNwVs0TJ0vdT8Wc/t
IcvB8X5leL+zCv4e2KX3zkxQsnHx8+zlOwfx7bo0LRA+M+STDGNChZsxWfwX0B9Eq+JkhvQK+MNy
KsynoMMCSXILDZbA2afruCBVbpgijGUzN9tq2VM2X6BcNzOht4kfCVT2Uh0tvRx44fde5NHFxQ7G
qVQZGDOUQBmVxR+3bgq02NPv7h0VLThh9G0hA8I3GTXTedFxyXvVI87QzO3cuKU5P+dhFPIlywo1
NmVUxVmX3dwsWhhVjR0BB9xz4rrWk7jneubFs03o+v7SjfT5FtJw6lcRx7F2ogsirdhn7rIBtP4X
g9WJ8N7v63sDEqGGzp/zjpSOLBPT6EWQQ5JCaYIqEac35WwpE/SsKlqWwkvgbcNwYNuHvN2rtawu
EHCofMB67vl+euVlvmwNCacTY/y1Zha6GTiro5kLYTwYT3oHjNLBoL1Eqh1nSOZKa6QGc8Uyh1kW
WmfLx/Xl4u7EpHnKn04bso9lBzeeS27hC7JpUIRbEwanmMP/9kA6RZzgb9w8W4l8rdNXLygOBlPQ
wXed4eVlm7Nmj8VaVdcr6eWVqGSrSfElWgvZv6WTwfRhYwshaUXOc4sojmEJv7iO3+lvX/n/y+2y
nz5bvn29JFRfWxxSlinsqcMDGy78CqWGdM62iq1jlqZvOWmCMJYqvb+BLC+Ky3SC9D3luG7/503Y
ryhYuTQ4S5ovRLnF1mHgAuzRDcTuUgvhzsih16r/esU/6X5acPoSea7+q2avBrPq8CCl8Atinz21
/AcAAP//jFjLbsMgELz7KxDnHOy2avMxjRA2i01qg8NDVQ/+92oBB5L60PPCwsI+ZqauigxPF6Od
t1gU+YWfEmFnBFdEuzxNrVPC/wlNqHEhblLSR1aaIFQMiA8+t51SskoTEywRHAcufoXDsYkTLc2r
HMR3HPaA2VzlW4QjPZCg1S3EnrDn5CPfyzzvU0fGX6sBFmRwHMUIHea5MnCtTW5+qENcsmW7Kw+z
GVdreve0lUqllZsYFoHRqDLgHKDRujWEXKLCER5EC4oBr5558wXxuNeu7ZJDWkSVYv7odqs3ns/V
vnP3cjpwyQTgy7hKJqEDHyYQZW/RVHgQylSGpgr8732OfKfglR7/474YBmw9INjOkeuYyzIL14hw
jpfdHzpemCIRUgMwr8DiZwiQPMxJEKLux3lYmFR6BLtalVQhubK3917KFtrhTJut+QUAAP//AwAX
W2+SHhMAAA==
headers:
CF-RAY:
- 974ede154cc51701-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:29:03 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=5JVBbOOy3Z42TimGf_AeViTUOxCEjozvdxUWf6hbQmU-1756164543-1.0.1.1-ygsmJWfEEPPXLcyJJ.RM_4EU3eUrQ5umh1p5NnpVok3tJoYTzI_qdTIg85sL3QuSmGfOIM9i8qLgCV4etmoxJoh_wAlBljpVEi27.U3_dz0;
path=/; expires=Mon, 25-Aug-25 23:59:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=zAJ0jqazHLpztPpOyEi_EdNt5yCl9A2Gzu_U4E02x2E-1756164543702-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '19204'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '19224'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29995866'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 8ms
x-request-id:
- req_946a3113146e463cb4c0bc9cb8cb9a22
status:
code: 200
message: OK
version: 1
@@ -607,163 +607,4 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
are a seasoned manager with a knack for getting the best out of your team.\nYou
are also known for your ability to delegate work to the right people, and to
ask the right questions to get the best out of your team.\nEven though you don''t
perform tasks by yourself, you have a lot of experience in the field, which
allows you to properly evaluate the work of your team members.\nYour personal
goal is: Manage the team to complete the task in the best way possible.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
want them to do, and ALL necessary context to execute the task, they know nothing
about the task, so share absolutely everything you know, don''t reference things
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
''str''}}\nTool Description: Ask a specific question to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the question
you have for them, and ALL necessary context to ask the question properly, they
know nothing about the question, so share absolutely everything you know, don''t
reference things but instead explain them.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Delegate work to coworker, Ask question
to coworker], just the name, exactly as it''s written.\nAction Input: the input
to the action, just a simple JSON object, enclosed in curly braces, using \"
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
an integer score between 1-5 for the following title: ''The impact of AI in
the future of work''\n\nThis is the expected criteria for your final answer:
The score of the title.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "Thought: I need to get an integer score between
1-5 for the given title. To do this, I''ll ask the Scorer to evaluate the title
based on specific criteria.\n\nAction: Ask question to coworker\nAction Input:
{\"question\": \"Can you provide an integer score between 1-5 for the following
title: ''The impact of AI in the future of work''?\", \"context\": \"We need
a score for the title to assess its impact and relevance. Please evaluate based
on factors such as clarity, relevance, and potential interest to readers.\",
\"coworker\": \"Scorer\"}\nObservation: The score for the title ''The impact
of AI in the future of work'' is 4 out of 5."}], "model": "gpt-4o", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3675'
content-type:
- application/json
cookie:
- _cfuvid=lnKPmS3pW0Tw6qE4ZI8cf4lyVDuLg7ZNxmzxi7CqlME-1756165116673-0.0.1.1-604800000;
__cf_bm=r95s2Hh1kHgI6PI08wwTySLyufxeDnmixIX5GVDEyoA-1756165116-1.0.1.1-X5dML3NG4uvFgDnjqJ4pQWbB9MyxJisxAZJlRivMc7T1f83cCNBH_cy5fJjn4XjKks_UKUUj5PvBGifu2gLkaiqYYWvbSjrhjKseuPN8.Q4
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLva9swEP3uv0LoczzsJnYTfyuFsUHHftDA2FJsRT7bamRJSOd1o+R/
H5LT2F1b2BfB3bv3dO/uHiNCqKhpQSjvGPLeyPh6vV8e7vHbdvv5dtvfaPhw/WX59dOP71l+dUMX
nqH398DxifWO695IQKHVCHMLDMGrppdZnuZZmuYB6HUN0tNag/FKxxfJxSpO1nGSn4idFhwcLcjP
iBBCHsPrW1Q1/KYFSRZPmR6cYy3Q4lxECLVa+gxlzgmHTCFdTCDXCkGFrquq2qnbTg9thwX5SJR+
IAf/YAekEYpJwpR7ALtT70N0FaKCrHaqqqq5qoVmcMybUoOUM4AppZH5oQQ/dyfkeHYgdWus3rt/
qLQRSriutMCcVr5bh9rQgB4jQu7CpIZn5qmxujdYoj5A+O5yk456dNrNhKbrE4gamZzy62SzeEWv
rAGZkG42a8oZ76CeqNNi2FALPQOimeuX3bymPToXqv0f+QngHAxCXRoLteDPHU9lFvzpvlV2nnJo
mDqwvwSHEgVYv4kaGjbI8aqo++MQ+rIRqgVrrBhPqzHlOtlkebZc8j2NjtFfAAAA//8DAJCAq9Rj
AwAA
headers:
CF-RAY:
- 974eec8be8b98486-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:38:37 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '383'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '488'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999132'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_58742f78663b48c2be01fcc6497a46da
status:
code: 200
message: OK
version: 1
@@ -449,162 +449,4 @@ interactions:
- req_32907c02e96c6932e9424220bb24bb7a
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Manager. You''re great
at delegating work about scoring.\nYour personal goal is: Coordinate scoring
processes\nYou ONLY have access to the following tools, and should NEVER make
up tools that are not listed here:\n\nTool Name: Delegate work to coworker\nTool
Arguments: {''task'': {''description'': ''The task to delegate'', ''type'':
''str''}, ''context'': {''description'': ''The context for the task'', ''type'':
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
delegate to'', ''type'': ''str''}}\nTool Description: Delegate a specific task
to one of the following coworkers: Scorer\nThe input to this tool should be
the coworker, the task you want them to do, and ALL necessary context to execute
the task, they know nothing about the task, so share absolutely everything you
know, don''t reference things but instead explain them.\nTool Name: Ask question
to coworker\nTool Arguments: {''question'': {''description'': ''The question
to ask'', ''type'': ''str''}, ''context'': {''description'': ''The context for
the question'', ''type'': ''str''}, ''coworker'': {''description'': ''The role/name
of the coworker to ask'', ''type'': ''str''}}\nTool Description: Ask a specific
question to one of the following coworkers: Scorer\nThe input to this tool should
be the coworker, the question you have for them, and ALL necessary context to
ask the question properly, they know nothing about the question, so share absolutely
everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
Use the following format in your response:\n\n```\nThought: you should always
think about what to do\nAction: the action to take, only one name of [Delegate
work to coworker, Ask question to coworker], just the name, exactly as it''s
written.\nAction Input: the input to the action, just a simple JSON object,
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
result of the action\n```\n\nOnce all necessary information is gathered, return
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n```"}, {"role": "user", "content":
"\nCurrent Task: Give me an integer score between 1-5 for the following title:
''The impact of AI in the future of work''\n\nThis is the expected criteria
for your final answer: The score of the title.\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "assistant", "content": "Thought: To provide
an accurate score for the title \"The impact of AI in the future of work,\"
I need to delegate the task of scoring it to the Scorer, who is specialized
in this activity.\n\nAction: Delegate work to coworker\nAction Input: {\"coworker\":
\"Scorer\", \"task\": \"Give me an integer score between 1-5 for the following
title: ''The impact of AI in the future of work''\", \"context\": \"Your task
is to evaluate the provided title on a scale from 1 to 5 based on criteria such
as relevance, clarity, and intrigue.\"}\n\nObservation: The scorer has received
the task to score the title ''The impact of AI in the future of work''.\nObservation:
4"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3322'
content-type:
- application/json
cookie:
- _cfuvid=CW_cKQGYWY3cL.S6Xo5z0cmkmWHy5Q50OA_KjPEijNk-1735926034530-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSTWvcMBC9+1cMOq+Dvd/1LTQt5NJLP6C0wWjlsa1G1ghpnLSE/e9F2uzaaVPo
RSC9eU/vzcxTBiB0IyoQqpesBmfyt/vDSpvbDx+97L+gftjhzVdX3rzT1/3nrVhEBh1+oOIz60rR
4AyyJnuClUfJGFXL3WZbbjdluU7AQA2aSOsc52vKl8VynRf7vHjWVT1phUFU8C0DAHhKZ7RoG/wp
KigW55cBQ5AdiupSBCA8mfgiZAg6sLQsFhOoyDLa5PpTT2PXcwW3YOkR7uPBPUKrrTQgbXhEf/Xd
vk/X63StYD0X89iOQcYsdjRmBkhriWXsRYpx94wcL8YNdc7TIfxBFa22OvS1RxnIRpOByYmEHjOA
u9Sg8UVm4TwNjmume0zf7U59TmHPI5nQC8jE0sxYy/3iFb26QZbahFmLhZKqx2aiTvOQY6NpBmSz
1H+7eU37lFzb7n/kJ0ApdIxN7Tw2Wr1MPJV5jBv7r7JLl5NhEdA/aIU1a/RxEg22cjSnZRLhV2Ac
6lbbDr3z+rRRrav3xZvNdrNaqYPIjtlvAAAA//8DAAsTiaNaAwAA
headers:
CF-RAY:
- 974eec7c4d438486-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:38:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=ULOCzvd5MabFyMl9eSNQ3Sk_hNbVH25aTAfkSaYM6Cw-1756165115-1.0.1.1-jeyMlIwg.jml3h9XQDOYQXVxEFP44CSbUCENlr4W6RpdsZg08M.fWUD5kBWDNcupp2skTxNh5gCYFNop3HIrWq5pzgrydiA3FuSe7U15X1Y;
path=/; expires=Tue, 26-Aug-25 00:08:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=e_81pnlfj.JohLmFm8gTRISqkPCT6GwCVpOdPsECI.s-1756165115098-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '680'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '715'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999221'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_4006f3362c434ea2a39b9901f3f2e5cb
status:
code: 200
message: OK
version: 1
|
||||
|
||||
@@ -728,173 +728,4 @@ interactions:
- req_c065668c78262d2781db5b3b04f70dba
http_version: HTTP/1.1
status_code: 200
[removed cassette entry: a recorded POST https://api.openai.com/v1/chat/completions request (model "gpt-4-0125-preview") in which the Crew Manager delegates scoring the title 'The impact of AI in the future of work' to the Scorer (observation: score 5), with its request headers and gzipped 200 OK response; cassette version: 1]
@@ -555,639 +555,4 @@ interactions:
- req_d13285e05176425b72094c7e74370ab6
http_version: HTTP/1.1
status_code: 200
[removed cassette entries: four recorded POST https://api.openai.com/v1/chat/completions request/response pairs (model "gpt-4o") covering the Crew Manager delegating the title-scoring task with the required {"score": int} output format, the Scorer recovering from a failed 'Delegate work to coworker' action, and follow-up requests carrying the observations '5 - The title ... is effective and relevant' and '{"score": 5}', each with request headers and a gzipped 200 OK response; cassette version: 1]
@@ -579,657 +579,4 @@ interactions:
- req_dd38f9b6e9e6ac64ebbb827111e31414
http_version: HTTP/1.1
status_code: 200
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3194'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFRNb9swDL3nVxC69OIEcdukmW/FdlgOOwwL1mJzESgSbWuVJY+i84Ei
|
||||
/32Q8+F064BeDJuP7/GRpvQyABBGiwyEqiSrurHDj7PVzePj3aJ6UA9f59Xd/H7ifqy/04fPX2gn
|
||||
ksjwq1+o+MQaKV83Ftl4d4AVoWSMqundZJpOJ+n4pgNqr9FGWtnw8NYPr8fXt8PxbDieHomVNwqD
|
||||
yODnAADgpXtGi07jVmQwTk6RGkOQJYrsnAQgyNsYETIEE1g6FkkPKu8YXed6Ufm2rDiDOThEDexB
|
||||
o8VSMgJXCCzDM/gCgvJkXHmIGbYIuVhUCKZupOKYcT8H4zq8aLkljLGNp+dcRNEY/6Y8IY1gDhtj
|
||||
LTTk10Z3ZWrYGK5AWgsOVeyHdtC53HJk4xZVe+FolLvc3as45ww+nfzGajFb+fiGdEqBuWtazuAl
|
||||
F5Gciwxy0Zm5aOfqfd1cgXcgIShpEQryNaSx5GSUiwRycfR8KLE4zY/wd2sIA+Ba2lby60GuZEAd
|
||||
ZQup2FOA0KoKZABCi2vpFCZHYwlIp0FZSYZ3I4j6oWsjVL61GlYI0oFxjCUSrJA3iO5oMIFNhYSQ
|
||||
ggnQeE+d1iR+4Vahtej43MRxfv2gKBf7yw0iLNog4wK71toLQDrnWcapd7v7dET25221vmzIr8Jf
|
||||
VFEYZ0K1JJTBu7iZgX0jOnQ/AHjqTkX7atFFQ75ueMn+Gbty07vpQU/057BH05vJEWXP0vbALE2T
|
||||
NwSXGlkaGy4OllBSVah7an8KZauNvwAGF23/a+ct7UPrxpXvke8BpbBh1MuGUBv1uuU+jTDeU/9L
|
||||
O4+5MywC0tooXLJBir9CYyFbe7hCRNgFxnpZGFciNWQO90jRLG+nq6IY41jNxGA/+AMAAP//AwCd
|
||||
ZJbYUAUAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec36591c8486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:27 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '3819'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '3856'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999242'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_e91c7ccb367243348f19a41989b6992d
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
|
||||
expert scorer, specialized in scoring titles.\nYour personal goal is: Score
|
||||
the title\nTo give my best complete final answer to the task respond using the
|
||||
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
Your final answer must be the great and the most complete as possible, it must
|
||||
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
|
||||
{"role": "user", "content": "\nCurrent Task: Score the title ''The impact of
|
||||
AI in the future of work'' on a scale from 1 to 5.\n\nThis is the expected criteria
|
||||
for your final answer: Your best answer to your coworker asking you this, accounting
|
||||
for the context shared.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nThis is the context you''re working with:\nThe
|
||||
task requires evaluating the title based on factors such as relevance, impact,
|
||||
and clarity. The score should be an integer between 1 to 5, where 1 is poor
|
||||
and 5 is excellent.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"}, {"role":
|
||||
"assistant", "content": "Thought: I need to delegate the task of scoring the
|
||||
title \"The impact of AI in the future of work\" to the Scorer. I will provide
|
||||
them with all necessary context to execute the task.\n\nAction: Delegate work
|
||||
to coworker\nAction Input: {\"task\": \"Score the title ''The impact of AI in
|
||||
the future of work'' on a scale from 1 to 5.\", \"context\": \"The task requires
|
||||
evaluating the title based on factors such as relevance, impact, and clarity.
|
||||
The score should be an integer between 1 to 5, where 1 is poor and 5 is excellent.\",
|
||||
\"coworker\": \"Scorer\"}\nObservation: I encountered an error: Action ''Delegate
|
||||
work to coworker'' don''t exist, these are the only available Actions:\n\nMoving
|
||||
on then. I MUST either use a tool (use one at time) OR give my best final answer
|
||||
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, should
|
||||
be one of []\nAction Input: the input to the action, dictionary enclosed in
|
||||
curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
|
||||
Input/Result can repeat N times. Once I know the final answer, I must return
|
||||
the following format:\n\n```\nThought: I now can give a great answer\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described\n\n```"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2627'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//nFVNb9xGDL37VxC6uDG0W8fxOsbejKAfBooWKHwp6sDgzlAS49GMOqR2
|
||||
rQb57wVHXu+mSYCiFwEaDsk3j4/kxxOAin21hsp1qK4fwuLd9ebNJk43dPHTdHH3yx+/Xvrh8a7T
|
||||
H27jm6aqzSNtPpDTvdfSpX4IpJzibHaZUMmivn67unp9tXp9/rYY+uQpmFs76OIyLS7OLy4X59eL
|
||||
86tnxy6xI6nW8OcJAMDH8jWI0dNTtYbzen/Skwi2VK1fLgFUOQU7qVCERTFqVR+MLkWlWFDfdWls
|
||||
O13DLcS0A4cRWt4SILQGHTDKjvJ9/JEjBrgpf2u4S0BbDCMqgXYEyhoITu86Au4HdAqpgZtb4FjM
|
||||
zahjJjvbpfx4WsOOIBJ50AQuRWFPGTIF2mJ0U/0cowaMHlzAzDot4WfKdColHoqQSE9R1/fxPi7g
|
||||
7Oz3Z2+C71bfr16tz87g7gUYC3TcdmHaJ7HYAjdZuWHHGOA2KoXALZUIN7evzAehxw8pg6aBnaHn
|
||||
qJRJtLwreZxOBRoUXdA2hS3HFpRcF1NILTsMEDB6cTjQsoD5nIY5g8tjASBjERJoZ6Q3DTkV8Lyl
|
||||
LAQc/SiamaRQMuTUkAinKBYp+B17qmHXseugx0cSYAUE5Z7CVFx6wsixbcajTAnoaQgp0/I+Asw0
|
||||
3s7V+wqHMrYtiRpmy5/G6Pe1FjbWrNyGT9nxYKK3DMJtNI4xaphAM0ZpUu5LFY0EGdARbCbAUVOP
|
||||
ahx+SBupoXSO/Uba2RE4VGqTkTALA4NStgtd2llkz9Z2GIo/YCYYKFsy8ku4iRimv0uFOhKCIVkD
|
||||
GPGuw9iSzMq3nGECjgWkZeHYhLGoYkiB3WT0ZqlhMwpHMh3OaPYPalJ2x4S+m+X7DVW6QJhnmafo
|
||||
WGgJtwqeMznDMSSOKpBGLfH3lSv63CeVgZwxDKWpnxS++7LjXpke5mJJkWLppJgA+w23I+tUm26M
|
||||
HhOOc6auTSCrIMImJ/SAo2fjoQaOLozeLh+EiEFqEB09RX1mpKVIGQNkQk9ZXpqHfGmffaNYDx1I
|
||||
ThGoH0KarLlBM0UvS+vx37YWLNT/Y9yAuJRL6wA9OQrBQq8Kq6mBVaEcPf1l0yxM4HAw99KcZZo/
|
||||
1dCkvMPsF9pxnGkSGQl2rN1+QNUvA4zm9x9Jf4/yRXXL42mcqRkFbRnEMYQjA8aYFE3VZQ+8f7Z8
|
||||
epn8IbVDThv5l2vVcGTpHjKhpGhTXjQNVbF+OgF4XzbM+NnSqIac+kEfND1SSbe6ejPHqw477WC9
|
||||
uL54tmpSDAfD9eWq/krAB0+KHORoSVUOXUf+4HrYaKa1dGQ4OXr2l3C+Fnt+Osf2v4Q/GJyjQck/
|
||||
DJk8u8+ffLiWydrwW9deaC6AK6G8ZUcPypStFJ4aHMO8jiuZRKl/aDi2lIfM805uhofLq03TnNO5
|
||||
u65OPp38AwAA//8DAJ7ERcucCAAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec4fbebf8486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:31 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '4218'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4238'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999394'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_28f00852660542e88719133a722fce84
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
are a seasoned manager with a knack for getting the best out of your team.\nYou
are also known for your ability to delegate work to the right people, and to
ask the right questions to get the best out of your team.\nEven though you don''t
perform tasks by yourself, you have a lot of experience in the field, which
allows you to properly evaluate the work of your team members.\nYour personal
goal is: Manage the team to complete the task in the best way possible.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
want them to do, and ALL necessary context to execute the task, they know nothing
about the task, so share absolutely everything you know, don''t reference things
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
''str''}}\nTool Description: Ask a specific question to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the question
you have for them, and ALL necessary context to ask the question properly, they
know nothing about the question, so share absolutely everything you know, don''t
reference things but instead explain them.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Delegate work to coworker, Ask question
to coworker], just the name, exactly as it''s written.\nAction Input: the input
to the action, just a simple JSON object, enclosed in curly braces, using \"
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
an integer score between 1-5 for the following title: ''The impact of AI in
the future of work''\n\nThis is the expected criteria for your final answer:
The score of the title.\nyou MUST return the actual complete content as the
final answer, not a summary.\nEnsure your final answer contains only the content
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
not include any code block markers like ```json or ```python.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
I need to delegate the task of scoring the title \"The impact of AI in the future
of work\" to the Scorer. I will provide them with all necessary context to execute
the task.\n\nAction: Delegate work to coworker\nAction Input: {\"task\": \"Score
the title ''The impact of AI in the future of work'' on a scale from 1 to 5.\",
\"context\": \"The task requires evaluating the title based on factors such
as relevance, impact, and clarity. The score should be an integer between 1
to 5, where 1 is poor and 5 is excellent.\", \"coworker\": \"Scorer\"}\nObservation:
To evaluate the title ''The impact of AI in the future of work'', we need to
consider relevancy, impact, and clarity. Here''s the assessment:\n\n- **Relevance
(5/5):** The title is highly relevant, as Artificial Intelligence (AI) is a
major topic of interest in today''s fast-evolving technological landscape. The
future of work is a crucial subject that affects diverse industries and professions
worldwide, which makes it a timely and meaningful subject to explore.\n \n-
**Impact (5/5):** The title suggests a profound impact since AI is anticipated
to significantly transform the workspace by automating jobs, creating new job
categories, and altering how traditional jobs are performed. Analyzing these
potential changes can greatly inform and influence policymakers, businesses,
and the workforce.\n \n- **Clarity (5/5):** The title is clear and concise.
It directly points out the subject (AI) and the specific context (the future
of work) it impacts. There''s no ambiguity, making it accessible to a broad
audience, including professionals, students, and general readers interested
in technology''s influence on employment trends.\n\nOverall, the title ''The
impact of AI in the future of work'' scores an excellent 5 out of 5. It adequately
captures a complex, forward-thinking issue with clarity, relevance, and significant
impact potential."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '5147'
content-type:
- application/json
cookie:
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFJda9wwEHz3r1j0fA72fcXxW1oIlPahlFAKdTA6eW2rkSVFWt+1Pe6/
F9mXs9Ok0BeBNDujmd09RgBMViwHJlpOorMqfp/tVvXTzVP7ZU2Zv91/fWc/Hu7Ft893+0+/2SIw
zO4HCnpmXQnTWYUkjR5h4ZATBtX0erNNt5s0TQegMxWqQGusxWsTL5PlOk6yONmeia2RAj3L4XsE
AHAczmBRV/iT5ZAsnl869J43yPJLEQBzRoUXxr2XnrgmtphAYTShHlzft6ZvWsrhA2hzgMdwUItQ
S80VcO0P6K4KfTdcb4drDsdCAxTMC+OwYDlsCn2a6zuse89DPN0rNQO41oZ4aM+Q7OGMnC5ZlGms
Mzv/F5XVUkvflg65Nzr49mQsG9BTBPAw9Kx/0QZmnekslWQecfguTbLtKMimMU3wMj2DZIirGS1N
rhdvKJYVEpfKz/rOBBctVhN3GhLvK2lmQDTL/drOW9pjdqmb/5GfACHQElaldVhJ8TLyVOYwrPG/
yi59Hgwzj24vBZYk0YVZVFjzXo0bxvwvT9iVtdQNOuvkuGa1LbPkZrPdrFZix6JT9AcAAP//AwBy
ib1VbwMAAA==
headers:
CF-RAY:
- 974eec6aa9b18486-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:38:32 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '556'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '614'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29998767'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 2ms
x-request-id:
- req_92650754cfb84bb9bff203c3b554da6e
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
are a seasoned manager with a knack for getting the best out of your team.\nYou
are also known for your ability to delegate work to the right people, and to
ask the right questions to get the best out of your team.\nEven though you don''t
perform tasks by yourself, you have a lot of experience in the field, which
allows you to properly evaluate the work of your team members.\nYour personal
goal is: Manage the team to complete the task in the best way possible.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
want them to do, and ALL necessary context to execute the task, they know nothing
about the task, so share absolutely everything you know, don''t reference things
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
''str''}}\nTool Description: Ask a specific question to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the question
you have for them, and ALL necessary context to ask the question properly, they
know nothing about the question, so share absolutely everything you know, don''t
reference things but instead explain them.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Delegate work to coworker, Ask question
to coworker], just the name, exactly as it''s written.\nAction Input: the input
to the action, just a simple JSON object, enclosed in curly braces, using \"
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
an integer score between 1-5 for the following title: ''The impact of AI in
the future of work''\n\nThis is the expected criteria for your final answer:
The score of the title.\nyou MUST return the actual complete content as the
final answer, not a summary.\nEnsure your final answer contains only the content
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
not include any code block markers like ```json or ```python.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
I need to delegate the task of scoring the title \"The impact of AI in the future
of work\" to the Scorer. I will provide them with all necessary context to execute
the task.\n\nAction: Delegate work to coworker\nAction Input: {\"task\": \"Score
the title ''The impact of AI in the future of work'' on a scale from 1 to 5.\",
\"context\": \"The task requires evaluating the title based on factors such
as relevance, impact, and clarity. The score should be an integer between 1
to 5, where 1 is poor and 5 is excellent.\", \"coworker\": \"Scorer\"}\nObservation:
{\n \"score\": 5\n}"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3822'
content-type:
- application/json
cookie:
- _cfuvid=CW_cKQGYWY3cL.S6Xo5z0cmkmWHy5Q50OA_KjPEijNk-1735926034530-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLBjpswEL3zFSOfQ0VIIFlu1UpV01MPbaWqrJBjBvCusS17yLaK8u+V
IRvYdiv1Ysl+857fm5lzBMBkzQpgouMkeqvi+/0x++Z2h/RR3X/++N3Lr+r4Kdkc9N3hdGKrwDDH
RxT0wnonTG8VkjR6hIVDThhU17ssX+dZmmYj0JsaVaC1luKtidMk3cbJPk7yK7EzUqBnBfyIAADO
4xks6hp/sgKS1ctLj97zFllxKwJgzqjwwrj30hPXxFYzKIwm1KPrL50Z2o4KOIA2z/AUDuoQGqm5
Aq79M7pSfxhv78dbAedSA5TMC+OwZAVkpb4s5R02g+chnR6UWgBca0M8dGcM9nBFLrcoyrTWmaP/
g8oaqaXvKofcGx1sezKWjeglAngYWza86gKzzvSWKjJPOH63T/NJj81DmtF0fQXJEFcL1na3ekOv
qpG4VH7RdCa46LCeqfOE+FBLswCiReq/3bylPSWXuv0f+RkQAi1hXVmHtRSvE89lDsMO/6vs1uXR
MPPoTlJgRRJdmESNDR/UtF7M//KEfdVI3aKzTk471thqn9xlebbZiCOLLtFvAAAA//8DAGD1uEts
AwAA
headers:
CF-RAY:
- 974eef2f6d51dc0d-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:40:25 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=OccMxrmCRox_7bIJtedFlhoPOuuUvZzIFuNich2H1JY-1756165225-1.0.1.1-GZu0FYiwF3GrVFp6V5lDMCNsqJ3lPFJoATnPc50dACgSXHbkzGM7.MHv7GvUoDKXSyRy2Rxf5hGsRH_FPp.tBEzcki6qQDXIBhWde1NEJkE;
path=/; expires=Tue, 26-Aug-25 00:10:25 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=aj4gjWj2skMQt34sZXxfiLoWjDQdxXvxh0I9isfN5II-1756165225584-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '510'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '623'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999099'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_18884789d50842b7bcfffb3752d3fd0f
status:
code: 200
message: OK
version: 1

@@ -561,634 +561,4 @@ interactions:
- req_c8bb2af8f4a63658c3c9316045a413e1
http_version: HTTP/1.1
status_code: 200
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3194'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xUTW/bMAy951cQuvSSFEnbpI1vxYYBxbC1w3rZ5iJQJNrWKouCRLstivz3QY4T
|
||||
u/sAdjEMPvLxPYrS6wRAGC0yEKqSrGpvZ++utud36tZ8jtWHL/rO8O26/fbykb/P3ScnpqmCtj9R
|
||||
8aHqVFHtLbKhHlYBJWNiXVwuV4vVcr5edkBNGm0qKz3PLmh2Nj+7mM2vZvNVX1iRURhFBj8mAACv
|
||||
3TdJdBqfRQbz6SFSY4yyRJEdkwBEIJsiQsZoIkvHYjqAihyj61TfV9SUFWdwAw5RAxNotFhKRuAK
|
||||
gWV8BCogKgrGlfuYYYspk5oAXxUFDFPwgVqj+5QangxX0PV5ZpBbanhUKp0eyCVDJZ0+zV3urlUa
|
||||
XQbvDxKeKDymTorSH4ZDCtw433AGr7lIJLnIIBedlFGbk/sKwdReKk4Wrm/AuA4uGm4CplhiPQFy
|
||||
ICEqaRGKQDUsUsvlaS6mkIvexL7F/UG1cS3ZFiNgK20j+TCb0rToRjaHsUgHxjGWGLphImyRnxAd
|
||||
LLrEJWxlRJ20HB2cRPCUTspI2xuZQkCLrXQKp11d4gwYGSy2aI+a+3ENcwm52I13IGDRRJlW0DXW
|
||||
jgDpHLFMQ+6276FHdsd9s1T6QNv4W6kojDOx2gSUkVzarcjkRYfuJgAP3V43b1ZV+EC15w3TI3bt
|
||||
VperPZ8YbtKALhYXPcrE0g7A5bq/Dm8JNxpZGhtHV0MoqSrUQ+lwj2SjDY2Aycj2n3L+xr23blz5
|
||||
P/QDoBR6Rr3xAbVRby0PaQHTS/OvtOOYO8EiYmiNwg0bDOkoNBaysftHQMSXyFhvCuNKDD6Y/UtQ
|
||||
+M3VfL1cLc/P1VZMdpNfAAAA//8DAFYiBXASBQAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec034af08486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:17 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
path=/; expires=Tue, 26-Aug-25 00:08:17 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '2201'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '2275'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999242'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_8253241bc24547018d1600b0ec310060
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
|
||||
expert scorer, specialized in scoring titles.\nYour personal goal is: Score
|
||||
the title\nTo give my best complete final answer to the task respond using the
|
||||
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
Your final answer must be the great and the most complete as possible, it must
|
||||
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
|
||||
{"role": "user", "content": "\nCurrent Task: Score the title ''The impact of
|
||||
AI in the future of work'' on a scale from 1 to 5.\n\nThis is the expected criteria
|
||||
for your final answer: Your best answer to your coworker asking you this, accounting
|
||||
for the context shared.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nThis is the context you''re working with:\nThe
|
||||
task involves evaluating the given title and providing an integer score between
|
||||
1 and 5 based on the title''s potential impact, relevance, and interest level.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
|
||||
"Thought: I need to delegate the task of scoring the title to our Scorer, providing
|
||||
them with context about the title and the task at hand.\n\nAction: Delegate
|
||||
work to coworker\nAction Input: {\"task\": \"Score the title ''The impact of
|
||||
AI in the future of work'' on a scale from 1 to 5.\", \"context\": \"The task
|
||||
involves evaluating the given title and providing an integer score between 1
|
||||
and 5 based on the title''s potential impact, relevance, and interest level.\",
|
||||
\"coworker\": \"Scorer\"}\nObservation: I encountered an error: Action ''Delegate
|
||||
work to coworker'' don''t exist, these are the only available Actions:\n\nMoving
|
||||
on then. I MUST either use a tool (use one at time) OR give my best final answer
|
||||
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, should
|
||||
be one of []\nAction Input: the input to the action, dictionary enclosed in
|
||||
curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
|
||||
Input/Result can repeat N times. Once I know the final answer, I must return
|
||||
the following format:\n\n```\nThought: I now can give a great answer\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described\n\n```"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2548'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA5xVTY/bRgy9+1cQuvRiL7zfW98WKVoYCNoiSHpoHRjjESUxOxoKQ8peI9j/XnC0
|
||||
lrxpChS9CNJw+PhIPlJfZwAFlcUKCt849W0XFu8edtcfPskfhO/p+Vjq49Ofx19/2/10uPz0Cxdz
|
||||
8+DdF/R68rrw3HYBlTgOZp/QKRrq5f3t3eXd7fLH+2xoucRgbnWnixteXC2vbhbLh8Xy7tWxYfIo
|
||||
xQr+mgEAfM1PoxhLfC5WsJyfTloUcTUWq/ESQJE42EnhREjURS3mk9FzVIyZ9ceG+7rRFazhQCEA
|
||||
7l3onSJog6CkAWHnBEvgCKQCHZsnuQDUds7rHBIG3LvocQ4ulkBRMaEoBNxjACfQYcpoNe0xQo79
|
||||
rBebuIkL+H2EW2e4FXwc424Kex/CAFfwuAaKGanqtU9oZwdOT5sCpK9rFBVwUJL4XoQ4GuWGD+an
|
||||
6JvIgWtCGfLU5KJUnNoM0QXnUeZwaMg3QIYjVEeqyLuoOS+lFsMRlDvycCBt4EAlLhI631CsjWcg
|
||||
76z1cmGpfTjVZchJ+qwUA2+obsLxVDi1Gj2uoXECO8QIFE01QrEOx1zOOpmG7JVh7xJxL+CkQ69y
|
||||
qsEcKFahx+iNyxfegQlAhpYkLLGimFnGshdNhAPH9alZ761ZRpTktfokEOhpSBmcarIunBdlbHTZ
|
||||
o93JLU58sDC+T8RCeszxPUePKULC2qXS7I/rHyQzPHXUcqg4mYha95SZKjiwccIQ7HsofMXpvMND
|
||||
doIu+SYr6h1HoRJT9mhQ8FSo+UngXeI9lWgd9pwwI45azxhnIxH5AN7FLF1wUNs0g4tywLSJP1N0
|
||||
AR7z1wpuYfH/tDsJYrhe9WEcKn07U5bVWbVHDTs1ftNoDgHfiJ7iKB1Th6BXTjKgq0xDbNgca86h
|
||||
xkILuB33+k0C2HaBjy1GvTjfLgmrXpwtt9iHcGZwMbIOE2J77fOr5WXcZIHrLvFOvnEtTLzSbG0q
|
||||
ONrWEuWuyNaXGcDnvDH7N0uw6BK3nW6VnzCHu72+GvCKaUdP1qvl7atVWV2YDPfX9/PvAG5LVEdB
|
||||
zpZu4Z1vsJxcpw3t+pL4zDA7S/ufdL6HPaROsf4v8JPBe+wUy22XsCT/NuXpWkLbTP92bSxzJlwI
|
||||
pj153CphslaUWLk+DL+XQo6i2G4rijWmLtHwj6m67c3drqqWuPQPxexl9jcAAAD//wMAzZYQhWwH
|
||||
AAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec121c168486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:21 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '3958'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4032'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999412'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_bd8ada32c9474dad85877e84f73b632b
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
I need to delegate the task of scoring the title to our Scorer, providing them
|
||||
with context about the title and the task at hand.\n\nAction: Delegate work
|
||||
to coworker\nAction Input: {\"task\": \"Score the title ''The impact of AI in
|
||||
the future of work'' on a scale from 1 to 5.\", \"context\": \"The task involves
|
||||
evaluating the given title and providing an integer score between 1 and 5 based
|
||||
on the title''s potential impact, relevance, and interest level.\", \"coworker\":
|
||||
\"Scorer\"}\nObservation: 5 - The title \"The impact of AI in the future of
|
||||
work\" is highly impactful, relevant, and interesting due to the transformative
|
||||
potential of AI technologies in various job sectors and its relevance to ongoing
|
||||
discussions about the future of employment."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3994'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLBbtQwEL3nK0Y+b6pkd5OmuRUQEgiJIgE9kCry2pPErGNbtsOCVvvv
|
||||
yMl2k9JW4uLDvHnj997MMQIggpMSCOuoZ72R8dtit/levOkG/u72k9x/zL7d8DuXfjl8vr++I6vA
|
||||
0LufyPwj64rp3kj0QqsJZhapxzA1vc7yNM/SJB2BXnOUgdYaH291vE7W2zgp4iQ/EzstGDpSwo8I
|
||||
AOA4vkGi4viblJCsHis9OkdbJOWlCYBYLUOFUOeE81R5sppBppVHNar+2umh7XwJH0DpA+zD4zuE
|
||||
RigqgSp3QHtVqUq9Hwu3Y6GEY6UAKuKYtliRErJKnZY/WGwGR4NBNUi5AKhS2tMQ0Ojt4YycLm6k
|
||||
bo3VO/cPlTRCCdfVFqnTKih3XhsyoqcI4GFMbXgSBDFW98bXXu9x/K7Ybqd5ZN7TjK7TM+i1p3LB
|
||||
yrPVC/Nqjp4K6Ra5E0ZZh3ymzkuiAxd6AUQL18/VvDR7ci5U+z/jZ4AxNB55bSxywZ46ntsshjN+
|
||||
re2S8iiYOLS/BMPaC7RhExwbOsjpwoj74zz2dSNUi9ZYMZ1ZY+oiucnybLNhOxKdor8AAAD//wMA
|
||||
Q4ZWRG8DAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec2bee738486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:22 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '879'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '923'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999055'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_5699dd0d905247f6b4b41d76411079e0
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
I need to delegate the task of scoring the title to our Scorer, providing them
|
||||
with context about the title and the task at hand.\n\nAction: Delegate work
|
||||
to coworker\nAction Input: {\"task\": \"Score the title ''The impact of AI in
|
||||
the future of work'' on a scale from 1 to 5.\", \"context\": \"The task involves
|
||||
evaluating the given title and providing an integer score between 1 and 5 based
|
||||
on the title''s potential impact, relevance, and interest level.\", \"coworker\":
|
||||
\"Scorer\"}\nObservation: {\n \"score\": 5\n}"}], "model": "gpt-4o", "stop":
|
||||
["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3760'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJBbtswELzrFQuerUB2LVXVLQnQoufmEKAKBJpaSawpLkFSdgvDfy9I
|
||||
u5bSpkAuBMjZGc7s7ikBYLJlFTAxcC9Go9LHcrf9dnh+mO7p+VN+3PdPGvcH9+XD7rHs2CowaPcD
|
||||
hf/DuhM0GoVekr7AwiL3GFTXH/NiXeTrIovASC2qQOuNT7eUbrLNNs3KNCuuxIGkQMcq+J4AAJzi
|
||||
GSzqFn+yCqJMfBnROd4jq25FAMySCi+MOyed59qz1QwK0h51dP000NQPvoKvoOkI+3D4AaGTmivg
|
||||
2h3R3tX6c7zex2sFp1oD1MwJslizCvJan5f6FrvJ8RBPT0otAK41eR7aE5O9XJHzLYui3ljaub+o
|
||||
rJNauqGxyB3p4Nt5Miyi5wTgJfZsetUGZiyNxjee9hi/K7P8osfmKc3oZn0FPXmuFqxNsXpDr2nR
|
||||
c6ncoutMcDFgO1PnEfGplbQAkkXqf928pX1JLnX/HvkZEAKNx7YxFlspXieeyyyGJf5f2a3L0TBz
|
||||
aA9SYOMl2jCJFjs+qct+MffLeRybTuoerbHysmSdabbFrusyzETJknPyGwAA//8DAAH6vKZtAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eed9bdf8aeb2a-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:39:21 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1085'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1107'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999114'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_1b900c7238124d04835cb54adf6d7aeb
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
version: 1

@@ -1028,643 +1028,4 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"input": ["Example: Toy Cars(Math Example): An example used to illustrate
addition by counting toy cars."], "model": "text-embedding-3-small", "encoding_format":
"base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '172'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
||||
H4sIAAAAAAAAA1SayxKyyral+/spVqwudUIuSmauHncQkOQmYkVFBSAiKKJcEsgT590r8N9xqqpj
|
||||
A1ExYY45xjfzP//1119/d3lTFuPf//z196sexr//x3bslo3Z3//89T//9ddff/31n7/X/+/Mss3L
|
||||
261+V7/Tf2/W71u5/P3PX+x/H/m/J/3z199GidoTsqiXc6z1VOAqVIG3cv6hWV69X6PVmpZJIGeG
|
||||
Ljjqn7CVmAbrIAqdxU01A72LRSRyzxx7wtqhhPiJszzR7rJmNt5ZAV4PM8KnOEjy7j4JIrCyF4+P
|
||||
EpdSrqvADFFdWTjvOTefzfulAL6rIazT4aOuBzS0ULmMJr7QgoL5CpgMKm9HndgDjiPSFYIEMfZP
|
||||
OAwno1nUN8dCsdNSknFE6Yk8OxrIeL0gWms30WLcLzxwGsTjhCOywwY49lAcz0ci3dCN8tTrvkC+
|
||||
5pGHhNxXh+6GM6ivionlopLBaHwkAx33s4hzP0uatatXEdVt3xCj1M4OL/F+gQaN/eJYlUlEVVJp
|
||||
CClH19vRvqW0FDsJkFPwwfHBvOZrEEkQfa+WQqQy7ihHglGC/Wuf4Ys1CM7KsE0L2WhF0yGVS0py
|
||||
oEroeryzxKidrzq7SVKil+z02HH7tV9vPPiC9W0gotkVymd8/jDITjt7EpPb2M9BkDGiWdDJW+Is
|
||||
pJzRnGPw5c2aaNv1LH4Uaig7DhLO2lBqhL1eScBdFmaCB7PIeXEXsOiY8Qou67BW5/mcfmFvqDbJ
|
||||
09TuhcXxakj1x4P4h8sJrFZFGZC3mYTl9GFSLtYEHk7M5UrczzfL16vaTWgVpNcEjX2QC0Omzuh+
|
||||
WjC5F3dZ5bbrgbbl8uRKh2+/lPuqQlrj73CS6cAZ7ezFwyftTJK/+TBin/2ng69Zbogf3s903a0D
|
||||
C5P7fiDGAXMqvYmHCu3Fe0pUt4mb5dl/JQRiVE2cL0r5nKPQQq4KSmxl54Wu73bOEI+EGZvyWXMW
|
||||
cv5WQAsvO6+6IUQXHAw8jNUr56H3lKqcF/AJTCTmNvHZ86GOfhAlyLwtBlHGQ9svmewbKD+rA1ZM
|
||||
S+1ZculEeL9FDTkKbtmze8bjobFjPS85tJJKhtisEdTWdIKOKed8rE0THGK/JXinXMHqJHcXPtVz
|
||||
M4nc22z47f5D1aje+BxncSRkKA8BfYk3j3OBkXOckc8QGDXEGmfd83W6PDpRwK6G02SsKT/mc4fi
|
||||
9XQm4YteGi4x8hbSM+SwHH8UIDxHX0It4r4T/Mp6xL4+QYocU0nIkRFhv3Cur/ypJxPUCljTHftF
|
||||
j8OgYmyBr7MmhlIi7elfSGQN55791F0HC+rNGMdHqJLwiNsDGa0DDgjX9XSIcQUNP42wW7prtIzX
|
||||
pUZj+MxwnJ3nvH19MvunB1in7NOZ7eLpwiJa2ak9YM6huzWaIBeYV68wJTcSHrPji3VqvfFv/WbW
|
||||
/qZwDLUJyxaL6cpZ2R5MeHcierNbnElmJRddBlIQ+0HFZhkKeYbXa14Q3YJuxJKoz6DccZjcU4tX
|
||||
e8h5DLScviVYQrw6p/regzyJWWyH06eZX71vgdIXLZItVdgLdz5Q0PPpOcRPvAPd9LZCY9hm3pAV
|
||||
Vr4mluyiVy1FWFmCXUMfXyzB90BqchQi2q9OkZdQTg9v4hDWiLizHWio6mFEnE6vm7Ve9jYctITB
|
||||
UqsOdJLY6/CrP29ubZnysdYOqLspBpaWWQPC42sq8LRIKzHCdnTGvohCxN/3e6Luwj6axN2VhyZj
|
||||
KSTop6gRQhc8gQD6/TQv1wMdd2zjofr9bDzuzfTOEtmsAZ/xikku9HrOc1a4R2f5hLDq0m+z8rZl
|
||||
wcaQTWLVkU7XrmItxDbTizipj3tKvY8CJl/0cGCxhM5DcUzBpo/kLDSduhyrWvvVC/bG2VbpHB1r
|
||||
KKZdgc2+zcBy1toEnZa6JGf/a0fzxTgOMHwfj7gw/TpaKj4N4TRdF2+3uz4BldYoEZIdH2PpcBnB
|
||||
ylmhCJs2U4kVtiNY7tO0wmIZhymy2Lkn1G4qVE7XHivj0jf0W11ryKnakYSTLDcrhLOG/NNTxuab
|
||||
6en8eo0ZtM2IJyfCmqpQHDQfPeuITgBkx0i4T6SGE2FvOObeer4eq7pAJiAM8bj30aF9YVdg+fon
|
||||
nBifnbMeM2FA6+UjeztQv8B8QCiGfZsZHndUH9GsDG4Br91lmECzi3I2ctcSLHaKcMK9DWc9XUQI
|
||||
4joY8TWVGUoSW4pRxIoHrLo97vly5W2U7NgYy9bAgtlJ3hBSLrniY5p+6Fjy1IfnWb7hm/UsI5Z1
|
||||
HxY0dryHpaT9gtXOWg+qXHrf9EToJ+3TFGjrJ/gMnIvKCsZx/6sfrMv6GA2POe8gc8+xJ/ToRef7
|
||||
JOzhYmeI6MC5OFRvEgNp0fmyfV5QJ/XDMhB764M4PTdEozJoPGwSxpy+yavrV4LbDq3v9kO0cRmi
|
||||
+SIlEG79ceIPbaWu0hy7aML3h4f8pY7WJVBSpKSd+ace1z6L9+jRcTlWkuTcr8fqW4BPxnoeAiFW
|
||||
eYkNBnSqtZrcinuVs6mkl8gch4mcCkmM1sOuT8CvPiNdMxoKdtKEgAF9XJjSkFO9DwxoSMwF6yA/
|
||||
OXO2q1NEE3giR7+eAWXmHYSbXkz78VqAFbGchnbw+MayfH7Q9THbIvym1hf74Z2jvVXBBM5FffMA
|
||||
gY9oeb3GGdLnbSHYotiZI3ctIG3TowdbpY9ojuQZ3Uu3wZ7RQad7v482BBTM0wM0bS/kO6lC4/wI
|
||||
J053lXw9u76L4mZeiDTJFWCf/eMLbmqnYc04xPmcQlOESP5U2E1OaTQd1EpE013ksO6vej78Wf9L
|
||||
LxN73DfNvJ40C+0m+0KM8WCp9FO8KvTJ4hrH3k7NhU/59FF4IacJ+PXSr+37uId+Ib2wN0mVQ5dA
|
||||
meHmL4kDsmM+XgEzA7deEnLSIMlXfCID8KbrC5vp8aCucxDtkZIQQk66q0dUMOcQxU2kE3mX6f2a
|
||||
7YIOGkRssQ5qxVnkIYnhzgtMIjfzrqfjmV1/eoCTMnacKfKqEh3tbJoIyGwqWM8uBjtkvKev6T/6
|
||||
ueTFPRRGG06gwp+eCvqhhudK53BqVyWdH6slAn2pc4Ljx8lZVxekENajP61h6/QFp7z/3V+07HTO
|
||||
Oae88vCxX1QiebsHXW4U+3A5uDciLY+Yst3NTOFN/sQTD5mDuiKeshDfghuRXfoB3bGuCnAOjZe3
|
||||
yKcW0Is5a8DeGwaW4/ACptCfJzT35TgxHvNwqPpmniDN3nCi/vrs5938ShDU5hQHfoTAZL5sCEp0
|
||||
Bn/qk9f62oMwniRsqLLvrM1L6KDspCrxwlFXl2Pd8fB65ecJNaR06Bgxys9/egxkCoeMZ3aGWjyr
|
||||
WMn0Qz8GkfWEn8NgkKIO5Yg/1g8XbfWAPZsE6kwu3R7uT4qL9U6XoqV7LiEsq0Qj2gNe8w5HjQcv
|
||||
k21Nk3JI1VkezhCiRz17B7dJKAld0MJNX7f1MCjnFLkGwRnusd5PnNNxxlME3EOVCfY/lkNPFxDD
|
||||
JeODif3p86f6dkCYrPj3POVLZHGxqD7vd2zS4UNnhmXj3/2eGOWQOgM+PyDs8oeC/TG704X1Kh9d
|
||||
u/NAwlKj/fR8P/eQDPiANY+cosUpAx6daqMmsuDmPVuxgY0YwjfE8b9DvxToqIAhnkNyexqfqNWb
|
||||
twe7w10kzlFt+mGvzylyzmqLf/6TTufehxiHJ6JvfpAX9MVCQukOJC6LVz4+VjmEmWCwWI7rkX7q
|
||||
r+fD2dE3/ehZh86RXIFVqIOJ89dXvshDGYuc0nE49N5BPzIsHMSb8h2w9OpzSs92wIJJYlqiLZ0T
|
||||
rRdNHVBDrwAr4WT0lGCigOYMCU5M6+VQaY5b5FL3S4rWNnIuA2MIPp+1nARruDiCd3nwiFvKM7Fa
|
||||
5QOGrd/8+gnB/bhS6kXPWJT6mk5sWC4REdR1ho/jwSGYvC50mXGxguzoxthIkotDX5+sgsc0Z4ja
|
||||
M3a+SANhD6x6+UwjIzLOciwhD10tWYn3PcoqOxRyBk4yaYlSR01DCzj70Po+SmIt8xsswenMQseU
|
||||
EiIZhxBwXsDEsObedBJbxe65LQ9AE32TCSoLjNaSry34XHGOLciXlP+m7y+YHXPxpsylzgx2gQG9
|
||||
KX95fKsc1Tk/xB1arWHBMkFN8z2doQsco/pOQ+iV0XwKcQHceDZIsaRpPmeo78AOPSOihZMZsdLw
|
||||
LsSxrhQSq8dvNMxn/4sU7rMjpgZPzhBbLIMiQ5Wng0tdsFzVKvnTH0XusjTLHIU+hNpFw8qWD9bm
|
||||
tfuC/AwHIpnyiS4X6cRDrY7eHkkrU/35KRhH5xv2OJw6HUQWC9/wY5PS41tn2fIg+lonZapR0vWr
|
||||
j2PmV6/EbASejge1q1Gav2Jsxouk8naV/tEbfKJF4iyJZbFilEAZb9+njotzD3/9jWCg3sGkf64M
|
||||
3Po/NkxLV/uLurYwTL8nb594EMyJFTBoCAXRA7urGgmR+zVgcJjPxK1DOWfDk8qg/VBY2F/mJ1js
|
||||
sokRXMY91t8EOOSXh2I9OW18QY14CGwWjrWUENMCb2fq6nUPeD97YT29ThERzL0P5es1IrrGXfKB
|
||||
s/09Wtv+Q3TyYtXRfJwM8C69lrjI4KPldFmf6PDx99j9HquIIBaVENKvj8NJfvTUjzIePKNVwGrf
|
||||
HqLJu1Ye+HZGiD3IQHX88YT9UFrEM7rCIaEnJlCraYQlZASAO13EFtzLCWPsiprKioc2hubkZVga
|
||||
PwSQi74k8A0+Kw4YsOu/haDbEJfujciFDOga2zUDlXdHJkTpwxm9iDNQdzLKiXmtvDozM88DemY4
|
||||
D47zMVrIua6hVgdvbNuC3VNpLp7QScqQYFlf1MU/vWOYyOURSyHe0T9++1ZLPnY9popWvSHujydN
|
||||
l6dXq+wUsXugdFZO/HH5bv68nUFvyDY2sqesUqdQEtDinU30yhyiH3+Ck5/V3m3ze4vSmQm0BW3C
|
||||
Se18HXI4kALsyq9KNO79yid5PRZAa8IdNrLCUNcbT7+wyqsdyerIaJZP2ddQ1V9vfJQQpWtkoQme
|
||||
A+eFldqxGs6PMg2o3N7BkXyumxXspBh9322Lj/FSUSKvRx7SNjsSb+u3pOS/NXzqcYSzTS/4Oco6
|
||||
8dlMdyzLp5ZSabhr8BUavUfp+RbRJbBXSG6nAAcuPVJBHT8JdM6P0BNie6cun+obbqngQy47O6CU
|
||||
s6LNL3g3svGnSJiu1Yy6nkmIEV5uzmq2exeC880lxzRgwaqQxUMb78A6fXrqvOMfT+jn5ouoFus3
|
||||
0xBjC4ZCeyOmL8oqN0VcCM/qV/bY8WBGgp1NLTgewpYY+91dXdzM5uFTTyJvnxUtmPvKT+Gtej0J
|
||||
Tv1AXY23PKCTaofE6VuYr4l0H+DxeJ+wTG9CsyyRNcOimQpimZYOOHhwBvirvxN9LirZ/ATUg97B
|
||||
6kmt+kXpcAIds1awFyaZOh8znoFBxurYNv1Hs/nNGgb7GRG5Mu1/+6lkOsQkXdKEzp+CE5HKiQ45
|
||||
FZdXtKT6/gkvpVuRoJBzwFLTFNGWx0jwNJ50EDTHgp/rxcZadnPy9aLvGaA9w8vEcuk5n/f6nKEJ
|
||||
oxPxOLxXF7tKn3DLE5v+js4ffqKHloDPfblz6MZD4MYTiSrEl5zylrzCk2qF2JF13LOCoqXIXSiD
|
||||
tbAdIvIts1WcSLJseY/mw7FkC5RdTk/y428Ca7QKrMQbJjoj5upcIOsr7rwpx0fKvpq1TxMLmt5c
|
||||
bnniDL7ZLvgiUahiEmZn3NC20VbYP3cZPhIo9ewcBSLyfFHBWNYDlV2dUwqHxuUndusvS4FkBaLg
|
||||
bWG85eslP5wlgJE1kos2ImcRDMVH0qDcsbHl0VFgTBec5LH1HhtZmPV3w8C75wNP8MWHOpT8KsLa
|
||||
cndY4dK4mX781CCZSOxML/rVLgYGco96xJs/oJP+jlxov78r/j0v1EkuLSji2cWXN+Ns/qsL4aa/
|
||||
nuBS11kl9jr96hsby2w6lHoPCQnF2OHf+i+cHfiQqWKfWMnLBJxbWskhyHgdu8pSRDPDwgS6Gl9i
|
||||
hz7naGD2VYKC9DCSjXf04+udK0A+rNibk/ZL19tezOCWVyaIzm0+3/lHhhhJjIl2Rfd8mfBuhWNk
|
||||
GNhGnt8sPz+08ePpq8pBPy+RlcLamvqJscni9DgYW7Dx3UnYKTxYV38vgs1//3hzxN35xyr6X5Wb
|
||||
dlVSqbx4IBo8KZaIDZTYzs/f/+FLxrKvnaWrxT96RLb+4FD/bBmgUDmXYOErUX7jt3AM3sq0CC5o
|
||||
hsTIXRha02vL00ouLFhTxI1v4dOb8VQhBbsaFNHM4mLcaxG98VCCX2uasCPkOFoWrPpiz7EKlhlw
|
||||
byg+By3YeNpPr/P1W2YZnG/Kkajp8dIvfXpeYfx0dSJb8LvxcUcBjJxJExc/jhHnpu4fPkTSZFTo
|
||||
1n+qP7zYs4VB3fxrAok3j8T2+Gc/O1kE/6z/CdS6KsjrkQUXZM/ervJeDjHvWBP3PVN4IyOW6rgE
|
||||
igT8/B0Qt4ze+UrOQYbo875gt1aDiJ6uqwYzc/SJ42dJv3DKe4byh39jfeMz3xzEFvzpi7Hx4rmr
|
||||
QAqDz1nG8cbvu2PJlgCw/Ixd9TjmCz5/E9gkRY4di47NKs/2BNx4NYjB+cCZx9QRIXpUM3GE5pvP
|
||||
TpW68FJQltjJy+qHw47VwGKBw8QsfunM37K3QNYeWWLK+uT80XNGTiWc2Lun2mnNjoGW27bEbHYL
|
||||
pfo7f0JzGBMsVxIHJkE/WGDcNYeJt3eaunYVtECyizsSWM8rXZ+vyYZ1MgKiZPq1GfTXywNuJBie
|
||||
GCb7Zv4WzxS+Qb9i6QEFSg7ytwZPjX3jGJXPnDYfVMGXauX4UljUGVQylz+ei60HiOn67OsM6Y8n
|
||||
nRahCBrBDwoN7guFxSpBWjQvTrI/bH6U6Bu/GNuP5EPhRFOsMyJwRoZ/MEhNJED0XTioXWJE3n7j
|
||||
uxPqkeVs840O5m0qYZW8aN71VZrCpNwNWHv1t3zmtaL7M78y7O6qrqs/71F1MnIsA4drRhw8Wdiw
|
||||
eTQxqq+qhNfcUAzN5eXtTckGv/wGf/7ZUK+yM4dHr4R6rd1xfLjgP34J1QkBHvw9X00PRDBo/Jec
|
||||
Nv60PKZb+ONr3iG+YvAn3zxXYhK1usgNzytlB9+T5W39+B3NYSCXPz+95SEBDGB31cDm14hXavt/
|
||||
9/fjYQ6xrL12oGftTIFr95RJUFgzXbe8CDa/h4tw+kYLBHb568fYPiR9Q423Yvz0mRjjclLH8Giw
|
||||
4h06OfbGva6uS3S04SSVIvnND1fjfZyglb15TxyXqZmVWYaQIO1BvOypOZPxsQpoDmTzJ+3Jmfey
|
||||
uIdVUV+I3YZSL6wuWIFVVB72UWKCwXrOE4y4fUB+/JeOF1DBwM6WP/xmbEbRB8/YbyZaWEtPugrW
|
||||
4Oqk7vZ81NHCuakCux4m5Fr4E9h4TCZqcUzwredODn+6rAw8ZHGHXXQ2opW3JQsVIcdufj3Jf/NP
|
||||
6MZ+NvnIcynLK7c9rEQGEKzBWaVnKzJg9318SYFucrRMZyiBLX/jMkxE54+/Oy8Kh6WlO4L+3e5n
|
||||
+LD3DTm5YhPRw66Z4Jyr13/zcWXZx/BSDi4JQDT3a2RxE6JcfMX+Cyx0ucqPFk278Io9x6zV5aJo
|
||||
LcyNasa60Ksq9/pc/+2n1PdujsinfsyQm+GHhGHb97/8CG3eTLCX6aVK73wgodMq2eR4VGs6l/tD
|
||||
ByP9PWKzMk8OPVY1i3KztoiHzm9nFVFuAP2hm9h5emrPxtquBNSoDuTa7Nb+zzz38/FXcpK4LH/7
|
||||
Z0uDHUhDb1fcHyqtl30N7fY4kRMteGddItlG8YrP087NS4f3ccHA6Nycpy1vOlTvrxoU0NPHib3H
|
||||
Dvd+KTO63ybBe8WfZ06XcFZQKsL9tBMiDqzqm3EP1ys747u7Hh32bPsD9AE8bZ8nKu3Lo48cds9j
|
||||
GTRGP2Zg9GEpMRVRmz2Ixt88c+uXk7j5yYXarA+ipDjjYDt/+eWrri++5ObXa07L/Sf8M3+WCdc1
|
||||
VCWdBoN9FGzznXu/OEVkoE93Wf/MG/nN/wIDq8yfPLNQg1/B9v9IuPGxpZoLBd4iTSKRBftm/eWz
|
||||
H5/0XmLg8M1r10FW40dysp5MNKhkX0AtulymGXkDXV+9v0d//3YF/Ne//vrrf/12GLTdrXxtGwPG
|
||||
chn/47+3CvyH8B9Dm71ef7YhTENWlX//8+8dCH9/+q79jP977J7le/j7n784+Gevwd9jN2av//f4
|
||||
v7af+q9//R8AAAD//wMAY2Pk+OAgAAA=
|
||||
    headers:
      CF-RAY:
      - 974f08951aa4251c-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Mon, 25 Aug 2025 23:57:45 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=L28ZVHkSMOSpzu9MoQzdDgOYGHI9uQZnBcGj5Txg9e8-1756166265-1.0.1.1-9494dtPfVD9FwZG8RVJ6EnsQqAuzb2nnmeyRQLpPrX5IK2bR3KadJFV3K8HTAJa0lU9lIhCtu4fezMScnlMYOGAO0zGsIWMyozgrpePziWg;
        path=/; expires=Tue, 26-Aug-25 00:27:45 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-allow-origin:
      - '*'
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-model:
      - text-embedding-3-small
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '69'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      via:
      - envoy-router-74b9c68cd4-sld2z
      x-envoy-upstream-service-time:
      - '183'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '10000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '9999977'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_f779ba78781c452aa13e52ea0ca6a682
    status:
      code: 200
      message: OK
- request:
    body: '{"input": ["Example: Using Fingers(Math Example): An interactive way to
      teach addition using fingers as counting tools."], "model": "text-embedding-3-small",
      "encoding_format": "base64"}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '186'
      content-type:
      - application/json
      cookie:
      - __cf_bm=L28ZVHkSMOSpzu9MoQzdDgOYGHI9uQZnBcGj5Txg9e8-1756166265-1.0.1.1-9494dtPfVD9FwZG8RVJ6EnsQqAuzb2nnmeyRQLpPrX5IK2bR3KadJFV3K8HTAJa0lU9lIhCtu4fezMScnlMYOGAO0zGsIWMyozgrpePziWg;
        _cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.93.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.93.0
      x-stainless-read-timeout:
      - '600'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
    method: POST
    uri: https://api.openai.com/v1/embeddings
  response:
    body:
      string: !!binary |
        [base64-encoded gzip embedding payload omitted]
    headers:
      CF-RAY:
      - 974f0896accb251c-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Mon, 25 Aug 2025 23:57:45 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-allow-origin:
      - '*'
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-model:
      - text-embedding-3-small
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '86'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      via:
      - envoy-router-74b9c68cd4-l7mvz
      x-envoy-upstream-service-time:
      - '120'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '10000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '9999974'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_3118ad98ecf3443a93df58599e50e305
    status:
      code: 200
      message: OK
- request:
    body: '{"input": ["Example: Using Fingers(Math Example): An interactive way to
      teach addition using fingers as counting tools."], "model": "text-embedding-3-small",
      "encoding_format": "base64"}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '186'
      content-type:
      - application/json
      cookie:
      - __cf_bm=L28ZVHkSMOSpzu9MoQzdDgOYGHI9uQZnBcGj5Txg9e8-1756166265-1.0.1.1-9494dtPfVD9FwZG8RVJ6EnsQqAuzb2nnmeyRQLpPrX5IK2bR3KadJFV3K8HTAJa0lU9lIhCtu4fezMScnlMYOGAO0zGsIWMyozgrpePziWg;
        _cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.93.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.93.0
      x-stainless-read-timeout:
      - '600'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
    method: POST
    uri: https://api.openai.com/v1/embeddings
  response:
    body:
      string: !!binary |
        [base64-encoded gzip embedding payload omitted]
    headers:
      CF-RAY:
      - 974f1417bd93ed39-SJC
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 26 Aug 2025 00:05:36 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-allow-origin:
      - '*'
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-model:
      - text-embedding-3-small
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '67'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      via:
      - envoy-router-6855f85975-l2j69
      x-envoy-upstream-service-time:
      - '85'
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '10000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '9999974'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_415fc597e216418bba813b4bb2f0e130
    status:
      code: 200
      message: OK
version: 1

@@ -3,6 +3,11 @@ from datetime import datetime, timedelta
import requests
from unittest.mock import MagicMock, patch, call
from crewai.cli.authentication.main import AuthenticationCommand
from crewai.cli.authentication.constants import (
    AUTH0_AUDIENCE,
    AUTH0_CLIENT_ID,
    AUTH0_DOMAIN
)
from crewai.cli.constants import (
    CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN,
    CREWAI_ENTERPRISE_DEFAULT_OAUTH2_CLIENT_ID,
@@ -17,6 +22,16 @@ class TestAuthenticationCommand:
    @pytest.mark.parametrize(
        "user_provider,expected_urls",
        [
            (
                "auth0",
                {
                    "device_code_url": f"https://{AUTH0_DOMAIN}/oauth/device/code",
                    "token_url": f"https://{AUTH0_DOMAIN}/oauth/token",
                    "client_id": AUTH0_CLIENT_ID,
                    "audience": AUTH0_AUDIENCE,
                    "domain": AUTH0_DOMAIN,
                },
            ),
            (
                "workos",
                {
@@ -29,6 +44,9 @@ class TestAuthenticationCommand:
            ),
        ],
    )
    @patch(
        "crewai.cli.authentication.main.AuthenticationCommand._determine_user_provider"
    )
    @patch("crewai.cli.authentication.main.AuthenticationCommand._get_device_code")
    @patch(
        "crewai.cli.authentication.main.AuthenticationCommand._display_auth_instructions"
@@ -41,9 +59,11 @@ class TestAuthenticationCommand:
        mock_poll,
        mock_display,
        mock_get_device,
        mock_determine_provider,
        user_provider,
        expected_urls,
    ):
        mock_determine_provider.return_value = user_provider
        mock_get_device.return_value = {
            "device_code": "test_code",
            "user_code": "123456",
@@ -54,6 +74,7 @@ class TestAuthenticationCommand:
        mock_console_print.assert_called_once_with(
            "Signing in to CrewAI Enterprise...\n", style="bold blue"
        )
        mock_determine_provider.assert_called_once()
        mock_get_device.assert_called_once()
        mock_display.assert_called_once_with(
            {"device_code": "test_code", "user_code": "123456"}
@@ -61,17 +82,9 @@ class TestAuthenticationCommand:
        mock_poll.assert_called_once_with(
            {"device_code": "test_code", "user_code": "123456"},
        )
        assert (
            self.auth_command.oauth2_provider.get_client_id()
            == expected_urls["client_id"]
        )
        assert (
            self.auth_command.oauth2_provider.get_audience()
            == expected_urls["audience"]
        )
        assert (
            self.auth_command.oauth2_provider._get_domain() == expected_urls["domain"]
        )
        assert self.auth_command.oauth2_provider.get_client_id() == expected_urls["client_id"]
        assert self.auth_command.oauth2_provider.get_audience() == expected_urls["audience"]
        assert self.auth_command.oauth2_provider._get_domain() == expected_urls["domain"]

    @patch("crewai.cli.authentication.main.webbrowser")
    @patch("crewai.cli.authentication.main.console.print")
|
||||
@pytest.mark.parametrize(
|
||||
"user_provider,jwt_config",
|
||||
[
|
||||
(
|
||||
"auth0",
|
||||
{
|
||||
"jwks_url": f"https://{AUTH0_DOMAIN}/.well-known/jwks.json",
|
||||
"issuer": f"https://{AUTH0_DOMAIN}/",
|
||||
"audience": AUTH0_AUDIENCE,
|
||||
},
|
||||
),
|
||||
(
|
||||
"workos",
|
||||
{
|
||||
@@ -114,18 +135,14 @@ class TestAuthenticationCommand:
        jwt_config,
        has_expiration,
    ):
        from crewai.cli.authentication.providers.auth0 import Auth0Provider
        from crewai.cli.authentication.providers.workos import WorkosProvider
        from crewai.cli.authentication.main import Oauth2Settings

        if user_provider == "workos":
            self.auth_command.oauth2_provider = WorkosProvider(
                settings=Oauth2Settings(
                    provider=user_provider,
                    client_id="test-client-id",
                    domain=CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN,
                    audience=jwt_config["audience"],
                )
            )
        if user_provider == "auth0":
            self.auth_command.oauth2_provider = Auth0Provider(settings=Oauth2Settings(provider=user_provider, client_id="test-client-id", domain=AUTH0_DOMAIN, audience=jwt_config["audience"]))
        elif user_provider == "workos":
            self.auth_command.oauth2_provider = WorkosProvider(settings=Oauth2Settings(provider=user_provider, client_id="test-client-id", domain=CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN, audience=jwt_config["audience"]))

        token_data = {"access_token": "test_access_token", "id_token": "test_id_token"}

@@ -217,6 +234,83 @@ class TestAuthenticationCommand:
        ]
        mock_console_print.assert_has_calls(expected_calls)

    @pytest.mark.parametrize(
        "api_response,expected_provider",
        [
            ({"provider": "auth0"}, "auth0"),
            ({"provider": "workos"}, "workos"),
            ({"provider": "none"}, "workos"),  # Default to workos for any other value
            (
                {},
                "workos",
            ),  # Default to workos if no provider key is sent in the response
        ],
    )
    @patch("crewai.cli.authentication.main.PlusAPI")
    @patch("crewai.cli.authentication.main.console.print")
    @patch("builtins.input", return_value="test@example.com")
    def test_determine_user_provider_success(
        self,
        mock_input,
        mock_console_print,
        mock_plus_api,
        api_response,
        expected_provider,
    ):
        mock_api_instance = MagicMock()
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = api_response
        mock_api_instance._make_request.return_value = mock_response
        mock_plus_api.return_value = mock_api_instance

        result = self.auth_command._determine_user_provider()

        mock_input.assert_called_once()

        mock_plus_api.assert_called_once_with("")
        mock_api_instance._make_request.assert_called_once_with(
            "GET", "/crewai_plus/api/v1/me/provider?email=test%40example.com"
        )

        assert result == expected_provider

    @patch("crewai.cli.authentication.main.PlusAPI")
    @patch("crewai.cli.authentication.main.console.print")
    @patch("builtins.input", return_value="test@example.com")
    def test_determine_user_provider_error(
        self, mock_input, mock_console_print, mock_plus_api
    ):
        mock_api_instance = MagicMock()
        mock_response = MagicMock()
        mock_response.status_code = 500
        mock_api_instance._make_request.return_value = mock_response
        mock_plus_api.return_value = mock_api_instance

        with pytest.raises(SystemExit):
            self.auth_command._determine_user_provider()

        mock_input.assert_called_once()

        mock_plus_api.assert_called_once_with("")
        mock_api_instance._make_request.assert_called_once_with(
            "GET", "/crewai_plus/api/v1/me/provider?email=test%40example.com"
        )

        mock_console_print.assert_has_calls(
            [
                call(
                    "Enter your CrewAI Enterprise account email: ",
                    style="bold blue",
                    end="",
                ),
                call(
                    "Error: Failed to authenticate with crewai enterprise. Ensure that you are using the latest crewai version and please try again. If the problem persists, contact support@crewai.com.",
                    style="red",
                ),
            ]
        )

    @patch("requests.post")
    def test_get_device_code(self, mock_post):
        mock_response = MagicMock()
@@ -229,9 +323,7 @@ class TestAuthenticationCommand:

        self.auth_command.oauth2_provider = MagicMock()
        self.auth_command.oauth2_provider.get_client_id.return_value = "test_client"
        self.auth_command.oauth2_provider.get_authorize_url.return_value = (
            "https://example.com/device"
        )
        self.auth_command.oauth2_provider.get_authorize_url.return_value = "https://example.com/device"
        self.auth_command.oauth2_provider.get_audience.return_value = "test_audience"

        result = self.auth_command._get_device_code()
@@ -274,12 +366,12 @@ class TestAuthenticationCommand:
            ) as mock_tool_login,
        ):
            self.auth_command.oauth2_provider = MagicMock()
            self.auth_command.oauth2_provider.get_token_url.return_value = (
                "https://example.com/token"
            )
            self.auth_command.oauth2_provider.get_token_url.return_value = "https://example.com/token"
            self.auth_command.oauth2_provider.get_client_id.return_value = "test_client"

            self.auth_command._poll_for_token(device_code_data)
            self.auth_command._poll_for_token(
                device_code_data
            )

            mock_post.assert_called_once_with(
                "https://example.com/token",
@@ -314,7 +406,9 @@ class TestAuthenticationCommand:
            "interval": 0.1,  # Short interval for testing
        }

        self.auth_command._poll_for_token(device_code_data)
        self.auth_command._poll_for_token(
            device_code_data
        )

        mock_console_print.assert_any_call(
            "Timeout: Failed to get the token. Please try again.", style="bold red"
@@ -335,4 +429,15 @@ class TestAuthenticationCommand:
        device_code_data = {"device_code": "test_device_code", "interval": 1}

        with pytest.raises(requests.HTTPError):
            self.auth_command._poll_for_token(device_code_data)
            self.auth_command._poll_for_token(
                device_code_data
            )

    # @patch(
    #     "crewai.cli.authentication.main.AuthenticationCommand._determine_user_provider"
    # )
    # def test_login_with_auth0(self, mock_determine_provider):
    #     from crewai.cli.authentication.providers.auth0 import Auth0Provider
    #     from crewai.cli.authentication.main import Oauth2Settings

    #     self.auth_command.oauth2_provider = Auth0Provider(settings=Oauth2Settings(provider="auth0", client_id=AUTH0_CLIENT_ID, domain=AUTH0_DOMAIN, audience=AUTH0_AUDIENCE))
    #     self.auth_command.login()

@@ -1,14 +1,17 @@
import json
import jwt
import unittest
from datetime import datetime, timedelta
from unittest.mock import MagicMock, patch

from cryptography.fernet import Fernet

from crewai.cli.authentication.utils import validate_jwt_token
from crewai.cli.authentication.utils import TokenManager, validate_jwt_token


@patch("crewai.cli.authentication.utils.PyJWKClient", return_value=MagicMock())
@patch("crewai.cli.authentication.utils.jwt")
class TestUtils(unittest.TestCase):
class TestValidateToken(unittest.TestCase):
    def test_validate_jwt_token(self, mock_jwt, mock_pyjwkclient):
        mock_jwt.decode.return_value = {"exp": 1719859200}

@@ -102,3 +105,119 @@ class TestUtils(unittest.TestCase):
            issuer="https://mock_issuer",
            audience="app_id_xxxx",
        )


class TestTokenManager(unittest.TestCase):
    def setUp(self):
        self.token_manager = TokenManager()

    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    @patch("crewai.cli.authentication.utils.TokenManager.save_secure_file")
    @patch("crewai.cli.authentication.utils.TokenManager._get_or_create_key")
    def test_get_or_create_key_existing(self, mock_get_or_create, mock_save, mock_read):
        mock_key = Fernet.generate_key()
        mock_get_or_create.return_value = mock_key

        token_manager = TokenManager()
        result = token_manager.key

        self.assertEqual(result, mock_key)

    @patch("crewai.cli.authentication.utils.Fernet.generate_key")
    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    @patch("crewai.cli.authentication.utils.TokenManager.save_secure_file")
    def test_get_or_create_key_new(self, mock_save, mock_read, mock_generate):
        mock_key = b"new_key"
        mock_read.return_value = None
        mock_generate.return_value = mock_key

        result = self.token_manager._get_or_create_key()

        self.assertEqual(result, mock_key)
        mock_read.assert_called_once_with("secret.key")
        mock_generate.assert_called_once()
        mock_save.assert_called_once_with("secret.key", mock_key)

    @patch("crewai.cli.authentication.utils.TokenManager.save_secure_file")
    def test_save_tokens(self, mock_save):
        access_token = "test_token"
        expires_at = int((datetime.now() + timedelta(seconds=3600)).timestamp())

        self.token_manager.save_tokens(access_token, expires_at)

        mock_save.assert_called_once()
        args = mock_save.call_args[0]
        self.assertEqual(args[0], "tokens.enc")
        decrypted_data = self.token_manager.fernet.decrypt(args[1])
        data = json.loads(decrypted_data)
        self.assertEqual(data["access_token"], access_token)
        expiration = datetime.fromisoformat(data["expiration"])
        self.assertEqual(expiration, datetime.fromtimestamp(expires_at))

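    # get_token should return the stored token only while it is unexpired;
    # the two tests below exercise the valid and expired paths.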
    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    def test_get_token_valid(self, mock_read):
        access_token = "test_token"
        expiration = (datetime.now() + timedelta(hours=1)).isoformat()
        data = {"access_token": access_token, "expiration": expiration}
        encrypted_data = self.token_manager.fernet.encrypt(json.dumps(data).encode())
        mock_read.return_value = encrypted_data

        result = self.token_manager.get_token()

        self.assertEqual(result, access_token)

    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    def test_get_token_expired(self, mock_read):
        access_token = "test_token"
        expiration = (datetime.now() - timedelta(hours=1)).isoformat()
        data = {"access_token": access_token, "expiration": expiration}
        encrypted_data = self.token_manager.fernet.encrypt(json.dumps(data).encode())
        mock_read.return_value = encrypted_data

        result = self.token_manager.get_token()

        self.assertIsNone(result)

    @patch("crewai.cli.authentication.utils.TokenManager.get_secure_storage_path")
    @patch("builtins.open", new_callable=unittest.mock.mock_open)
    @patch("crewai.cli.authentication.utils.os.chmod")
    def test_save_secure_file(self, mock_chmod, mock_open, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        filename = "test_file.txt"
        content = b"test_content"

        self.token_manager.save_secure_file(filename, content)

        mock_path.__truediv__.assert_called_once_with(filename)
        mock_open.assert_called_once_with(mock_path.__truediv__.return_value, "wb")
        mock_open().write.assert_called_once_with(content)
        mock_chmod.assert_called_once_with(mock_path.__truediv__.return_value, 0o600)

    @patch("crewai.cli.authentication.utils.TokenManager.get_secure_storage_path")
    @patch(
        "builtins.open", new_callable=unittest.mock.mock_open, read_data=b"test_content"
    )
    def test_read_secure_file_exists(self, mock_open, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        mock_path.__truediv__.return_value.exists.return_value = True
        filename = "test_file.txt"

        result = self.token_manager.read_secure_file(filename)

        self.assertEqual(result, b"test_content")
        mock_path.__truediv__.assert_called_once_with(filename)
        mock_open.assert_called_once_with(mock_path.__truediv__.return_value, "rb")

    @patch("crewai.cli.authentication.utils.TokenManager.get_secure_storage_path")
    def test_read_secure_file_not_exists(self, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        mock_path.__truediv__.return_value.exists.return_value = False
        filename = "test_file.txt"

        result = self.token_manager.read_secure_file(filename)

        self.assertIsNone(result)
        mock_path.__truediv__.assert_called_once_with(filename)

@@ -3,7 +3,6 @@ import shutil
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch, MagicMock

from crewai.cli.config import (
    Settings,
@@ -11,8 +10,6 @@ from crewai.cli.config import (
    CLI_SETTINGS_KEYS,
    DEFAULT_CLI_SETTINGS,
)
from crewai.cli.shared.token_manager import TokenManager
from datetime import datetime, timedelta


class TestSettings(unittest.TestCase):
@@ -69,8 +66,7 @@ class TestSettings(unittest.TestCase):
        for key in user_settings.keys():
            self.assertEqual(getattr(settings, key), None)

    @patch("crewai.cli.config.TokenManager")
    def test_reset_settings(self, mock_token_manager):
    def test_reset_settings(self):
        user_settings = {key: f"value_for_{key}" for key in USER_SETTINGS_KEYS}
        cli_settings = {key: f"value_for_{key}" for key in CLI_SETTINGS_KEYS}
@@ -78,11 +74,6 @@ class TestSettings(unittest.TestCase):
            config_path=self.config_path, **user_settings, **cli_settings
        )

        mock_token_manager.return_value = MagicMock()
        TokenManager().save_tokens(
            "aaa.bbb.ccc", (datetime.now() + timedelta(seconds=36000)).timestamp()
        )

        settings.reset()

        for key in user_settings.keys():
@@ -90,8 +81,6 @@ class TestSettings(unittest.TestCase):
        for key in cli_settings.keys():
            self.assertEqual(getattr(settings, key), DEFAULT_CLI_SETTINGS.get(key))

        mock_token_manager.return_value.clear_tokens.assert_called_once()

    def test_dump_new_settings(self):
        settings = Settings(
            config_path=self.config_path, tool_repository_username="user1"
@@ -1,138 +0,0 @@
import json
import unittest
from datetime import datetime, timedelta
from unittest.mock import MagicMock, patch

from cryptography.fernet import Fernet

from crewai.cli.shared.token_manager import TokenManager


class TestTokenManager(unittest.TestCase):
    @patch("crewai.cli.shared.token_manager.TokenManager._get_or_create_key")
    def setUp(self, mock_get_key):
        mock_get_key.return_value = Fernet.generate_key()
        self.token_manager = TokenManager()

    @patch("crewai.cli.shared.token_manager.TokenManager.read_secure_file")
    @patch("crewai.cli.shared.token_manager.TokenManager.save_secure_file")
    @patch("crewai.cli.shared.token_manager.TokenManager._get_or_create_key")
    def test_get_or_create_key_existing(self, mock_get_or_create, mock_save, mock_read):
        mock_key = Fernet.generate_key()
        mock_get_or_create.return_value = mock_key

        token_manager = TokenManager()
        result = token_manager.key

        self.assertEqual(result, mock_key)

    @patch("crewai.cli.shared.token_manager.Fernet.generate_key")
    @patch("crewai.cli.shared.token_manager.TokenManager.read_secure_file")
    @patch("crewai.cli.shared.token_manager.TokenManager.save_secure_file")
    def test_get_or_create_key_new(self, mock_save, mock_read, mock_generate):
        mock_key = b"new_key"
        mock_read.return_value = None
        mock_generate.return_value = mock_key

        result = self.token_manager._get_or_create_key()

        self.assertEqual(result, mock_key)
        mock_read.assert_called_once_with("secret.key")
        mock_generate.assert_called_once()
        mock_save.assert_called_once_with("secret.key", mock_key)

    @patch("crewai.cli.shared.token_manager.TokenManager.save_secure_file")
    def test_save_tokens(self, mock_save):
        access_token = "test_token"
        expires_at = int((datetime.now() + timedelta(seconds=3600)).timestamp())

        self.token_manager.save_tokens(access_token, expires_at)

        mock_save.assert_called_once()
        args = mock_save.call_args[0]
        self.assertEqual(args[0], "tokens.enc")
        decrypted_data = self.token_manager.fernet.decrypt(args[1])
        data = json.loads(decrypted_data)
        self.assertEqual(data["access_token"], access_token)
        expiration = datetime.fromisoformat(data["expiration"])
        self.assertEqual(expiration, datetime.fromtimestamp(expires_at))

    @patch("crewai.cli.shared.token_manager.TokenManager.read_secure_file")
    def test_get_token_valid(self, mock_read):
        access_token = "test_token"
        expiration = (datetime.now() + timedelta(hours=1)).isoformat()
        data = {"access_token": access_token, "expiration": expiration}
        encrypted_data = self.token_manager.fernet.encrypt(json.dumps(data).encode())
        mock_read.return_value = encrypted_data

        result = self.token_manager.get_token()

        self.assertEqual(result, access_token)

    @patch("crewai.cli.shared.token_manager.TokenManager.read_secure_file")
    def test_get_token_expired(self, mock_read):
        access_token = "test_token"
        expiration = (datetime.now() - timedelta(hours=1)).isoformat()
        data = {"access_token": access_token, "expiration": expiration}
        encrypted_data = self.token_manager.fernet.encrypt(json.dumps(data).encode())
        mock_read.return_value = encrypted_data

        result = self.token_manager.get_token()

        self.assertIsNone(result)

    @patch("crewai.cli.shared.token_manager.TokenManager.get_secure_storage_path")
    @patch("builtins.open", new_callable=unittest.mock.mock_open)
    @patch("crewai.cli.shared.token_manager.os.chmod")
    def test_save_secure_file(self, mock_chmod, mock_open, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        filename = "test_file.txt"
        content = b"test_content"

        self.token_manager.save_secure_file(filename, content)

        mock_path.__truediv__.assert_called_once_with(filename)
        mock_open.assert_called_once_with(mock_path.__truediv__.return_value, "wb")
        mock_open().write.assert_called_once_with(content)
        mock_chmod.assert_called_once_with(mock_path.__truediv__.return_value, 0o600)

    @patch("crewai.cli.shared.token_manager.TokenManager.get_secure_storage_path")
    @patch(
        "builtins.open", new_callable=unittest.mock.mock_open, read_data=b"test_content"
    )
    def test_read_secure_file_exists(self, mock_open, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        mock_path.__truediv__.return_value.exists.return_value = True
        filename = "test_file.txt"

        result = self.token_manager.read_secure_file(filename)

        self.assertEqual(result, b"test_content")
        mock_path.__truediv__.assert_called_once_with(filename)
        mock_open.assert_called_once_with(mock_path.__truediv__.return_value, "rb")

    @patch("crewai.cli.shared.token_manager.TokenManager.get_secure_storage_path")
    def test_read_secure_file_not_exists(self, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        mock_path.__truediv__.return_value.exists.return_value = False
        filename = "test_file.txt"

        result = self.token_manager.read_secure_file(filename)

        self.assertIsNone(result)
        mock_path.__truediv__.assert_called_once_with(filename)

    @patch("crewai.cli.shared.token_manager.TokenManager.get_secure_storage_path")
    def test_clear_tokens(self, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path

        self.token_manager.clear_tokens()

        mock_path.__truediv__.assert_called_once_with("tokens.enc")
        mock_path.__truediv__.return_value.unlink.assert_called_once_with(
            missing_ok=True
        )
@@ -4,14 +4,13 @@ import unittest
import unittest.mock
from datetime import datetime, timedelta
from contextlib import contextmanager
from pathlib import Path
from unittest import mock
from unittest.mock import MagicMock, patch

import pytest
from pytest import raises

from crewai.cli.shared.token_manager import TokenManager
from crewai.cli.authentication.utils import TokenManager
from crewai.cli.tools.main import ToolCommand


@@ -28,18 +27,12 @@ def in_temp_dir():

@pytest.fixture
def tool_command():
    # Create a temporary directory for each test to avoid token storage conflicts
    with tempfile.TemporaryDirectory() as temp_dir:
        # Mock the secure storage path to use the temp directory
        with patch.object(
            TokenManager, "get_secure_storage_path", return_value=Path(temp_dir)
        ):
            TokenManager().save_tokens(
                "test-token", (datetime.now() + timedelta(seconds=36000)).timestamp()
            )
            tool_command = ToolCommand()
            with patch.object(tool_command, "login"):
                yield tool_command
    TokenManager().save_tokens(
        "test-token", (datetime.now() + timedelta(seconds=36000)).timestamp()
    )
    tool_command = ToolCommand()
    with patch.object(tool_command, "login"):
        yield tool_command


@patch("crewai.cli.tools.main.subprocess.run")

@@ -1,5 +1,4 @@
"""Test Agent creation and execution basic functionality."""

import hashlib
import json
from concurrent.futures import Future
@@ -27,6 +26,7 @@ from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import Logger
from crewai.utilities.events import (
    CrewTrainCompletedEvent,
    CrewTrainStartedEvent,
@@ -36,6 +36,7 @@ from crewai.utilities.events.crew_events import (
    CrewTestCompletedEvent,
    CrewTestStartedEvent,
)
from crewai.utilities.events.event_listener import EventListener
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler

@@ -51,7 +52,6 @@ from crewai.utilities.events.memory_events import (
)
from crewai.memory.external.external_memory import ExternalMemory


@pytest.fixture
def ceo():
    return Agent(
@@ -311,6 +311,7 @@ def test_crew_creation(researcher, writer):

@pytest.mark.vcr(filter_headers=["authorization"])
def test_sync_task_execution(researcher, writer):

    tasks = [
        Task(
            description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
@@ -366,7 +367,7 @@ def test_hierarchical_process(researcher, writer):
|
||||
|
||||
assert (
|
||||
result.raw
|
||||
== "1. **The Rise of Autonomous AI Agents in Daily Life** \n As artificial intelligence technology progresses, the integration of autonomous AI agents into everyday life becomes increasingly prominent. These agents, capable of making decisions without human intervention, are reshaping industries from healthcare to finance. Exploring case studies where autonomous AI has successfully decreased operational costs or improved efficiency can reveal not only the benefits but also the ethical implications of delegating decision-making to machines. This topic offers an exciting opportunity to dive into the AI landscape, showcasing current developments such as AI assistants and autonomous vehicles.\n\n2. **Ethical Implications of Generative AI in Creative Industries** \n The surge of generative AI tools in creative fields, such as art, music, and writing, has sparked a heated debate about authorship and originality. This article could investigate how these tools are being used by artists and creators, examining both the potential for innovation and the risk of devaluing traditional art forms. Highlighting perspectives from creators, legal experts, and ethicists could provide a comprehensive overview of the challenges faced, including copyright concerns and the emotional impact on human artists. This discussion is vital as the creative landscape evolves alongside technological advancements, making it ripe for exploration.\n\n3. **AI in Climate Change Mitigation: Current Solutions and Future Potential** \n As the world grapples with climate change, AI technology is increasingly being harnessed to develop innovative solutions for sustainability. From predictive analytics that optimize energy consumption to machine learning algorithms that improve carbon capture methods, AI's potential in environmental science is vast. This topic invites an exploration of existing AI applications in climate initiatives, with a focus on groundbreaking research and initiatives aimed at reducing humanity's carbon footprint. Highlighting successful projects and technology partnerships can illustrate the positive impact AI can have on global climate efforts, inspiring further exploration and investment in this area.\n\n4. **The Future of Work: How AI is Reshaping Employment Landscapes** \n The discussions around AI's impact on the workforce are both urgent and complex, as advances in automation and machine learning continue to transform the job market. This article could delve into the current trends of AI-driven job displacement alongside opportunities for upskilling and the creation of new job roles. By examining case studies of companies that integrate AI effectively and the resulting workforce adaptations, readers can gain valuable insights into preparing for a future where humans and AI collaborate. This exploration highlights the importance of policies that promote workforce resilience in the face of change.\n\n5. **Decentralized AI: Exploring the Role of Blockchain in AI Development** \n As blockchain technology sweeps through various sectors, its application in AI development presents a fascinating topic worth examining. Decentralized AI could address issues of data privacy, security, and democratization in AI models by allowing users to retain ownership of data while benefiting from AI's capabilities. This article could analyze how decentralized networks are disrupting traditional AI development models, featuring innovative projects that harness the synergy between blockchain and AI. 
Highlighting potential pitfalls and the future landscape of decentralized AI could stimulate discussion among technologists, entrepreneurs, and policymakers alike."
|
||||
== "**1. The Rise of Autonomous AI Agents in Daily Life** \nAs artificial intelligence technology progresses, the integration of autonomous AI agents into everyday life becomes increasingly prominent. These agents, capable of making decisions without human intervention, are reshaping industries from healthcare to finance. Exploring case studies where autonomous AI has successfully decreased operational costs or improved efficiency can reveal not only the benefits but also the ethical implications of delegating decision-making to machines. This topic offers an exciting opportunity to dive into the AI landscape, showcasing current developments such as AI assistants and autonomous vehicles.\n\n**2. Ethical Implications of Generative AI in Creative Industries** \nThe surge of generative AI tools in creative fields, such as art, music, and writing, has sparked a heated debate about authorship and originality. This article could investigate how these tools are being used by artists and creators, examining both the potential for innovation and the risk of devaluing traditional art forms. Highlighting perspectives from creators, legal experts, and ethicists could provide a comprehensive overview of the challenges faced, including copyright concerns and the emotional impact on human artists. This discussion is vital as the creative landscape evolves alongside technological advancements, making it ripe for exploration.\n\n**3. AI in Climate Change Mitigation: Current Solutions and Future Potential** \nAs the world grapples with climate change, AI technology is increasingly being harnessed to develop innovative solutions for sustainability. From predictive analytics that optimize energy consumption to machine learning algorithms that improve carbon capture methods, AI's potential in environmental science is vast. This topic invites an exploration of existing AI applications in climate initiatives, with a focus on groundbreaking research and initiatives aimed at reducing humanity's carbon footprint. Highlighting successful projects and technology partnerships can illustrate the positive impact AI can have on global climate efforts, inspiring further exploration and investment in this area.\n\n**4. The Future of Work: How AI is Reshaping Employment Landscapes** \nThe discussions around AI's impact on the workforce are both urgent and complex, as advances in automation and machine learning continue to transform the job market. This article could delve into the current trends of AI-driven job displacement alongside opportunities for upskilling and the creation of new job roles. By examining case studies of companies that integrate AI effectively and the resulting workforce adaptations, readers can gain valuable insights into preparing for a future where humans and AI collaborate. This exploration highlights the importance of policies that promote workforce resilience in the face of change.\n\n**5. Decentralized AI: Exploring the Role of Blockchain in AI Development** \nAs blockchain technology sweeps through various sectors, its application in AI development presents a fascinating topic worth examining. Decentralized AI could address issues of data privacy, security, and democratization in AI models by allowing users to retain ownership of data while benefiting from AI's capabilities. This article could analyze how decentralized networks are disrupting traditional AI development models, featuring innovative projects that harness the synergy between blockchain and AI. 
Highlighting potential pitfalls and the future landscape of decentralized AI could stimulate discussion among technologists, entrepreneurs, and policymakers alike.\n\nThese topics not only reflect current trends but also probe deeper into ethical and practical considerations, making them timely and relevant for contemporary audiences."
)


@@ -849,7 +850,6 @@ def test_crew_verbose_output(researcher, writer, capsys):
),
]

# Test with verbose=True
crew = Crew(
agents=[researcher, writer],
tasks=tasks,
@@ -857,25 +857,46 @@ def test_crew_verbose_output(researcher, writer, capsys):
verbose=True,
)

result = crew.kickoff()
crew.kickoff()
captured = capsys.readouterr()

# Verify the crew executed successfully and verbose was set
assert result is not None
assert crew.verbose is True

# Test with verbose=False
crew_quiet = Crew(
agents=[researcher, writer],
tasks=tasks,
process=Process.sequential,
verbose=False,
# Filter out event listener logs (lines starting with '[')
filtered_output = "\n".join(
line for line in captured.out.split("\n") if not line.startswith("[")
)

result_quiet = crew_quiet.kickoff()
expected_strings = [
"🤖 Agent Started",
"Agent: Researcher",
"Task: Research AI advancements.",
"✅ Agent Final Answer",
"Agent: Researcher",
"🤖 Agent Started",
"Agent: Senior Writer",
"Task: Write about AI in healthcare.",
"✅ Agent Final Answer",
"Agent: Senior Writer",
]

# Verify the crew executed successfully and verbose was not set
assert result_quiet is not None
assert crew_quiet.verbose is False
for expected_string in expected_strings:
assert (
expected_string in filtered_output
), f"Expected '{expected_string}' in output, but it was not found."

# Now test with verbose set to False
crew.verbose = False
crew._logger = Logger(verbose=False)
event_listener = EventListener()
event_listener.verbose = False
event_listener.formatter.verbose = False
crew.kickoff()
captured = capsys.readouterr()
filtered_output = "\n".join(
line
for line in captured.out.split("\n")
if not line.startswith("[") and line.strip() and not line.startswith("\x1b")
)
assert filtered_output == ""


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -938,6 +959,7 @@ def test_cache_hitting_between_agents(researcher, writer, ceo):

@pytest.mark.vcr(filter_headers=["authorization"])
def test_api_calls_throttling(capsys):

from crewai.tools import tool

@tool
@@ -1513,6 +1535,7 @@ async def test_async_kickoff_for_each_async_empty_input():


def test_set_agents_step_callback():

researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
@@ -1541,6 +1564,7 @@ def test_set_agents_step_callback():


def test_dont_set_agents_step_callback_if_already_set():

def agent_callback(_):
pass

@@ -1638,47 +1662,42 @@ def test_task_with_no_arguments():


def test_code_execution_flag_adds_code_tool_upon_kickoff():
try:
from crewai_tools import CodeInterpreterTool
except (ImportError, Exception):
pytest.skip("crewai_tools not available or cannot be imported")
from crewai_tools import CodeInterpreterTool

# Mock Docker validation for the entire test
with patch.object(Agent, "_validate_docker_installation"):
programmer = Agent(
role="Programmer",
goal="Write code to solve problems.",
backstory="You're a programmer who loves to solve problems with code.",
allow_delegation=False,
allow_code_execution=True,
)
programmer = Agent(
role="Programmer",
goal="Write code to solve problems.",
backstory="You're a programmer who loves to solve problems with code.",
allow_delegation=False,
allow_code_execution=True,
)

task = Task(
description="How much is 2 + 2?",
expected_output="The result of the sum as an integer.",
agent=programmer,
)
task = Task(
description="How much is 2 + 2?",
expected_output="The result of the sum as an integer.",
agent=programmer,
)

crew = Crew(agents=[programmer], tasks=[task])
crew = Crew(agents=[programmer], tasks=[task])

mock_task_output = TaskOutput(
description="Mock description", raw="mocked output", agent="mocked agent"
)
mock_task_output = TaskOutput(
description="Mock description", raw="mocked output", agent="mocked agent"
)

with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync:
crew.kickoff()
with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync:
crew.kickoff()

# Get the tools that were actually used in execution
_, kwargs = mock_execute_sync.call_args
used_tools = kwargs["tools"]
# Get the tools that were actually used in execution
_, kwargs = mock_execute_sync.call_args
used_tools = kwargs["tools"]

# Verify that exactly one tool was used and it was a CodeInterpreterTool
assert len(used_tools) == 1, "Should have exactly one tool"
assert isinstance(
used_tools[0], CodeInterpreterTool
), "Tool should be CodeInterpreterTool"
# Verify that exactly one tool was used and it was a CodeInterpreterTool
assert len(used_tools) == 1, "Should have exactly one tool"
assert isinstance(
used_tools[0], CodeInterpreterTool
), "Tool should be CodeInterpreterTool"


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -1762,10 +1781,10 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
assert result.raw == "Howdy!"

assert result.token_usage == UsageMetrics(
total_tokens=2390,
prompt_tokens=2264,
completion_tokens=126,
successful_requests=4,
total_tokens=1673,
prompt_tokens=1562,
completion_tokens=111,
successful_requests=3,
cached_prompt_tokens=0,
)

@@ -2009,6 +2028,7 @@ def test_crew_inputs_interpolate_both_agents_and_tasks():


def test_crew_inputs_interpolate_both_agents_and_tasks_diff():

agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
@@ -2040,6 +2060,7 @@ def test_crew_inputs_interpolate_both_agents_and_tasks_diff():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_does_not_interpolate_without_inputs():

agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
@@ -2173,6 +2194,7 @@ def test_task_same_callback_both_on_task_and_crew():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_tools_with_custom_caching():

from crewai.tools import tool

@tool
@@ -2452,6 +2474,7 @@ def test_multiple_conditional_tasks(researcher, writer):

@pytest.mark.vcr(filter_headers=["authorization"])
def test_using_contextual_memory():

math_researcher = Agent(
role="Researcher",
goal="You research about math.",
@@ -2537,8 +2560,8 @@ def test_memory_events_are_emitted():

crew.kickoff()

assert len(events["MemorySaveStartedEvent"]) == 3
assert len(events["MemorySaveCompletedEvent"]) == 3
assert len(events["MemorySaveStartedEvent"]) == 6
assert len(events["MemorySaveCompletedEvent"]) == 6
assert len(events["MemorySaveFailedEvent"]) == 0
assert len(events["MemoryQueryStartedEvent"]) == 3
assert len(events["MemoryQueryCompletedEvent"]) == 3
@@ -2549,6 +2572,7 @@ def test_memory_events_are_emitted():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_using_contextual_memory_with_long_term_memory():

math_researcher = Agent(
role="Researcher",
goal="You research about math.",
@@ -2578,6 +2602,7 @@ def test_using_contextual_memory_with_long_term_memory():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_warning_long_term_memory_without_entity_memory():

math_researcher = Agent(
role="Researcher",
goal="You research about math.",
@@ -2613,6 +2638,7 @@ def test_warning_long_term_memory_without_entity_memory():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_long_term_memory_with_memory_flag():

math_researcher = Agent(
role="Researcher",
goal="You research about math.",
@@ -2646,6 +2672,7 @@ def test_long_term_memory_with_memory_flag():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_using_contextual_memory_with_short_term_memory():

math_researcher = Agent(
role="Researcher",
goal="You research about math.",
@@ -2675,6 +2702,7 @@ def test_using_contextual_memory_with_short_term_memory():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_disabled_memory_using_contextual_memory():

math_researcher = Agent(
role="Researcher",
goal="You research about math.",
@@ -2801,6 +2829,7 @@ def test_crew_output_file_validation_failures():


def test_manager_agent(researcher, writer):

task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
@@ -3828,9 +3857,7 @@ def test_task_tools_preserve_code_execution_tools():
"""
from typing import Type

# Mock embedchain initialization to prevent race conditions in parallel CI execution
with patch("embedchain.client.Client.setup"):
from crewai_tools import CodeInterpreterTool
from crewai_tools import CodeInterpreterTool
from pydantic import BaseModel, Field

from crewai.tools import BaseTool
@@ -4150,7 +4177,7 @@ def test_crew_with_failing_task_guardrails():
expected_output="A properly formatted report",
agent=researcher,
guardrail=strict_format_guardrail,
guardrail_max_retries=3,
max_retries=3,
)

crew = Crew(
@@ -4196,7 +4223,7 @@ def test_crew_guardrail_feedback_in_context():
expected_output="A response containing the keyword 'IMPORTANT'",
agent=researcher,
guardrail=format_guardrail,
guardrail_max_retries=2,
max_retries=2,
)

crew = Crew(agents=[researcher], tasks=[task])
@@ -4432,6 +4459,7 @@ def test_crew_copy_with_memory():
original_entity_id = id(crew._entity_memory) if crew._entity_memory else None
original_external_id = id(crew._external_memory) if crew._external_memory else None


try:
crew_copy = crew.copy()

@@ -4481,6 +4509,7 @@ def test_crew_copy_with_memory():
or crew_copy._external_memory is None
), "Copied _external_memory should be None if not originally present"


except pydantic_core.ValidationError as e:
if "Input should be an instance of" in str(e) and ("Memory" in str(e)):
pytest.fail(
@@ -4697,7 +4726,6 @@ def test_reset_agent_knowledge_with_only_agent_knowledge(researcher, writer):
[mock_ks_research, mock_ks_writer]
)


def test_default_crew_name(researcher, writer):
crew = Crew(
agents=[researcher, writer],
@@ -4738,20 +4766,12 @@ def test_ensure_exchanged_messages_are_propagated_to_external_memory():
crew.kickoff()

expected_messages = [
{
"role": "system",
"content": "You are Researcher. You're an expert in research and you love to learn new things.\nYour personal goal is: You research about math.\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
},
{
"role": "user",
"content": "\nCurrent Task: Research a topic to teach a kid aged 6 about math.\n\nThis is the expected criteria for your final answer: A topic, explanation, angle, and examples.\nyou MUST return the actual complete content as the final answer, not a summary.\n\n# Useful context: \nExternal memories:\n\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
},
{
"role": "assistant",
"content": "I now can give a great answer \nFinal Answer: \n\n**Topic: Understanding Shapes (Geometry)**\n\n**Explanation:** \nShapes are everywhere around us! They are the special forms that we can see in everyday objects. Teaching a 6-year-old about shapes is not only fun but also a way to help them think about the world around them and develop their spatial awareness. We will focus on basic shapes: circle, square, triangle, and rectangle. Understanding these shapes helps kids recognize and describe their environment.\n\n**Angle:** \nLet’s make learning about shapes an adventure! We can turn it into a treasure hunt where the child has to find objects around the house or outside that match the shapes we learn. This hands-on approach helps make the learning stick!\n\n**Examples:** \n1. **Circle:** \n - Explanation: A circle is round and has no corners. It looks like a wheel or a cookie! \n - Activity: Find objects that are circles, such as a clock, a dinner plate, or a ball. Draw a big circle on a paper and then try to draw smaller circles inside it.\n\n2. **Square:** \n - Explanation: A square has four equal sides and four corners. It looks like a box! \n - Activity: Look for squares in books, in windows, or in building blocks. Try to build a tall tower using square blocks!\n\n3. **Triangle:** \n - Explanation: A triangle has three sides and three corners. It looks like a slice of pizza or a roof! \n - Activity: Use crayons to draw a big triangle and then find things that are shaped like a triangle, like a slice of cheese or a traffic sign.\n\n4. **Rectangle:** \n - Explanation: A rectangle has four sides but only opposite sides are equal. It’s like a stretched square! \n - Activity: Search for rectangles, such as a book cover or a door. You can cut out rectangles from colored paper and create a collage!\n\nBy relating the shapes to fun activities and using real-world examples, we not only make learning more enjoyable but also help the child better remember and understand the concept of shapes in math. This foundation forms the basis of their future learning in geometry!",
},
{'role': 'system', 'content': "You are Researcher. You're an expert in research and you love to learn new things.\nYour personal goal is: You research about math.\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"},
{'role': 'user', 'content': '\nCurrent Task: Research a topic to teach a kid aged 6 about math.\n\nThis is the expected criteria for your final answer: A topic, explanation, angle, and examples.\nyou MUST return the actual complete content as the final answer, not a summary.\n\n# Useful context: \nExternal memories:\n\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:'},
{'role': 'assistant', 'content': 'I now can give a great answer \nFinal Answer: \n\n**Topic: Understanding Shapes (Geometry)**\n\n**Explanation:** \nShapes are everywhere around us! They are the special forms that we can see in everyday objects. Teaching a 6-year-old about shapes is not only fun but also a way to help them think about the world around them and develop their spatial awareness. We will focus on basic shapes: circle, square, triangle, and rectangle. Understanding these shapes helps kids recognize and describe their environment.\n\n**Angle:** \nLet’s make learning about shapes an adventure! We can turn it into a treasure hunt where the child has to find objects around the house or outside that match the shapes we learn. This hands-on approach helps make the learning stick!\n\n**Examples:** \n1. **Circle:** \n - Explanation: A circle is round and has no corners. It looks like a wheel or a cookie! \n - Activity: Find objects that are circles, such as a clock, a dinner plate, or a ball. Draw a big circle on a paper and then try to draw smaller circles inside it.\n\n2. **Square:** \n - Explanation: A square has four equal sides and four corners. It looks like a box! \n - Activity: Look for squares in books, in windows, or in building blocks. Try to build a tall tower using square blocks!\n\n3. **Triangle:** \n - Explanation: A triangle has three sides and three corners. It looks like a slice of pizza or a roof! \n - Activity: Use crayons to draw a big triangle and then find things that are shaped like a triangle, like a slice of cheese or a traffic sign.\n\n4. **Rectangle:** \n - Explanation: A rectangle has four sides but only opposite sides are equal. It’s like a stretched square! \n - Activity: Search for rectangles, such as a book cover or a door. You can cut out rectangles from colored paper and create a collage!\n\nBy relating the shapes to fun activities and using real-world examples, we not only make learning more enjoyable but also help the child better remember and understand the concept of shapes in math. This foundation forms the basis of their future learning in geometry!'}
]
external_memory_save.assert_called_once_with(
value=ANY,
metadata={"description": ANY, "messages": expected_messages},
agent=ANY,
)