Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-06 06:38:29 +00:00.

Compare commits: devin/1755... → 0.165.0 (12 commits)
| SHA1 |
|---|
| 6068fe941f |
| 2a0cefc98b |
| a4f65e4870 |
| a1b3edd79c |
| 80b3d9689a |
| ec03a53121 |
| 2fdf3f3a6a |
| 1d3d7ebf5e |
| 2c2196f415 |
| c9f30b175c |
| a17b93a7f8 |
| 0d3e462791 |
@@ -709,7 +709,7 @@
   "icon": "globe"
 },
 {
-  "anchor": "법정",
+  "anchor": "포럼",
   "href": "https://community.crewai.com",
   "icon": "discourse"
 },
@@ -719,7 +719,7 @@
   "icon": "robot"
 },
 {
-  "anchor": "출시",
+  "anchor": "릴리스",
   "href": "https://github.com/crewAIInc/crewAI/releases",
   "icon": "tag"
 }
@@ -734,22 +734,22 @@
   "pages": ["ko/introduction", "ko/installation", "ko/quickstart"]
 },
 {
-  "group": "안내서",
+  "group": "가이드",
   "pages": [
     {
       "group": "전략",
       "pages": ["ko/guides/concepts/evaluating-use-cases"]
     },
     {
-      "group": "Agents",
+      "group": "에이전트 (Agents)",
       "pages": ["ko/guides/agents/crafting-effective-agents"]
     },
     {
-      "group": "Crews",
+      "group": "크루 (Crews)",
      "pages": ["ko/guides/crews/first-crew"]
     },
     {
-      "group": "Flows",
+      "group": "플로우 (Flows)",
       "pages": [
         "ko/guides/flows/first-flow",
         "ko/guides/flows/mastering-flow-state"
@@ -797,7 +797,7 @@
   ]
 },
 {
-  "group": "도구",
+  "group": "도구 (Tools)",
   "pages": [
     "ko/tools/overview",
     {
@@ -887,7 +887,7 @@
   ]
 },
 {
-  "group": "클라우드 & 저장",
+  "group": "클라우드 & 스토리지",
   "pages": [
     "ko/tools/cloud-storage/overview",
     "ko/tools/cloud-storage/s3readertool",
@@ -909,7 +909,7 @@
   ]
 },
 {
-  "group": "오브저버빌리티",
+  "group": "Observability",
   "pages": [
     "ko/observability/overview",
     "ko/observability/arize-phoenix",
@@ -927,7 +927,7 @@
   ]
 },
 {
-  "group": "익히다",
+  "group": "학습",
   "pages": [
     "ko/learn/overview",
     "ko/learn/llm-selection-guide",
@@ -951,13 +951,13 @@
   ]
 },
 {
-  "group": "원격측정",
+  "group": "Telemetry",
   "pages": ["ko/telemetry"]
 }
 ]
 },
 {
-  "tab": "기업",
+  "tab": "엔터프라이즈",
   "groups": [
     {
       "group": "시작 안내",
@@ -997,7 +997,7 @@
   ]
 },
 {
-  "group": "사용 안내서",
+  "group": "How-To Guides",
   "pages": [
     "ko/enterprise/guides/build-crew",
     "ko/enterprise/guides/deploy-crew",
@@ -35,6 +35,22 @@ crewai tool install <tool-name>

 This installs the tool and adds it to `pyproject.toml`.

+You can use the tool by importing it and adding it to your agents:
+
+```python
+from crewai import Agent
+from your_tool.tool import YourTool
+
+custom_tool = YourTool()
+
+researcher = Agent(
+    role='Market Research Analyst',
+    goal='Provide up-to-date market analysis of the AI industry',
+    backstory='An expert analyst with a keen eye for market trends.',
+    tools=[custom_tool],
+    verbose=True
+)
+```
+
 ## Creating and Publishing Tools

 To create a new tool project:
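For context on what a module like `your_tool/tool.py` might contain, here is a minimal, hedged sketch of a custom tool built on crewai's `BaseTool`; the class name, field values, and `_run` signature are illustrative and not part of this diff:

```python
from crewai.tools import BaseTool


class YourTool(BaseTool):
    # Shown to the agent so it knows when to call the tool.
    name: str = "your_tool"
    description: str = "Look up up-to-date market data for a given query."

    def _run(self, query: str) -> str:
        # Replace this stub with the real API call or computation.
        return f"Results for: {query}"
```

An instance of this class can then be passed in the `tools=[custom_tool]` list exactly as in the agent snippet above.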
@@ -1,65 +1,65 @@
 ---
 title: 소개
-description: 함께 협력하여 복잡한 작업을 해결하는 AI 에이전트 팀 구축
+description: 함께 협력하여 복잡한 작업을 해결하는 AI agent 팀 구축
 icon: handshake
 ---

 # CrewAI란 무엇인가?

-**CrewAI는 완전히 독립적으로, LangChain이나 기타 agent 프레임워크에 의존하지 않고 처음부터 스크래치로 개발된 가볍고 매우 빠른 Python 프레임워크입니다.**
+**CrewAI는 LangChain이나 기타 agent 프레임워크에 의존하지 않고, 완전히 독립적으로 처음부터 스크래치로 개발된 가볍고 매우 빠른 Python 프레임워크입니다.**

 CrewAI는 고수준의 간편함과 정밀한 저수준 제어를 모두 제공하여, 어떤 시나리오에도 맞춤화된 자율 AI agent를 만드는 데 이상적입니다:

 - **[CrewAI Crews](/ko/guides/crews/first-crew)**: 자율성과 협업 지능을 극대화하여, 각 agent가 특정 역할, 도구, 목표를 가진 AI 팀을 만들 수 있습니다.
-- **[CrewAI Flows](/ko/guides/flows/first-flow)**: 세밀한 이벤트 기반 제어와 단일 LLM 호출을 통한 정확한 작업 오케스트레이션을 가능하게 하며 Crews를 네이티브로 지원합니다.
+- **[CrewAI Flows](/ko/guides/flows/first-flow)**: 이벤트 기반의 세밀한 제어와 단일 LLM 호출을 통한 정확한 작업 orchestration을 지원하며, Crews와 네이티브로 통합됩니다.

 10만 명이 넘는 개발자가 커뮤니티 과정을 통해 인증을 받았으며, CrewAI는 기업용 AI 자동화의 표준으로 빠르게 자리잡고 있습니다.

-## 크루 작동 방식
+## Crew의 작동 방식

 <Note>
-  회사가 비즈니스 목표를 달성하기 위해 여러 부서(영업, 엔지니어링, 마케팅 등)가 리더십 아래에서 함께 일하는 것처럼, CrewAI는 복잡한 작업을 달성하기 위해 전문화된 역할의 AI 에이전트들이 협력하는 조직을 만들 수 있도록 도와줍니다.
+  회사가 비즈니스 목표를 달성하기 위해 여러 부서(영업, 엔지니어링, 마케팅 등)가 리더십 아래에서 함께 일하는 것처럼, CrewAI는 복잡한 작업을 달성하기 위해 전문화된 역할의 AI agent들이 협력하는 조직을 만들 수 있도록 도와줍니다.
 </Note>

-<Frame caption="CrewAI 프레임워크 개요">
+<Frame caption="CrewAI Framework Overview">
   <img src="/images/crews.png" alt="CrewAI Framework Overview" />
 </Frame>

-| 구성 요소 | 설명 | 주요 특징 |
-|:--------------|:---------------------:|:----------|
-| **크루** | 최상위 조직 | • AI 에이전트 팀 관리<br/>• 워크플로우 감독<br/>• 협업 보장<br/>• 결과 전달 |
-| **AI 에이전트** | 전문 팀원 | • 특정 역할 보유(연구원, 작가 등)<br/>• 지정된 도구 사용<br/>• 작업 위임 가능<br/>• 자율적 의사결정 가능 |
-| **프로세스** | 워크플로우 관리 시스템 | • 협업 패턴 정의<br/>• 작업 할당 제어<br/>• 상호작용 관리<br/>• 효율적 실행 보장 |
-| **작업** | 개별 할당 | • 명확한 목표 보유<br/>• 특정 도구 사용<br/>• 더 큰 프로세스에 기여<br/>• 실행 가능한 결과 도출 |
+| 구성 요소 | 설명 | 주요 특징 |
+|:----------|:----:|:----------|
+| **Crew** | 최상위 조직 | • AI agent 팀 관리<br/>• workflow 감독<br/>• 협업 보장<br/>• 결과 전달 |
+| **AI agents** | 전문 팀원 | • 특정 역할 보유(Researcher, Writer 등)<br/>• 지정된 도구 사용<br/>• 작업 위임 가능<br/>• 자율적 의사결정 가능 |
+| **Process** | workflow 관리 시스템 | • 협업 패턴 정의<br/>• 작업 할당 제어<br/>• 상호작용 관리<br/>• 효율적 실행 보장 |
+| **Task** | 개별 할당 | • 명확한 목표 보유<br/>• 특정 도구 사용<br/>• 더 큰 프로세스에 기여<br/>• 실행 가능한 결과 도출 |

-### 어떻게 모두 함께 작동하는가
+### 전체 구조의 동작 방식

 1. **Crew**가 전체 운영을 조직합니다
-2. **AI Agents**가 자신들의 전문 작업을 수행합니다
+2. **AI agents**가 자신들의 전문 작업을 수행합니다
 3. **Process**가 원활한 협업을 보장합니다
 4. **Tasks**가 완료되어 목표를 달성합니다

 ## 주요 기능

 <CardGroup cols={2}>
-  <Card title="역할 기반 에이전트" icon="users">
-    연구원, 분석가, 작가 등 다양한 역할, 전문성, 목표를 가진 전문 에이전트를 생성할 수 있습니다
+  <Card title="역할 기반 agent" icon="users">
+    Researcher, Analyst, Writer 등 다양한 역할과 전문성, 목표를 가진 agent를 생성할 수 있습니다
   </Card>
   <Card title="유연한 도구" icon="screwdriver-wrench">
-    에이전트에게 외부 서비스 및 데이터 소스와 상호작용할 수 있는 맞춤형 도구와 API를 제공합니다
+    agent에게 외부 서비스 및 데이터 소스와 상호작용할 수 있는 맞춤형 도구와 API를 제공합니다
   </Card>
   <Card title="지능형 협업" icon="people-arrows">
-    에이전트가 함께 작업하며, 인사이트를 공유하고 작업을 조율하여 복잡한 목표를 달성합니다
+    agent들이 함께 작업하며, 인사이트를 공유하고 작업을 조율하여 복잡한 목표를 달성합니다
   </Card>
   <Card title="작업 관리" icon="list-check">
-    순차적 또는 병렬 워크플로우를 정의할 수 있으며, 에이전트가 작업 의존성을 자동으로 처리합니다
+    순차적 또는 병렬 workflow를 정의할 수 있으며, agent가 작업 의존성을 자동으로 처리합니다
   </Card>
 </CardGroup>
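To make the Crew/Agent/Process/Task table above concrete, here is a minimal sketch using the public `crewai` API; the role, goal, and topic strings are illustrative only:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Find accurate, up-to-date facts about a topic",
    backstory="A meticulous analyst who cites sources.",
)

summary_task = Task(
    description="Summarize the key findings about {topic}",
    expected_output="A short bullet-point summary",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summary_task])
result = crew.kickoff(inputs={"topic": "AI agents"})
print(result.raw)  # final output of the last task
```

By default tasks run sequentially; passing `process=Process.hierarchical` with a `manager_llm` instead adds a manager that delegates between agents, which is the collaboration pattern the table describes.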
-## 플로우의 작동 원리
+## Flow의 작동 원리

 <Note>
-  crew는 자율 협업에 탁월한 반면, 플로우는 구조화된 자동화를 제공하여 워크플로우 실행에 대한 세밀한 제어를 제공합니다. 플로우는 조건부 로직, 반복문, 동적 상태 관리를 정확하게 처리하면서 작업이 신뢰성 있게, 안전하게, 효율적으로 실행되도록 보장합니다. 플로우는 crew와 원활하게 통합되어 높은 자율성과 엄격한 제어의 균형을 이룰 수 있게 해줍니다.
+  Crew가 자율 협업에 탁월하다면, Flow는 구조화된 자동화를 제공하여 workflow 실행에 대한 세밀한 제어를 제공합니다. Flow는 조건부 로직, 반복문, 동적 상태 관리를 정확하게 처리하면서 작업이 신뢰성 있게, 안전하게, 효율적으로 실행되도록 보장합니다. Flow는 Crew와 원활하게 통합되어 높은 자율성과 엄격한 제어의 균형을 이룰 수 있게 해줍니다.
 </Note>

 <Frame caption="CrewAI Framework Overview">
@@ -68,41 +68,41 @@ CrewAI는 고수준의 간편함과 정밀한 저수준 제어를 모두 제공

 | 구성 요소 | 설명 | 주요 기능 |
 |:----------|:-----------:|:------------|
-| **Flow** | 구조화된 워크플로우 오케스트레이션 | • 실행 경로 관리<br/>• 상태 전환 처리<br/>• 작업 순서 제어<br/>• 신뢰성 있는 실행 보장 |
-| **Events** | 워크플로우 액션 트리거 | • 특정 프로세스 시작<br/>• 동적 응답 가능<br/>• 조건부 분기 지원<br/>• 실시간 적응 허용 |
-| **States** | 워크플로우 실행 컨텍스트 | • 실행 데이터 유지<br/>• 데이터 영속성 지원<br/>• 재개 가능성 보장<br/>• 실행 무결성 확보 |
-| **Crew Support** | 워크플로우 자동화 강화 | • 필요할 때 agency 삽입<br/>• 구조화된 워크플로우 보완<br/>• 자동화와 인텔리전스의 균형<br/>• 적응적 의사결정 지원 |
+| **Flow** | 구조화된 workflow orchestration | • 실행 경로 관리<br/>• 상태 전환 처리<br/>• 작업 순서 제어<br/>• 신뢰성 있는 실행 보장 |
+| **Events** | workflow 액션 트리거 | • 특정 프로세스 시작<br/>• 동적 응답 가능<br/>• 조건부 분기 지원<br/>• 실시간 적응 허용 |
+| **States** | workflow 실행 컨텍스트 | • 실행 데이터 유지<br/>• 데이터 영속성 지원<br/>• 재개 가능성 보장<br/>• 실행 무결성 확보 |
+| **Crew Support** | workflow 자동화 강화 | • 필요할 때 agency 삽입<br/>• 구조화된 workflow 보완<br/>• 자동화와 인텔리전스의 균형<br/>• 적응적 의사결정 지원 |

 ### 주요 기능

 <CardGroup cols={2}>
-  <Card title="이벤트 기반 오케스트레이션" icon="bolt">
-    이벤트에 동적으로 반응하여 정밀한 실행 경로 정의
+  <Card title="이벤트 기반 orchestration" icon="bolt">
+    이벤트에 동적으로 반응하여 정밀한 실행 경로를 정의합니다
   </Card>
   <Card title="세밀한 제어" icon="sliders">
-    워크플로우 상태와 조건부 실행을 안전하고 효율적으로 관리
+    workflow 상태와 조건부 실행을 안전하고 효율적으로 관리합니다
   </Card>
   <Card title="네이티브 Crew 통합" icon="puzzle-piece">
-    Crews와 손쉽게 결합하여 자율성과 지능 강화
+    Crews와 손쉽게 결합하여 자율성과 지능을 강화합니다
   </Card>
   <Card title="결정론적 실행" icon="route">
-    명시적 제어 흐름과 오류 처리로 예측 가능한 결과 보장
+    명시적 제어 흐름과 오류 처리로 예측 가능한 결과를 보장합니다
   </Card>
 </CardGroup>
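A minimal, hedged sketch of the event-driven Flow pattern described above, using crewai's `Flow` with the `@start`/`@listen` decorators; the method names and return values are illustrative:

```python
from crewai.flow.flow import Flow, listen, start


class ResearchFlow(Flow):
    @start()
    def collect_topic(self):
        # Entry point: its return value is passed to listeners.
        return "AI agents"

    @listen(collect_topic)
    def summarize(self, topic):
        # Runs only after collect_topic finishes; receives its output.
        return f"Summary about {topic}"


flow = ResearchFlow()
print(flow.kickoff())  # output of the final step in the chain
```

A step like `summarize` could just as well kick off a Crew, which is the "Crew Support" row in the table above.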
-## 크루(Crews)와 플로우(Flows)를 언제 사용할까
+## Crew와 Flow를 언제 사용할까

 <Note>
-  [크루](/ko/guides/crews/first-crew)와 [플로우](/ko/guides/flows/first-flow)를 언제 사용할지 이해하는 것은 CrewAI의 잠재력을 애플리케이션에서 극대화하는 데 핵심적입니다.
+  [Crew](/ko/guides/crews/first-crew)와 [Flow](/ko/guides/flows/first-flow)를 언제 사용할지 이해하는 것은 CrewAI의 잠재력을 애플리케이션에서 극대화하는 데 핵심적입니다.
 </Note>

 | 사용 사례 | 권장 접근 방식 | 이유 |
 |:---------|:---------------------|:-----|
-| **개방형 연구** | [크루](/ko/guides/crews/first-crew) | 과제가 창의적인 사고, 탐색, 적응이 필요할 때 |
-| **콘텐츠 생성** | [크루](/ko/guides/crews/first-crew) | 기사, 보고서, 마케팅 자료 등 협업형 생성 시 |
-| **의사결정 워크플로우** | [플로우](/ko/guides/flows/first-flow) | 예측 가능하고 감사 가능한 의사결정 경로 및 정밀 제어가 필요할 때 |
-| **API 오케스트레이션** | [플로우](/ko/guides/flows/first-flow) | 특정 순서로 여러 외부 서비스에 신뢰성 있게 통합할 때 |
-| **하이브리드 애플리케이션** | 혼합 접근 방식 | [플로우](/ko/guides/flows/first-flow)로 전체 프로세스를 오케스트레이션하고, [크루](/ko/guides/crews/first-crew)로 복잡한 하위 작업을 처리 |
+| **개방형 연구** | [Crew](/ko/guides/crews/first-crew) | 창의적 사고, 탐색, 적응이 필요한 작업에 적합 |
+| **콘텐츠 생성** | [Crew](/ko/guides/crews/first-crew) | 기사, 보고서, 마케팅 자료 등 협업형 생성에 적합 |
+| **의사결정 workflow** | [Flow](/ko/guides/flows/first-flow) | 예측 가능하고 감사 가능한 의사결정 경로 및 정밀 제어가 필요할 때 |
+| **API orchestration** | [Flow](/ko/guides/flows/first-flow) | 특정 순서로 여러 외부 서비스에 신뢰성 있게 통합할 때 |
+| **하이브리드 애플리케이션** | 혼합 접근 방식 | [Flow](/ko/guides/flows/first-flow)로 전체 프로세스를 orchestration하고, [Crew](/ko/guides/crews/first-crew)로 복잡한 하위 작업을 처리 |

 ### 의사결정 프레임워크

@@ -112,8 +112,8 @@ CrewAI는 고수준의 간편함과 정밀한 저수준 제어를 모두 제공

 ## CrewAI를 선택해야 하는 이유?

-- 🧠 **자율적 운영**: 에이전트가 자신의 역할과 사용 가능한 도구를 바탕으로 지능적인 결정을 내립니다
-- 📝 **자연스러운 상호작용**: 에이전트가 인간 팀원처럼 소통하고 협업합니다
+- 🧠 **자율적 운영**: agent가 자신의 역할과 사용 가능한 도구를 바탕으로 지능적인 결정을 내립니다
+- 📝 **자연스러운 상호작용**: agent가 인간 팀원처럼 소통하고 협업합니다
 - 🛠️ **확장 가능한 설계**: 새로운 도구, 역할, 기능을 쉽게 추가할 수 있습니다
 - 🚀 **프로덕션 준비 완료**: 실제 환경에서의 신뢰성과 확장성을 고려하여 구축되었습니다
 - 🔒 **보안 중심**: 엔터프라이즈 보안 요구 사항을 고려하여 설계되었습니다

@@ -134,7 +134,7 @@ CrewAI는 고수준의 간편함과 정밀한 저수준 제어를 모두 제공
   icon="diagram-project"
   href="/ko/guides/flows/first-flow"
 >
-  실행을 정밀하게 제어할 수 있는 구조화된, 이벤트 기반 워크플로우를 만드는 방법을 배워보세요.
+  실행을 정밀하게 제어할 수 있는 구조화된, 이벤트 기반 workflow를 만드는 방법을 배워보세요.
 </Card>
 </CardGroup>

@@ -151,7 +151,7 @@ CrewAI는 고수준의 간편함과 정밀한 저수준 제어를 모두 제공
   icon="bolt"
   href="ko/quickstart"
 >
-  빠른 시작 가이드를 따라 첫 번째 CrewAI 에이전트를 만들고 직접 경험해 보세요.
+  빠른 시작 가이드를 따라 첫 번째 CrewAI agent를 만들고 직접 경험해 보세요.
 </Card>
 <Card
   title="커뮤니티 가입하기"
@@ -10,7 +10,7 @@ authors = [
 dependencies = [
     # Core Dependencies
     "pydantic>=2.4.2",
-    "openai>=1.13.3",
+    "openai<1.100.0", # TODO: Temporarily pin due to https://github.com/openai/openai-python/issues/2564. Can revert once the issue is fixed.
     "litellm==1.74.9",
     "instructor>=1.3.3",
     # Text Processing
@@ -48,7 +48,7 @@ Documentation = "https://docs.crewai.com"
 Repository = "https://github.com/crewAIInc/crewAI"

 [project.optional-dependencies]
-tools = ["crewai-tools~=0.62.0"]
+tools = ["crewai-tools~=0.62.1"]
 embeddings = [
     "tiktoken~=0.8.0"
 ]
@@ -98,6 +98,11 @@ exclude = ["cli/templates"]
 [tool.bandit]
 exclude_dirs = ["src/crewai/cli/templates"]

+[tool.pytest.ini_options]
+markers = [
+    "telemetry: mark test as a telemetry test (don't mock telemetry)",
+]
+
 # PyTorch index configuration, since torch 2.5.0 is not compatible with python 3.13
 [[tool.uv.index]]
 name = "pytorch-nightly"
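The `[tool.pytest.ini_options]` block added above registers a `telemetry` marker; a hedged sketch of how a test would opt into it (the test name and body are illustrative):

```python
import pytest


@pytest.mark.telemetry
def test_sends_real_telemetry():
    # Tests carrying this marker are the ones where telemetry is not mocked,
    # per the marker description registered in pyproject.toml.
    ...
```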
@@ -54,7 +54,7 @@ def _track_install_async():

 _track_install_async()

-__version__ = "0.159.0"
+__version__ = "0.165.0"
 __all__ = [
     "Agent",
     "Crew",
@@ -1,7 +1,7 @@
|
||||
import shutil
|
||||
import subprocess
|
||||
import time
|
||||
from typing import Any, Callable, Dict, List, Literal, Optional, Sequence, Tuple, Type, Union, cast
|
||||
from typing import Any, Callable, Dict, List, Literal, Optional, Sequence, Tuple, Type, Union
|
||||
|
||||
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
|
||||
|
||||
@@ -399,13 +399,8 @@ class Agent(BaseAgent):
|
||||
),
|
||||
)
|
||||
|
||||
if tools is not None:
|
||||
agent_tools: List[Union[BaseTool, dict]] = cast(List[Union[BaseTool, dict]], tools)
|
||||
elif self.tools is not None:
|
||||
agent_tools = cast(List[Union[BaseTool, dict]], self.tools)
|
||||
else:
|
||||
agent_tools = []
|
||||
self.create_agent_executor(tools=agent_tools, task=task)
|
||||
tools = tools or self.tools or []
|
||||
self.create_agent_executor(tools=tools, task=task)
|
||||
|
||||
if self.crew and self.crew._train:
|
||||
task_prompt = self._training_handler(task_prompt=task_prompt)
|
||||
@@ -542,14 +537,14 @@ class Agent(BaseAgent):
|
||||
)["output"]
|
||||
|
||||
def create_agent_executor(
|
||||
self, tools: Optional[List[Union[BaseTool, dict]]] = None, task=None
|
||||
self, tools: Optional[List[BaseTool]] = None, task=None
|
||||
) -> None:
|
||||
"""Create an agent executor for the agent.
|
||||
|
||||
Returns:
|
||||
An instance of the CrewAgentExecutor class.
|
||||
"""
|
||||
raw_tools: List[Union[BaseTool, dict]] = tools or self.tools or []
|
||||
raw_tools: List[BaseTool] = tools or self.tools or []
|
||||
parsed_tools = parse_tools(raw_tools)
|
||||
|
||||
prompt = Prompts(
|
||||
@@ -808,7 +803,7 @@ class Agent(BaseAgent):
|
||||
goal=self.goal,
|
||||
backstory=self.backstory,
|
||||
llm=self.llm,
|
||||
tools=[tool for tool in (self.tools or []) if isinstance(tool, BaseTool)],
|
||||
tools=self.tools or [],
|
||||
max_iterations=self.max_iter,
|
||||
max_execution_time=self.max_execution_time,
|
||||
respect_context_window=self.respect_context_window,
|
||||
@@ -846,7 +841,7 @@ class Agent(BaseAgent):
|
||||
goal=self.goal,
|
||||
backstory=self.backstory,
|
||||
llm=self.llm,
|
||||
tools=[tool for tool in (self.tools or []) if isinstance(tool, BaseTool)],
|
||||
tools=self.tools or [],
|
||||
max_iterations=self.max_iter,
|
||||
max_execution_time=self.max_execution_time,
|
||||
respect_context_window=self.respect_context_window,
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Any, Dict, List, Optional, Union
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from pydantic import PrivateAttr
|
||||
|
||||
@@ -25,11 +25,11 @@ class BaseAgentAdapter(BaseAgent, ABC):
|
||||
self._agent_config = agent_config
|
||||
|
||||
@abstractmethod
|
||||
def configure_tools(self, tools: Optional[List[Union[BaseTool, dict]]] = None) -> None:
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure and adapt tools for the specific agent implementation.
|
||||
|
||||
Args:
|
||||
tools: Optional list of BaseTool instances and raw tool definitions to be configured
|
||||
tools: Optional list of BaseTool instances to be configured
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import Any, Dict, List, Optional, Union
|
||||
from typing import Any, AsyncIterable, Dict, List, Optional
|
||||
|
||||
from pydantic import Field, PrivateAttr
|
||||
|
||||
@@ -22,6 +22,7 @@ from crewai.utilities.events.agent_events import (
|
||||
)
|
||||
|
||||
try:
|
||||
from langchain_core.messages import ToolMessage
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
from langgraph.prebuilt import create_react_agent
|
||||
|
||||
@@ -127,8 +128,7 @@ class LangGraphAgentAdapter(BaseAgentAdapter):
|
||||
tools: Optional[List[BaseTool]] = None,
|
||||
) -> str:
|
||||
"""Execute a task using the LangGraph workflow."""
|
||||
mixed_tools: Optional[List[Union[BaseTool, dict]]] = tools # type: ignore[assignment]
|
||||
self.create_agent_executor(mixed_tools)
|
||||
self.create_agent_executor(tools)
|
||||
|
||||
self.configure_structured_output(task)
|
||||
|
||||
@@ -198,20 +198,17 @@ class LangGraphAgentAdapter(BaseAgentAdapter):
|
||||
)
|
||||
raise
|
||||
|
||||
def create_agent_executor(self, tools: Optional[List[Union[BaseTool, dict]]] = None) -> None:
|
||||
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure the LangGraph agent for execution."""
|
||||
self.configure_tools(tools)
|
||||
|
||||
def configure_tools(self, tools: Optional[List[Union[BaseTool, dict]]] = None) -> None:
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure tools for the LangGraph agent."""
|
||||
if tools:
|
||||
base_tools = [tool for tool in tools if isinstance(tool, BaseTool)]
|
||||
existing_base_tools = [tool for tool in (self.tools or []) if isinstance(tool, BaseTool)]
|
||||
all_tools = existing_base_tools + base_tools
|
||||
if all_tools:
|
||||
self._tool_adapter.configure_tools(all_tools)
|
||||
available_tools = self._tool_adapter.tools()
|
||||
self._graph.tools = available_tools
|
||||
all_tools = list(self.tools or []) + list(tools or [])
|
||||
self._tool_adapter.configure_tools(all_tools)
|
||||
available_tools = self._tool_adapter.tools()
|
||||
self._graph.tools = available_tools
|
||||
|
||||
def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
|
||||
"""Implement delegation tools support for LangGraph."""
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import Any, List, Optional, Union
|
||||
from typing import Any, List, Optional
|
||||
|
||||
from pydantic import Field, PrivateAttr
|
||||
|
||||
@@ -152,12 +152,10 @@ class OpenAIAgentAdapter(BaseAgentAdapter):
|
||||
|
||||
self.agent_executor = Runner
|
||||
|
||||
def configure_tools(self, tools: Optional[List[Union[BaseTool, dict]]] = None) -> None:
|
||||
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
|
||||
"""Configure tools for the OpenAI Assistant"""
|
||||
if tools:
|
||||
base_tools = [tool for tool in tools if isinstance(tool, BaseTool)]
|
||||
if base_tools:
|
||||
self._tool_adapter.configure_tools(base_tools)
|
||||
self._tool_adapter.configure_tools(tools)
|
||||
if self._tool_adapter.converted_tools:
|
||||
self._openai_agent.tools = self._tool_adapter.converted_tools
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@ import uuid
|
||||
from abc import ABC, abstractmethod
|
||||
from copy import copy as shallow_copy
|
||||
from hashlib import md5
|
||||
from typing import Any, Callable, Dict, List, Optional, TypeVar, Union
|
||||
from typing import Any, Callable, Dict, List, Optional, TypeVar
|
||||
|
||||
from pydantic import (
|
||||
UUID4,
|
||||
@@ -25,6 +25,7 @@ from crewai.security.security_config import SecurityConfig
|
||||
from crewai.tools.base_tool import BaseTool, Tool
|
||||
from crewai.utilities import I18N, Logger, RPMController
|
||||
from crewai.utilities.config import process_config
|
||||
from crewai.utilities.converter import Converter
|
||||
from crewai.utilities.string_utils import interpolate_only
|
||||
|
||||
T = TypeVar("T", bound="BaseAgent")
|
||||
@@ -107,7 +108,7 @@ class BaseAgent(ABC, BaseModel):
|
||||
default=False,
|
||||
description="Enable agent to delegate and ask questions among each other.",
|
||||
)
|
||||
tools: Optional[List[Union[BaseTool, dict]]] = Field(
|
||||
tools: Optional[List[BaseTool]] = Field(
|
||||
default_factory=list, description="Tools at agents' disposal"
|
||||
)
|
||||
max_iter: int = Field(
|
||||
@@ -167,34 +168,29 @@ class BaseAgent(ABC, BaseModel):
|
||||
|
||||
@field_validator("tools")
|
||||
@classmethod
|
||||
def validate_tools(cls, tools: List[Any]) -> List[Union[BaseTool, dict]]:
|
||||
def validate_tools(cls, tools: List[Any]) -> List[BaseTool]:
|
||||
"""Validate and process the tools provided to the agent.
|
||||
|
||||
This method ensures that each tool is either an instance of BaseTool,
|
||||
an object with 'name', 'func', and 'description' attributes, or a
|
||||
raw tool definition (dict) for hosted/server-side tools.
|
||||
This method ensures that each tool is either an instance of BaseTool
|
||||
or an object with 'name', 'func', and 'description' attributes. If the
|
||||
tool meets these criteria, it is processed and added to the list of
|
||||
tools. Otherwise, a ValueError is raised.
|
||||
"""
|
||||
if not tools:
|
||||
return []
|
||||
|
||||
processed_tools: List[Union[BaseTool, dict]] = []
|
||||
processed_tools = []
|
||||
required_attrs = ["name", "func", "description"]
|
||||
for tool in tools:
|
||||
if isinstance(tool, BaseTool):
|
||||
processed_tools.append(tool)
|
||||
elif isinstance(tool, dict):
|
||||
if "name" not in tool:
|
||||
raise ValueError(
|
||||
f"Raw tool definition must have a 'name' field: {tool}"
|
||||
)
|
||||
processed_tools.append(tool)
|
||||
elif all(hasattr(tool, attr) for attr in required_attrs):
|
||||
# Tool has the required attributes, create a Tool instance
|
||||
processed_tools.append(Tool.from_langchain(tool))
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Invalid tool type: {type(tool)}. "
|
||||
"Tool must be an instance of BaseTool, a dict for hosted tools, or "
|
||||
"Tool must be an instance of BaseTool or "
|
||||
"an object with 'name', 'func', and 'description' attributes."
|
||||
)
|
||||
return processed_tools
|
||||
|
||||
@@ -48,7 +48,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
agent: BaseAgent,
|
||||
prompt: dict[str, str],
|
||||
max_iter: int,
|
||||
tools: List[Union[CrewStructuredTool, dict]],
|
||||
tools: List[CrewStructuredTool],
|
||||
tools_names: str,
|
||||
stop_words: List[str],
|
||||
tools_description: str,
|
||||
@@ -84,8 +84,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
self.messages: List[Dict[str, str]] = []
|
||||
self.iterations = 0
|
||||
self.log_error_after = 3
|
||||
self.tool_name_to_tool_map: Dict[str, Union[CrewStructuredTool, BaseTool, dict]] = {
|
||||
tool.name if hasattr(tool, 'name') else tool.get('name', 'unknown'): tool for tool in self.tools
|
||||
self.tool_name_to_tool_map: Dict[str, Union[CrewStructuredTool, BaseTool]] = {
|
||||
tool.name: tool for tool in self.tools
|
||||
}
|
||||
existing_stop = self.llm.stop or []
|
||||
self.llm.stop = list(
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
|
||||
authors = [{ name = "Your Name", email = "you@example.com" }]
|
||||
requires-python = ">=3.10,<3.14"
|
||||
dependencies = [
|
||||
"crewai[tools]>=0.159.0,<1.0.0"
|
||||
"crewai[tools]>=0.165.0,<1.0.0"
|
||||
]
|
||||
|
||||
[project.scripts]
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
|
||||
authors = [{ name = "Your Name", email = "you@example.com" }]
|
||||
requires-python = ">=3.10,<3.14"
|
||||
dependencies = [
|
||||
"crewai[tools]>=0.159.0,<1.0.0",
|
||||
"crewai[tools]>=0.165.0,<1.0.0",
|
||||
]
|
||||
|
||||
[project.scripts]
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
|
||||
readme = "README.md"
|
||||
requires-python = ">=3.10,<3.14"
|
||||
dependencies = [
|
||||
"crewai[tools]>=0.159.0"
|
||||
"crewai[tools]>=0.165.0"
|
||||
]
|
||||
|
||||
[tool.crewai]
|
||||
|
||||
@@ -286,10 +286,12 @@ class Crew(FlowTrackable, BaseModel):
|
||||
|
||||
self._cache_handler = CacheHandler()
|
||||
event_listener = EventListener()
|
||||
if on_first_execution_tracing_confirmation():
|
||||
self.tracing = True
|
||||
|
||||
if is_tracing_enabled() or self.tracing:
|
||||
if (
|
||||
on_first_execution_tracing_confirmation()
|
||||
or is_tracing_enabled()
|
||||
or self.tracing
|
||||
):
|
||||
trace_listener = TraceCollectionListener()
|
||||
trace_listener.setup_listeners(crewai_event_bus)
|
||||
event_listener.verbose = self.verbose
|
||||
@@ -503,6 +505,7 @@ class Crew(FlowTrackable, BaseModel):
|
||||
)
|
||||
return self
|
||||
|
||||
|
||||
@property
|
||||
def key(self) -> str:
|
||||
source: List[str] = [agent.key for agent in self.agents] + [
|
||||
@@ -639,6 +642,7 @@ class Crew(FlowTrackable, BaseModel):
|
||||
self._inputs = inputs
|
||||
self._interpolate_inputs(inputs)
|
||||
self._set_tasks_callbacks()
|
||||
self._set_allow_crewai_trigger_context_for_first_task()
|
||||
|
||||
i18n = I18N(prompt_file=self.prompt_file)
|
||||
|
||||
@@ -1508,3 +1512,10 @@ class Crew(FlowTrackable, BaseModel):
|
||||
"""Reset crew and agent knowledge storage."""
|
||||
for ks in knowledges:
|
||||
ks.reset()
|
||||
|
||||
def _set_allow_crewai_trigger_context_for_first_task(self):
|
||||
crewai_trigger_payload = self._inputs and self._inputs.get("crewai_trigger_payload")
|
||||
able_to_inject = self.tasks and self.tasks[0].allow_crewai_trigger_context is None
|
||||
|
||||
if self.process == Process.sequential and crewai_trigger_payload and able_to_inject:
|
||||
self.tasks[0].allow_crewai_trigger_context = True
|
||||
|
||||
@@ -43,9 +43,7 @@ class ToolSelectionEvaluator(BaseEvaluator):
|
||||
available_tools_info = ""
|
||||
if agent.tools:
|
||||
for tool in agent.tools:
|
||||
tool_name = tool.name if hasattr(tool, 'name') else tool.get('name', 'unknown')
|
||||
tool_desc = tool.description if hasattr(tool, 'description') else tool.get('description', 'No description')
|
||||
available_tools_info += f"- {tool_name}: {tool_desc}\n"
|
||||
available_tools_info += f"- {tool.name}: {tool.description}\n"
|
||||
else:
|
||||
available_tools_info = "No tools available"
|
||||
|
||||
|
||||
@@ -11,6 +11,8 @@ import chromadb.errors
|
||||
from chromadb.api import ClientAPI
|
||||
from chromadb.api.types import OneOrMany
|
||||
from chromadb.config import Settings
|
||||
from pydantic.warnings import PydanticDeprecatedSince211
|
||||
import warnings
|
||||
|
||||
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
|
||||
from crewai.rag.embeddings.configurator import EmbeddingConfigurator
|
||||
@@ -85,6 +87,15 @@ class KnowledgeStorage(BaseKnowledgeStorage):
|
||||
raise Exception("Collection not initialized")
|
||||
|
||||
def initialize_knowledge_storage(self):
|
||||
# Suppress deprecation warnings from chromadb, which are not relevant to us
|
||||
# TODO: Remove this once we upgrade chromadb to at least 1.0.8.
|
||||
warnings.filterwarnings(
|
||||
"ignore",
|
||||
category=PydanticDeprecatedSince211,
|
||||
message=r".*'model_fields'.*is deprecated.*",
|
||||
module=r"^chromadb(\.|$)",
|
||||
)
|
||||
|
||||
self.app = create_persistent_client(
|
||||
path=os.path.join(db_storage_path(), "knowledge"),
|
||||
settings=Settings(allow_reset=True),
|
||||
|
||||
@@ -564,10 +564,9 @@ class LiteAgent(FlowTrackable, BaseModel):
|
||||
|
||||
if isinstance(formatted_answer, AgentAction):
|
||||
try:
|
||||
from typing import cast
|
||||
tool_result = execute_tool_and_check_finality(
|
||||
agent_action=formatted_answer,
|
||||
tools=cast(List[Union[CrewStructuredTool, dict]], self._parsed_tools),
|
||||
tools=self._parsed_tools,
|
||||
i18n=self.i18n,
|
||||
agent_key=self.key,
|
||||
agent_role=self.role,
|
||||
|
||||
@@ -12,6 +12,8 @@ from crewai.rag.embeddings.configurator import EmbeddingConfigurator
|
||||
from crewai.utilities.chromadb import create_persistent_client
|
||||
from crewai.utilities.constants import MAX_FILE_NAME_LENGTH
|
||||
from crewai.utilities.paths import db_storage_path
|
||||
import warnings
|
||||
from pydantic.warnings import PydanticDeprecatedSince211
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
@@ -62,6 +64,15 @@ class RAGStorage(BaseRAGStorage):
|
||||
def _initialize_app(self):
|
||||
from chromadb.config import Settings
|
||||
|
||||
# Suppress deprecation warnings from chromadb, which are not relevant to us
|
||||
# TODO: Remove this once we upgrade chromadb to at least 1.0.8.
|
||||
warnings.filterwarnings(
|
||||
"ignore",
|
||||
category=PydanticDeprecatedSince211,
|
||||
message=r".*'model_fields'.*is deprecated.*",
|
||||
module=r"^chromadb(\.|$)",
|
||||
)
|
||||
|
||||
self._set_embedder_config()
|
||||
|
||||
self.app = create_persistent_client(
|
||||
|
||||
@@ -72,6 +72,10 @@ class Task(BaseModel):
         output_pydantic: Pydantic model for task output.
         security_config: Security configuration including fingerprinting.
         tools: List of tools/resources limited for task execution.
+        allow_crewai_trigger_context: Optional flag to control crewai_trigger_payload injection.
+            None (default): Auto-inject for first task only.
+            True: Always inject trigger payload for this task.
+            False: Never inject trigger payload, even for first task.
     """

     __hash__ = object.__hash__  # type: ignore
@@ -163,6 +167,10 @@
     end_time: Optional[datetime.datetime] = Field(
         default=None, description="End time of the task execution"
     )
+    allow_crewai_trigger_context: Optional[bool] = Field(
+        default=None,
+        description="Whether this task should append 'Trigger Payload: {crewai_trigger_payload}' to the task description when crewai_trigger_payload exists in crew inputs.",
+    )
     model_config = {"arbitrary_types_allowed": True}

     @field_validator("guardrail")
@@ -548,12 +556,23 @@
             str: The formatted prompt string containing the task description,
                 expected output, and optional markdown formatting instructions.
         """
-        tasks_slices = [self.description]
+        description = self.description
+
+        should_inject = self.allow_crewai_trigger_context
+
+        if should_inject and self.agent:
+            crew = getattr(self.agent, 'crew', None)
+            if crew and hasattr(crew, '_inputs') and crew._inputs:
+                trigger_payload = crew._inputs.get("crewai_trigger_payload")
+                if trigger_payload is not None:
+                    description += f"\n\nTrigger Payload: {trigger_payload}"
+
+        tasks_slices = [description]

         output = self.i18n.slice("expected_output").format(
             expected_output=self.expected_output
         )
-        tasks_slices = [self.description, output]
+        tasks_slices = [description, output]

         if self.markdown:
             markdown_instruction = """Your final answer MUST be formatted in Markdown syntax.
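Putting the new `allow_crewai_trigger_context` field together with `Crew.kickoff`, a short sketch based on the behavior described in the docstring above and exercised by the tests later in this diff; the role/goal strings are illustrative:

```python
from crewai import Agent, Crew, Task

analyst = Agent(role="Analyst", goal="Analyze data", backstory="A data analyst")

report = Task(
    description="Analyze the data",
    expected_output="Analysis report",
    agent=analyst,
    allow_crewai_trigger_context=True,  # force injection; None auto-injects only for the first task
)

crew = Crew(agents=[analyst], tasks=[report])
crew.kickoff({"crewai_trigger_payload": "Important context data"})

# The rendered task prompt now ends with the injected payload:
# "... Trigger Payload: Important context data"
print(report.prompt())
```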
@@ -11,6 +11,7 @@ from crewai.agents.parser import (
|
||||
)
|
||||
from crewai.llm import LLM
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.tools import BaseTool as CrewAITool
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
from crewai.tools.structured_tool import CrewStructuredTool
|
||||
from crewai.tools.tool_types import ToolResult
|
||||
@@ -24,28 +25,26 @@ from crewai.cli.config import Settings
|
||||
|
||||
console = Console()
|
||||
|
||||
def parse_tools(tools: List[Union[BaseTool, dict]]) -> List[Union[CrewStructuredTool, dict]]:
|
||||
def parse_tools(tools: List[BaseTool]) -> List[CrewStructuredTool]:
|
||||
"""Parse tools to be used for the task."""
|
||||
tools_list: List[Union[CrewStructuredTool, dict]] = []
|
||||
tools_list = []
|
||||
|
||||
for tool in tools:
|
||||
if isinstance(tool, dict):
|
||||
tools_list.append(tool)
|
||||
elif isinstance(tool, BaseTool):
|
||||
if isinstance(tool, CrewAITool):
|
||||
tools_list.append(tool.to_structured_tool())
|
||||
else:
|
||||
raise ValueError(f"Tool is not a CrewStructuredTool, BaseTool, or raw tool definition (dict): {type(tool)}")
|
||||
raise ValueError("Tool is not a CrewStructuredTool or BaseTool")
|
||||
|
||||
return tools_list
|
||||
|
||||
|
||||
def get_tool_names(tools: Sequence[Union[CrewStructuredTool, BaseTool, dict]]) -> str:
|
||||
def get_tool_names(tools: Sequence[Union[CrewStructuredTool, BaseTool]]) -> str:
|
||||
"""Get the names of the tools."""
|
||||
return ", ".join([t.name if hasattr(t, 'name') else t.get('name', 'unknown') for t in tools])
|
||||
return ", ".join([t.name for t in tools])
|
||||
|
||||
|
||||
def render_text_description_and_args(
|
||||
tools: Sequence[Union[CrewStructuredTool, BaseTool, dict]],
|
||||
tools: Sequence[Union[CrewStructuredTool, BaseTool]],
|
||||
) -> str:
|
||||
"""Render the tool name, description, and args in plain text.
|
||||
|
||||
@@ -55,12 +54,7 @@ def render_text_description_and_args(
|
||||
"""
|
||||
tool_strings = []
|
||||
for tool in tools:
|
||||
if hasattr(tool, 'description'):
|
||||
tool_strings.append(tool.description)
|
||||
elif isinstance(tool, dict) and 'description' in tool:
|
||||
tool_strings.append(f"Tool name: {tool.get('name', 'unknown')}\nTool description:\n{tool['description']}")
|
||||
else:
|
||||
tool_strings.append(f"Tool name: {tool.get('name', 'unknown') if isinstance(tool, dict) else 'unknown'}")
|
||||
tool_strings.append(tool.description)
|
||||
|
||||
return "\n".join(tool_strings)
|
||||
|
||||
|
||||
@@ -1,9 +1,10 @@
|
||||
import os
|
||||
import re
|
||||
import portalocker
|
||||
from chromadb import PersistentClient
|
||||
from hashlib import md5
|
||||
from typing import Optional
|
||||
|
||||
from crewai.utilities.paths import db_storage_path
|
||||
|
||||
MIN_COLLECTION_LENGTH = 3
|
||||
MAX_COLLECTION_LENGTH = 63
|
||||
@@ -27,7 +28,9 @@ def is_ipv4_pattern(name: str) -> bool:
|
||||
return bool(IPV4_PATTERN.match(name))
|
||||
|
||||
|
||||
def sanitize_collection_name(name: Optional[str], max_collection_length: int = MAX_COLLECTION_LENGTH) -> str:
|
||||
def sanitize_collection_name(
|
||||
name: Optional[str], max_collection_length: int = MAX_COLLECTION_LENGTH
|
||||
) -> str:
|
||||
"""
|
||||
Sanitize a collection name to meet ChromaDB requirements:
|
||||
1. 3-63 characters long
|
||||
@@ -72,7 +75,8 @@ def create_persistent_client(path: str, **kwargs):
|
||||
concurrent creations. Works for both multi-threads and multi-processes
|
||||
environments.
|
||||
"""
|
||||
lockfile = f"chromadb-{md5(path.encode(), usedforsecurity=False).hexdigest()}.lock"
|
||||
lock_id = md5(path.encode(), usedforsecurity=False).hexdigest()
|
||||
lockfile = os.path.join(db_storage_path(), f"chromadb-{lock_id}.lock")
|
||||
with portalocker.Lock(lockfile):
|
||||
client = PersistentClient(path=path, **kwargs)
|
||||
|
||||
|
||||
@@ -40,6 +40,8 @@ class TraceBatch:
|
||||
class TraceBatchManager:
|
||||
"""Single responsibility: Manage batches and event buffering"""
|
||||
|
||||
is_current_batch_ephemeral: bool = False
|
||||
|
||||
def __init__(self):
|
||||
try:
|
||||
self.plus_api = PlusAPI(api_key=get_auth_token())
|
||||
@@ -62,6 +64,7 @@ class TraceBatchManager:
|
||||
user_context=user_context, execution_metadata=execution_metadata
|
||||
)
|
||||
self.event_buffer.clear()
|
||||
self.is_current_batch_ephemeral = use_ephemeral
|
||||
|
||||
self.record_start_time("execution")
|
||||
self._initialize_backend_batch(user_context, execution_metadata, use_ephemeral)
|
||||
@@ -136,7 +139,7 @@ class TraceBatchManager:
|
||||
"""Add event to buffer"""
|
||||
self.event_buffer.append(trace_event)
|
||||
|
||||
def _send_events_to_backend(self, ephemeral: bool = True):
|
||||
def _send_events_to_backend(self):
|
||||
"""Send buffered events to backend"""
|
||||
if not self.plus_api or not self.trace_batch_id or not self.event_buffer:
|
||||
return
|
||||
@@ -156,7 +159,7 @@ class TraceBatchManager:
|
||||
|
||||
response = (
|
||||
self.plus_api.send_ephemeral_trace_events(self.trace_batch_id, payload)
|
||||
if ephemeral
|
||||
if self.is_current_batch_ephemeral
|
||||
else self.plus_api.send_trace_events(self.trace_batch_id, payload)
|
||||
)
|
||||
|
||||
@@ -170,15 +173,14 @@ class TraceBatchManager:
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error sending events to backend: {str(e)}")
|
||||
|
||||
def finalize_batch(self, ephemeral: bool = True) -> Optional[TraceBatch]:
|
||||
def finalize_batch(self) -> Optional[TraceBatch]:
|
||||
"""Finalize batch and return it for sending"""
|
||||
if not self.current_batch:
|
||||
return None
|
||||
|
||||
if self.event_buffer:
|
||||
self._send_events_to_backend(ephemeral)
|
||||
|
||||
self._finalize_backend_batch(ephemeral)
|
||||
self._send_events_to_backend()
|
||||
self._finalize_backend_batch()
|
||||
|
||||
self.current_batch.events = self.event_buffer.copy()
|
||||
|
||||
@@ -187,12 +189,13 @@ class TraceBatchManager:
|
||||
self.current_batch = None
|
||||
self.event_buffer.clear()
|
||||
self.trace_batch_id = None
|
||||
self.is_current_batch_ephemeral = False
|
||||
|
||||
self._cleanup_batch_data()
|
||||
|
||||
return finalized_batch
|
||||
|
||||
def _finalize_backend_batch(self, ephemeral: bool = True):
|
||||
def _finalize_backend_batch(self):
|
||||
"""Send batch finalization to backend"""
|
||||
if not self.plus_api or not self.trace_batch_id:
|
||||
return
|
||||
@@ -210,7 +213,7 @@ class TraceBatchManager:
|
||||
self.plus_api.finalize_ephemeral_trace_batch(
|
||||
self.trace_batch_id, payload
|
||||
)
|
||||
if ephemeral
|
||||
if self.is_current_batch_ephemeral
|
||||
else self.plus_api.finalize_trace_batch(self.trace_batch_id, payload)
|
||||
)
|
||||
|
||||
@@ -219,7 +222,7 @@ class TraceBatchManager:
|
||||
console = Console()
|
||||
return_link = (
|
||||
f"{CREWAI_BASE_URL}/crewai_plus/trace_batches/{self.trace_batch_id}"
|
||||
if not ephemeral and access_code
|
||||
if not self.is_current_batch_ephemeral and access_code is None
|
||||
else f"{CREWAI_BASE_URL}/crewai_plus/ephemeral_trace_batches/{self.trace_batch_id}?access_code={access_code}"
|
||||
)
|
||||
panel = Panel(
|
||||
|
||||
@@ -166,7 +166,7 @@ class TraceCollectionListener(BaseEventListener):
|
||||
@event_bus.on(CrewKickoffCompletedEvent)
|
||||
def on_crew_completed(source, event):
|
||||
self._handle_trace_event("crew_kickoff_completed", source, event)
|
||||
self.batch_manager.finalize_batch(ephemeral=True)
|
||||
self.batch_manager.finalize_batch()
|
||||
|
||||
@event_bus.on(CrewKickoffFailedEvent)
|
||||
def on_crew_failed(source, event):
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import Any, Dict, List, Optional, Union
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from crewai.agents.parser import AgentAction
|
||||
from crewai.security import Fingerprint
|
||||
@@ -10,7 +10,7 @@ from crewai.utilities.i18n import I18N
|
||||
|
||||
def execute_tool_and_check_finality(
|
||||
agent_action: AgentAction,
|
||||
tools: List[Union[CrewStructuredTool, dict]],
|
||||
tools: List[CrewStructuredTool],
|
||||
i18n: I18N,
|
||||
agent_key: Optional[str] = None,
|
||||
agent_role: Optional[str] = None,
|
||||
@@ -37,8 +37,7 @@ def execute_tool_and_check_finality(
|
||||
ToolResult containing the execution result and whether it should be treated as a final answer
|
||||
"""
|
||||
try:
|
||||
executable_tools = [tool for tool in tools if hasattr(tool, 'name') and hasattr(tool, 'result_as_answer')]
|
||||
tool_name_to_tool_map = {tool.name: tool for tool in executable_tools}
|
||||
tool_name_to_tool_map = {tool.name: tool for tool in tools}
|
||||
|
||||
if agent_key and agent_role and agent:
|
||||
fingerprint_context = fingerprint_context or {}
|
||||
@@ -53,12 +52,10 @@ def execute_tool_and_check_finality(
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to set fingerprint: {e}")
|
||||
|
||||
# Create tool usage instance - filter to only CrewStructuredTool instances for ToolUsage
|
||||
from typing import cast
|
||||
crew_structured_tools = [tool for tool in tools if hasattr(tool, 'name') and hasattr(tool, 'result_as_answer') and not isinstance(tool, dict)]
|
||||
# Create tool usage instance
|
||||
tool_usage = ToolUsage(
|
||||
tools_handler=tools_handler,
|
||||
tools=cast(List[CrewStructuredTool], crew_structured_tools),
|
||||
tools=tools,
|
||||
function_calling_llm=function_calling_llm,
|
||||
task=task,
|
||||
agent=agent,
|
||||
@@ -85,7 +82,7 @@ def execute_tool_and_check_finality(
|
||||
# Handle invalid tool name
|
||||
tool_result = i18n.errors("wrong_tool_name").format(
|
||||
tool=tool_calling.tool_name,
|
||||
tools=", ".join([tool.name.casefold() for tool in executable_tools]),
|
||||
tools=", ".join([tool.name.casefold() for tool in tools]),
|
||||
)
|
||||
return ToolResult(tool_result, False)
|
||||
|
||||
|
||||
@@ -21,7 +21,7 @@ from crewai.utilities import RPMController
|
||||
from crewai.utilities.errors import AgentRepositoryError
|
||||
from crewai.utilities.events import crewai_event_bus
|
||||
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent
|
||||
|
||||
from crewai.process import Process
|
||||
|
||||
def test_agent_llm_creation_with_env_vars():
|
||||
# Store original environment variables
|
||||
@@ -1209,6 +1209,181 @@ Thought:<|eot_id|>
|
||||
assert mock_format_prompt.return_value == expected_prompt
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_allow_crewai_trigger_context():
|
||||
from crewai import Crew
|
||||
|
||||
agent = Agent(
|
||||
role="test role",
|
||||
goal="test goal",
|
||||
backstory="test backstory"
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Analyze the data",
|
||||
expected_output="Analysis report",
|
||||
agent=agent,
|
||||
allow_crewai_trigger_context=True
|
||||
)
|
||||
crew = Crew(agents=[agent], tasks=[task])
|
||||
crew.kickoff({"crewai_trigger_payload": "Important context data"})
|
||||
|
||||
prompt = task.prompt()
|
||||
|
||||
assert "Analyze the data" in prompt
|
||||
assert "Trigger Payload: Important context data" in prompt
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_without_allow_crewai_trigger_context():
|
||||
from crewai import Crew
|
||||
|
||||
agent = Agent(
|
||||
role="test role",
|
||||
goal="test goal",
|
||||
backstory="test backstory"
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Analyze the data",
|
||||
expected_output="Analysis report",
|
||||
agent=agent,
|
||||
allow_crewai_trigger_context=False
|
||||
)
|
||||
|
||||
crew = Crew(agents=[agent], tasks=[task])
|
||||
crew.kickoff({"crewai_trigger_payload": "Important context data"})
|
||||
|
||||
prompt = task.prompt()
|
||||
|
||||
assert "Analyze the data" in prompt
|
||||
assert "Trigger Payload:" not in prompt
|
||||
assert "Important context data" not in prompt
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_allow_crewai_trigger_context_no_payload():
|
||||
from crewai import Crew
|
||||
|
||||
agent = Agent(
|
||||
role="test role",
|
||||
goal="test goal",
|
||||
backstory="test backstory"
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Analyze the data",
|
||||
expected_output="Analysis report",
|
||||
agent=agent,
|
||||
allow_crewai_trigger_context=True
|
||||
)
|
||||
|
||||
crew = Crew(agents=[agent], tasks=[task])
|
||||
crew.kickoff({"other_input": "other data"})
|
||||
|
||||
|
||||
prompt = task.prompt()
|
||||
|
||||
assert "Analyze the data" in prompt
|
||||
assert "Trigger Payload:" not in prompt
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_do_not_allow_crewai_trigger_context_for_first_task_hierarchical():
|
||||
from crewai import Crew
|
||||
|
||||
agent1 = Agent(role="First Agent", goal="First goal", backstory="First backstory")
|
||||
agent2 = Agent(role="Second Agent", goal="Second goal", backstory="Second backstory")
|
||||
|
||||
first_task = Task(
|
||||
description="Process initial data",
|
||||
expected_output="Initial analysis",
|
||||
agent=agent1,
|
||||
)
|
||||
|
||||
|
||||
crew = Crew(
|
||||
agents=[agent1, agent2],
|
||||
tasks=[first_task],
|
||||
process=Process.hierarchical,
|
||||
manager_llm="gpt-4o"
|
||||
)
|
||||
|
||||
crew.kickoff({"crewai_trigger_payload": "Initial context data"})
|
||||
|
||||
first_prompt = first_task.prompt()
|
||||
assert "Process initial data" in first_prompt
|
||||
assert "Trigger Payload: Initial context data" not in first_prompt
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_first_task_auto_inject_trigger():
|
||||
from crewai import Crew
|
||||
|
||||
agent1 = Agent(role="First Agent", goal="First goal", backstory="First backstory")
|
||||
agent2 = Agent(role="Second Agent", goal="Second goal", backstory="Second backstory")
|
||||
|
||||
first_task = Task(
|
||||
description="Process initial data",
|
||||
expected_output="Initial analysis",
|
||||
agent=agent1,
|
||||
)
|
||||
|
||||
second_task = Task(
|
||||
description="Process secondary data",
|
||||
expected_output="Secondary analysis",
|
||||
agent=agent2,
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[agent1, agent2],
|
||||
tasks=[first_task, second_task]
|
||||
)
|
||||
crew.kickoff({"crewai_trigger_payload": "Initial context data"})
|
||||
|
||||
first_prompt = first_task.prompt()
|
||||
assert "Process initial data" in first_prompt
|
||||
assert "Trigger Payload: Initial context data" in first_prompt
|
||||
|
||||
second_prompt = second_task.prompt()
|
||||
assert "Process secondary data" in second_prompt
|
||||
assert "Trigger Payload:" not in second_prompt
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_ensure_first_task_allow_crewai_trigger_context_is_false_does_not_inject():
|
||||
from crewai import Crew
|
||||
|
||||
agent1 = Agent(role="First Agent", goal="First goal", backstory="First backstory")
|
||||
agent2 = Agent(role="Second Agent", goal="Second goal", backstory="Second backstory")
|
||||
|
||||
first_task = Task(
|
||||
description="Process initial data",
|
||||
expected_output="Initial analysis",
|
||||
agent=agent1,
|
||||
allow_crewai_trigger_context=False
|
||||
)
|
||||
|
||||
second_task = Task(
|
||||
description="Process secondary data",
|
||||
expected_output="Secondary analysis",
|
||||
agent=agent2,
|
||||
allow_crewai_trigger_context=True
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[agent1, agent2],
|
||||
tasks=[first_task, second_task]
|
||||
)
|
||||
crew.kickoff({"crewai_trigger_payload": "Context data"})
|
||||
|
||||
first_prompt = first_task.prompt()
|
||||
assert "Trigger Payload: Context data" not in first_prompt
|
||||
|
||||
second_prompt = second_task.prompt()
|
||||
assert "Trigger Payload: Context data" in second_prompt
|
||||
|
||||
|
||||
|
||||
@patch("crewai.agent.CrewTrainingHandler")
|
||||
def test_agent_training_handler(crew_training_handler):
|
||||
task_prompt = "What is 1 + 1?"
|
||||
@@ -1896,7 +2071,7 @@ def test_agent_with_knowledge_sources_generate_search_query():
|
||||
assert "red" in result.raw.lower()
|
||||
|
||||
|
||||
@pytest.mark.vcr(record_mode='none', filter_headers=["authorization"])
|
||||
@pytest.mark.vcr(record_mode="none", filter_headers=["authorization"])
|
||||
def test_agent_with_knowledge_with_no_crewai_knowledge():
|
||||
mock_knowledge = MagicMock(spec=Knowledge)
|
||||
|
||||
@@ -1904,8 +2079,11 @@ def test_agent_with_knowledge_with_no_crewai_knowledge():
|
||||
role="Information Agent",
|
||||
goal="Provide information based on knowledge sources",
|
||||
backstory="You have access to specific knowledge sources.",
|
||||
llm=LLM(model="openrouter/openai/gpt-4o-mini",api_key=os.getenv('OPENROUTER_API_KEY')),
|
||||
knowledge=mock_knowledge
|
||||
llm=LLM(
|
||||
model="openrouter/openai/gpt-4o-mini",
|
||||
api_key=os.getenv("OPENROUTER_API_KEY"),
|
||||
),
|
||||
knowledge=mock_knowledge,
|
||||
)
|
||||
|
||||
# Create a task that requires the agent to use the knowledge
|
||||
@@ -1920,7 +2098,7 @@ def test_agent_with_knowledge_with_no_crewai_knowledge():
|
||||
mock_knowledge.query.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.vcr(record_mode='none', filter_headers=["authorization"])
|
||||
@pytest.mark.vcr(record_mode="none", filter_headers=["authorization"])
|
||||
def test_agent_with_only_crewai_knowledge():
|
||||
mock_knowledge = MagicMock(spec=Knowledge)
|
||||
|
||||
@@ -1928,33 +2106,10 @@ def test_agent_with_only_crewai_knowledge():
|
||||
role="Information Agent",
|
||||
goal="Provide information based on knowledge sources",
|
||||
backstory="You have access to specific knowledge sources.",
|
||||
llm=LLM(model="openrouter/openai/gpt-4o-mini",api_key=os.getenv('OPENROUTER_API_KEY'))
|
||||
)
|
||||
|
||||
# Create a task that requires the agent to use the knowledge
|
||||
task = Task(
|
||||
description="What is Vidit's favorite color?",
|
||||
expected_output="Vidit's favorclearite color.",
|
||||
agent=agent
|
||||
)
|
||||
|
||||
crew = Crew(agents=[agent], tasks=[task],knowledge=mock_knowledge)
|
||||
crew.kickoff()
|
||||
mock_knowledge.query.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.vcr(record_mode='none', filter_headers=["authorization"])
|
||||
def test_agent_knowledege_with_crewai_knowledge():
|
||||
crew_knowledge = MagicMock(spec=Knowledge)
|
||||
agent_knowledge = MagicMock(spec=Knowledge)
|
||||
|
||||
|
||||
agent = Agent(
|
||||
role="Information Agent",
|
||||
goal="Provide information based on knowledge sources",
|
||||
backstory="You have access to specific knowledge sources.",
|
||||
llm=LLM(model="openrouter/openai/gpt-4o-mini",api_key=os.getenv('OPENROUTER_API_KEY')),
|
||||
knowledge=agent_knowledge
|
||||
llm=LLM(
|
||||
model="openrouter/openai/gpt-4o-mini",
|
||||
api_key=os.getenv("OPENROUTER_API_KEY"),
|
||||
),
|
||||
)
|
||||
|
||||
# Create a task that requires the agent to use the knowledge
|
||||
@@ -1964,7 +2119,35 @@ def test_agent_knowledege_with_crewai_knowledge():
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[agent],tasks=[task],knowledge=crew_knowledge)
|
||||
crew = Crew(agents=[agent], tasks=[task], knowledge=mock_knowledge)
|
||||
crew.kickoff()
|
||||
mock_knowledge.query.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.vcr(record_mode="none", filter_headers=["authorization"])
|
||||
def test_agent_knowledege_with_crewai_knowledge():
|
||||
crew_knowledge = MagicMock(spec=Knowledge)
|
||||
agent_knowledge = MagicMock(spec=Knowledge)
|
||||
|
||||
agent = Agent(
|
||||
role="Information Agent",
|
||||
goal="Provide information based on knowledge sources",
|
||||
backstory="You have access to specific knowledge sources.",
|
||||
llm=LLM(
|
||||
model="openrouter/openai/gpt-4o-mini",
|
||||
api_key=os.getenv("OPENROUTER_API_KEY"),
|
||||
),
|
||||
knowledge=agent_knowledge,
|
||||
)
|
||||
|
||||
# Create a task that requires the agent to use the knowledge
|
||||
task = Task(
|
||||
description="What is Vidit's favorite color?",
|
||||
expected_output="Vidit's favorclearite color.",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
crew = Crew(agents=[agent], tasks=[task], knowledge=crew_knowledge)
|
||||
crew.kickoff()
|
||||
agent_knowledge.query.assert_called_once()
|
||||
crew_knowledge.query.assert_called_once()
|
||||
@@ -2164,7 +2347,12 @@ def mock_get_auth_token():
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
|
||||
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
|
||||
from crewai_tools import SerperDevTool, XMLSearchTool, CSVSearchTool, EnterpriseActionTool
|
||||
from crewai_tools import (
|
||||
SerperDevTool,
|
||||
XMLSearchTool,
|
||||
CSVSearchTool,
|
||||
EnterpriseActionTool,
|
||||
)
|
||||
|
||||
mock_get_response = MagicMock()
|
||||
mock_get_response.status_code = 200
|
||||
@@ -2173,12 +2361,23 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
|
||||
"goal": "test goal",
|
||||
"backstory": "test backstory",
|
||||
"tools": [
|
||||
{"module": "crewai_tools", "name": "SerperDevTool", "init_params": {"n_results": 30}},
|
||||
{"module": "crewai_tools", "name": "XMLSearchTool", "init_params": {"summarize": True}},
|
||||
{
|
||||
"module": "crewai_tools",
|
||||
"name": "SerperDevTool",
|
||||
"init_params": {"n_results": "30"},
|
||||
},
|
||||
{
|
||||
"module": "crewai_tools",
|
||||
"name": "XMLSearchTool",
|
||||
"init_params": {"summarize": "true"},
|
||||
},
|
||||
{"module": "crewai_tools", "name": "CSVSearchTool", "init_params": {}},
|
||||
|
||||
# using a tools that returns a list of BaseTools
|
||||
{"module": "crewai_tools", "name": "CrewaiEnterpriseTools", "init_params": {"actions_list": [], "enterprise_token": "test_key"}},
|
||||
{
|
||||
"module": "crewai_tools",
|
||||
"name": "CrewaiEnterpriseTools",
|
||||
"init_params": {"actions_list": [], "enterprise_token": "test_key"},
|
||||
},
|
||||
],
|
||||
}
|
||||
mock_get_agent.return_value = mock_get_response
|
||||
@@ -2221,7 +2420,9 @@ def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth
|
||||
"role": "test role",
|
||||
"goal": "test goal",
|
||||
"backstory": "test backstory",
|
||||
"tools": [{"name": "SerperDevTool", "module": "crewai_tools", "init_params": {}}],
|
||||
"tools": [
|
||||
{"name": "SerperDevTool", "module": "crewai_tools", "init_params": {}}
|
||||
],
|
||||
}
|
||||
mock_get_agent.return_value = mock_get_response
|
||||
agent = Agent(from_repository="test_agent", role="Custom Role")
|
||||
|
||||
@@ -0,0 +1,126 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Say hello to
|
||||
the world\n\nThis is the expected criteria for your final answer: hello world\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
|
||||
["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '825'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJdi9swEHz3r1j0HBc7ZydXvx1HC0evhT6UUtrDKNLa1lXWqpJ8aTjy
|
||||
34vsXOz0A/pi8M7OaGZ3nxMApiSrgImOB9Fbnd4Why/v79HeePeD37/Zfdx8evf5+rY4fNjdPbJV
|
||||
ZNDuEUV4Yb0S1FuNQZGZYOGQB4yq+bYsr7J1nhcj0JNEHWmtDWlBaa+MStfZukizbZpfn9gdKYGe
|
||||
VfA1AQB4Hr/Rp5H4k1WQrV4qPXrPW2TVuQmAOdKxwrj3ygduAlvNoCAT0IzW78DQHgQ30KonBA5t
|
||||
tA3c+D06gG/mrTJcw834X0GHWhPsyWm5FHTYDJ7HUGbQegFwYyjwOJQxysMJOZ7Na2qto53/jcoa
|
||||
ZZTvaofck4lGfSDLRvSYADyMQxoucjPrqLehDvQdx+fycjvpsXk3C/TqBAYKXC/q29NoL/VqiYEr
|
||||
7RdjZoKLDuVMnXfCB6loASSL1H+6+Zv2lFyZ9n/kZ0AItAFlbR1KJS4Tz20O4+n+q+085dEw8+ie
|
||||
lMA6KHRxExIbPujpoJg/+IB93SjTorNOTVfV2LrcZLzZYFm+Zskx+QUAAP//AwB1vYZ+YwMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 96fc9f29dea3cf1f-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 15 Aug 2025 23:55:15 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=oA9oTa3cE0ZaEUDRf0hCpnarSAQKzrVUhl6qDS4j09w-1755302115-1.0.1.1-gUUDl4ZqvBQkg7244DTwOmSiDUT2z_AiQu0P1xUaABjaufSpZuIlI5G0H7OSnW.ldypvpxjj45NGWesJ62M_2U7r20tHz_gMmDFw6D5ZiNc;
|
||||
path=/; expires=Sat, 16-Aug-25 00:25:15 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=ICenEGMmOE5jaOjwD30bAOwrF8.XRbSIKTBl1EyWs0o-1755302115700-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '735'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '753'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999830'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_212fde9d945a462ba0d89ea856131dce
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,123 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Say hello to
|
||||
the world\n\nThis is the expected criteria for your final answer: hello world\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
|
||||
["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '825'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=oA9oTa3cE0ZaEUDRf0hCpnarSAQKzrVUhl6qDS4j09w-1755302115-1.0.1.1-gUUDl4ZqvBQkg7244DTwOmSiDUT2z_AiQu0P1xUaABjaufSpZuIlI5G0H7OSnW.ldypvpxjj45NGWesJ62M_2U7r20tHz_gMmDFw6D5ZiNc;
|
||||
_cfuvid=ICenEGMmOE5jaOjwD30bAOwrF8.XRbSIKTBl1EyWs0o-1755302115700-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLLbtswELzrKxY8W4VkW3ajW1GgQA7JJUGLog0EmlxJrCkuQ1J2g8D/
|
||||
XlByLLkPoBcB2tkZzuzuawLAlGQlMNHyIDqr04/rl6/3z+6hu8vq/cOX5936cRPE3i8Px88rtogM
|
||||
2v1AEd5Y7wR1VmNQZEZYOOQBo2q+LYpVtszzYgA6kqgjrbEhXVPaKaPSZbZcp9k2zd+f2S0pgZ6V
|
||||
8C0BAHgdvtGnkfiTlZAt3iodes8bZOWlCYA50rHCuPfKB24CW0ygIBPQDNZvwdARBDfQqAMChyba
|
||||
Bm78ER3Ad/NJGa7hw/BfQotaExzJaTkXdFj3nsdQptd6BnBjKPA4lCHK0xk5Xcxraqyjnf+Nympl
|
||||
lG8rh9yTiUZ9IMsG9JQAPA1D6q9yM+uos6EKtMfhubzYjnps2s0MXZ3BQIHrWX17Hu21XiUxcKX9
|
||||
bMxMcNGinKjTTngvFc2AZJb6Tzd/0x6TK9P8j/wECIE2oKysQ6nEdeKpzWE83X+1XaY8GGYe3UEJ
|
||||
rIJCFzchsea9Hg+K+RcfsKtqZRp01qnxqmpbFZuM1xssihuWnJJfAAAA//8DAFSowWRjAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 96fc9f301bf7cf1f-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 15 Aug 2025 23:55:16 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '685'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '711'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_3f0ec42447374a76a22a4cdb9f336279
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,501 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: First Agent\nThe input to this tool should be the coworker, the task
|
||||
you want them to do, and ALL necessary context to execute the task, they know
|
||||
nothing about the task, so share absolutely everything you know, don''t reference
|
||||
things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: First Agent\nThe input to this tool should be the
|
||||
coworker, the question you have for them, and ALL necessary context to ask the
|
||||
question properly, they know nothing about the question, so share absolutely
|
||||
everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
|
||||
Use the following format in your response:\n\n```\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Delegate
|
||||
work to coworker, Ask question to coworker], just the name, exactly as it''s
|
||||
written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Process initial data\n\nThis is the expected criteria for your
|
||||
final answer: Initial analysis\nyou MUST return the actual complete content
|
||||
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
|
||||
use the tools available and give your best Final Answer, your job depends on
|
||||
it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2921'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFTJbtswEL37KwY824aT2E7qWxcUSE9Fa6CHujDG5EiahhoK5MiOG+Tf
|
||||
C0re0qZAL4LAN2+WN8vTAMCwMwswtkK1deNH7+dXd5+0lTl/mZYPzd1mOvHNuzhffvt6szTDzAib
|
||||
n2T1yBrbUDeelIP0sI2EStnr1e1sNptNbmZvOqAOjnymlY2OpmF0PbmejiZ3o8n8QKwCW0pmAd8H
|
||||
AABP3TenKI4ezQImw+NLTSlhSWZxMgIwMfj8YjAlToqiZngGbRAl6bJeVqEtK13APQiRAw3gyFOJ
|
||||
SqAVgWJ6gFBAE4OllFjK7pmFldGDQ8XMyW8fOSaFtyWJ5ieS1EaCHUGFWwIErULMwQDFAVrbxhwE
|
||||
Bf0+cRrDPezY+xxpy66LXsOOtQL0vgsglFPAuAdHiuxTDnNQPNtz6tOloiCrvCW/H69kJW9tbsgC
|
||||
PhwL24X40HPzH8WjCdxL0+oCnlYmO1qZBazM577yFyWvzBBWvYyP2pstj2KxbIPfUuor+/WqYon0
|
||||
JEwkS7wlN4ZlroDF+tZRAusJ5cjOrCFYVCpD5M4pKxQhnvQbAjsS5WKfQZQ9aCRxCUKEBlUpShp2
|
||||
0qe2rvHgJPsuWBxLmXICBGVAD9xJe+hbTiRCK45inqRsmydiV6GecoPsI6eX+u7K/lQwS+Ky0gSa
|
||||
CRYFNgQu4k6giKEG1vFRzkM3Oj0vpmllni+nN1LRJszLI633FwCKBMXcyG5vfhyQ59Om+FA2MWzS
|
||||
H1RTsHCq1pEwBclbkTQ0pkOfBwA/uo1sXyyZaWKoG11reKAu3PzquvdnzjfgjF7dTA+oBkV/Bm6n
|
||||
8+ErDteHCb9YamPRVuTO1PMFwNZxuAAGF2X/nc5rvvvSWcr/cX8GrKVGya2bSI7ty5LPZpHyjfyX
|
||||
2UnmLmGTKG7Z0lqZYm6FowJb358vk/ZJqV4XLCXFJnJ/w4pmPZ1vimJCE3tnBs+D3wAAAP//AwBY
|
||||
9uEVzAUAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144bd22eb41abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:52:42 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=dCMuu_IT8i5kyJB9_ugQhudGYphCvJlfXMZwJgOuB8Y-1755550362-1.0.1.1-VyrRrYT2JzvUYUjT9T5uCe31rJR0Q_FicsTyAJZYdj0j8anm6ZdVD7QhtUW0OjVK_8F82E4cVt8Uf5shMfmUm3Gf.EMuBA1AgSAUrzsHEy4;
|
||||
path=/; expires=Mon, 18-Aug-25 21:22:42 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=YeODa6MF5ug3OZUV6ob1dSrBKCM8BXbKkS77TIihYoE-1755550362828-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '3236'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '3253'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '29999308'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999308'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 1ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_08aa9de2797d4fee93003bdc7fc19156
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are First Agent. First
|
||||
backstory\nYour personal goal is: First goal\nTo give my best complete final
|
||||
answer to the task respond using the exact following format:\n\nThought: I now
|
||||
can give a great answer\nFinal Answer: Your final answer must be the great and
|
||||
the most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
Process initial data\n\nThis is the expected criteria for your final answer:
|
||||
Your best answer to your coworker asking you this, accounting for the context
|
||||
shared.\nyou MUST return the actual complete content as the final answer, not
|
||||
a summary.\n\nThis is the context you''re working with:\nThe task involves analyzing
|
||||
the initial data set we have received. This includes cleaning the data, categorizing
|
||||
it for analysis, identifying any trends or patterns, and summarizing the findings.
|
||||
The goal is to have a clear understanding of what the data indicates and any
|
||||
initial insights that can be drawn from it.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1262'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=dCMuu_IT8i5kyJB9_ugQhudGYphCvJlfXMZwJgOuB8Y-1755550362-1.0.1.1-VyrRrYT2JzvUYUjT9T5uCe31rJR0Q_FicsTyAJZYdj0j8anm6ZdVD7QhtUW0OjVK_8F82E4cVt8Uf5shMfmUm3Gf.EMuBA1AgSAUrzsHEy4;
|
||||
_cfuvid=YeODa6MF5ug3OZUV6ob1dSrBKCM8BXbKkS77TIihYoE-1755550362828-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFbfbxRHDH7PX+HuE6DLKSEkQN6AChUhoUpFULVBkTPj3XUzO96MZ+9y
|
||||
RfzvlWfubi+USn3Jj/HaY3+f/Y2/HgE07JtLaFyP2Q1jOH5zcfriwx+v36RP3affN+9buetUT27o
|
||||
4uMvp5+bhXnIzV/k8s5r6WQYA2WWWM0uEWayqKfPz8/Pz0/OLs6KYRBPwdy6MR8/k+OBIx8/PXn6
|
||||
7Pjk+fHpi613L+xIm0v48wgA4Gv5aXlGT/fNJZwsdicDqWJHzeX+I4AmSbCTBlVZM8bcLGajk5gp
|
||||
ltTfQZQ1OIzQ8YoAobO0AaOuKQFcxbccMcCr8v8lfBQYkzhShdwTcOTMGMBjRlDKQG1LLvOKwmYB
|
||||
a4I1hwCthCBrUFpRwgC3tAHNNCpkAYo6JbJPXSCMC3CYqZPEf9MCMGLY1D886DQMaOcgU4KWo+fY
|
||||
KRjsiXqKWm5dXsWreLqEJ09+tpzeWFCO3ZMnl3AVAeAY3nLSPCeXaMW0LtVYFVZEKwkGVuXYLYCj
|
||||
k2gYUswLkASUkkSSSYFiTky63EX+vA3ZY/SBdiFghWEihZsNEOeeEvAwTtksuacBblDJg0TQKSWZ
|
||||
SlkVUEmQaJCVHSRykrzCuqdEEMkowGTllqtfec/WehgOgTfiPSZvoO1wdRhghYnxJtAhA/sq3QYe
|
||||
0bJbLqrFLscQIIhDuwEiDqSAiUBHCoF8wU5xIFjj5nEh4OlMwI7O4nxAwwe6P2BhZn3PBHDMAokC
|
||||
rTBmUOoGitn6DnN1QvalF0qbKM9EfOxZgeNKwooUuiTTuAd1FLYoe9SdDIP96jGhy5RYMztdgE6u
|
||||
B1TwNEiXcOzLaeaBYKTE4rV0A8ZNaeiRUitpwOhsKjw7zJIUHr3/9Z0+tjINsFbcVFpCYoHpzGD6
|
||||
mCj60uG/Ys6UIrzzFDO37L7H7DPnvuBTZoWq1wydLxXOoG5zAgRPGTkUhwqVEc/1mg1ky0BLsLGm
|
||||
oMtDJEuwLZxQC9CMuSCFAbJIqN4r1gnDlutyxxSdrMj6ONTDnkcLmHuOe6aX+8kJIreAucKsZNO1
|
||||
T3kBTtIuDjihtmXH1hJVH4wJ5S4W4GIGmXJgStuGGXADie4mNhqmVOcwrkgzdyViIeSZEfLbVmis
|
||||
zrdbmXmgH99NmSkQB9POKlEbkPahRq17dv0ORhcmT3AjuYc7Q8uQXFnTKHd9rkDeTRjzzjJQTuwe
|
||||
UrK7SXuZggeKDkedAuY6P9aRW1a3LDP5RYEoSrbR3zNdweNhDNt+U0s/96S0L2D5ncBhUAEvbrKJ
|
||||
LFEDDyXd2b1OWxXuriPNNgGoRVx3BCRSwuR64PaBpF3F1xvYvadVJ5XqmzHDHiWDxLCBHsvDZTOR
|
||||
YIqeUtG9MmctrI39A00po2lyPOVax5hkxZ4AXRFQw2bPRWmd8jhO9omRGuk+11SWlYoHD9A8YFWA
|
||||
KdmZWm9IYA+tCXzt4HarHriXbqD7MUiqZkngybGyxOMBb62a+tpaT2gdKNNeKVmNSWwZMQW8iu9a
|
||||
2Mi0xSVu4G6yNi/UWP7k9wS4gGkvNKaG9vmIKVcSWXev/QLGQKgEgTIMBLdR1j8drhSJ2knR1po4
|
||||
hXBgwGgdV663ZebL1vJtv74E6cYkN/qda9NyZO2vrWkk2qqiWcamWL8dAXwpa9L0YPNpxiTDmK+z
|
||||
3FK57unZ8xqvmbez2Xr2cmfNkjHMhouzZ4sfBLyuMqoHm1bj0PXkZ9d5LcPJsxwYjg7K/nc6P4pd
|
||||
S+fY/Z/ws8E5GjP56zGRZ/ew5PmzRNYw//XZHuaScGN9zI6uM1MyKjy1OIW6Uza60UzDdcuxozQm
|
||||
rotlO16fX5xge0Hn5y+bo29H/wAAAP//AwCE+a2iZgsAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144be7eaa81abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:52:47 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '4424'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4473'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999717'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999717'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_5bf23819c1214732aa87a90207bc0d31
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: First Agent\nThe input to this tool should be the coworker, the task
|
||||
you want them to do, and ALL necessary context to execute the task, they know
|
||||
nothing about the task, so share absolutely everything you know, don''t reference
|
||||
things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: First Agent\nThe input to this tool should be the
|
||||
coworker, the question you have for them, and ALL necessary context to ask the
|
||||
question properly, they know nothing about the question, so share absolutely
|
||||
everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
|
||||
Use the following format in your response:\n\n```\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Delegate
|
||||
work to coworker, Ask question to coworker], just the name, exactly as it''s
|
||||
written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Process initial data\n\nThis is the expected criteria for your
|
||||
final answer: Initial analysis\nyou MUST return the actual complete content
|
||||
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
|
||||
use the tools available and give your best Final Answer, your job depends on
|
||||
it!\n\nThought:"}, {"role": "assistant", "content": "Thought: I need to delegate
|
||||
the task of processing the initial data to the First Agent to ensure we have
|
||||
a thorough and accurate analysis. I will provide them with all the necessary
|
||||
details to complete this task effectively.\n\nAction: Delegate work to coworker\nAction
|
||||
Input: {\"task\": \"Process initial data\", \"context\": \"The task involves
|
||||
analyzing the initial data set we have received. This includes cleaning the
|
||||
data, categorizing it for analysis, identifying any trends or patterns, and
|
||||
summarizing the findings. The goal is to have a clear understanding of what
|
||||
the data indicates and any initial insights that can be drawn from it.\", \"coworker\":
|
||||
\"First Agent\"}\nObservation: To process the initial data set effectively,
|
||||
we will follow several key steps to ensure we clean, categorize, analyze, and
|
||||
summarize our findings comprehensively.\n\n1. **Data Cleaning**: \n - First,
|
||||
we will review the dataset for missing, inconsistent, or erroneous entries.
|
||||
\n - We will handle missing values by either imputing them based on surrounding
|
||||
data or removing records where necessary.\n - Additionally, we will standardize
|
||||
categorical variables to ensure consistency (e.g., ensuring all location names
|
||||
are spelled the same way).\n\n2. **Data Categorization**: \n - Next, we will
|
||||
categorize the data into relevant segments that will aid our analysis. \n -
|
||||
This involves grouping data points based on common characteristics, such as
|
||||
demographics, time periods, or any key performance indicators (KPIs) we are
|
||||
focusing on.\n\n3. **Trend and Pattern Identification**: \n - With the cleaned
|
||||
and categorized data, we will perform a detailed analysis to identify trends
|
||||
and patterns.\n - This will involve using statistical tools and visualizations
|
||||
to uncover relationships within the data. We will look at time series analysis,
|
||||
correlation coefficients, and any significant outliers that may require further
|
||||
investigation.\n\n4. **Summarizing Findings**: \n - Finally, we will compile
|
||||
a summary of our findings which will include both qualitative insights and quantitative
|
||||
metrics.\n - This summary should encapsulate the key trends identified, any
|
||||
notable patterns, and implications of these findings.\n - We will also document
|
||||
any limitations of the data and suggest areas for further research if necessary.\n\nBy
|
||||
completing these steps, we will not only have a clear understanding of what
|
||||
the data indicates but also provide actionable insights that can guide our next
|
||||
steps. This comprehensive analysis will serve as a solid foundation for any
|
||||
additional exploration or decision-making initiatives related to our project.
|
||||
\n\nIf you have any questions or need further clarification on any part of this
|
||||
process, please let me know!"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '5714'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=dCMuu_IT8i5kyJB9_ugQhudGYphCvJlfXMZwJgOuB8Y-1755550362-1.0.1.1-VyrRrYT2JzvUYUjT9T5uCe31rJR0Q_FicsTyAJZYdj0j8anm6ZdVD7QhtUW0OjVK_8F82E4cVt8Uf5shMfmUm3Gf.EMuBA1AgSAUrzsHEy4;
|
||||
_cfuvid=YeODa6MF5ug3OZUV6ob1dSrBKCM8BXbKkS77TIihYoE-1755550362828-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFbbbhw3DH33VxDzlATrxTq+pX5LUwQJChRBayRA68ChJc4Ma404ETW7
|
||||
3gT594LS3pymQF/2IooUeQ55pK9HAA375goa12N2wxiOX12cvPj9t/dv4sOf79z1m8vFKvbdl+fy
|
||||
YfXlvTQz85C7v8nlrdfcyTAGyiyxml0izGRRTy7Pz8/PF6cXl8UwiKdgbt2Yj8/k+Pni+dnx4sXx
|
||||
4mLj2As70uYK/joCAPhaPi3F6OmhuYLFbLsykCp21FztNgE0SYKtNKjKmjHmZrY3OomZYsn606dP
|
||||
N/G6l6nr8xW8hSgruLeP3BO0HDEARl1Ruomvy7+X5d8VXAuMSRyplq0cOTMG8JgRlDJQ25LLvKSw
|
||||
nsGKYMUhQCshyAqUlpQwwD2tQTONClmAok6JbKsLhHEGDjN1kvgLzQAjhnX94UGnYUBbB5mSJek5
|
||||
dgqGfaKeopZT5zfxJp7M4dmzXyynVxaUY/fs2RXcRAA4htecNO+TS7RkqoVbFVZEKwkGVuXYzYCj
|
||||
k2hoUswzkASUkkSSSYFiTkw630b+sAnZY/SBtiFgiWEihbs1EOeeEvAwTtksuacB7lDJg0TQKSWZ
|
||||
SlkVUEmQaJClLSRykrzCqqdEEMkowGTllqNfes/WfxgOgbcW8Ji8gbbF1WGAJSbGu0CHDOyqdGt4
|
||||
QvNuPqsWOxxDgCAO7QSIOJACJgIdKQTyBTvFgWCF66eFgOd7ArZ0FucDGn6jhwMW9qzvmACOWSBR
|
||||
oCXGDErdQDFb32GuTsi+9EJpE+U9Edc9K3BcSliSQpdkGnegjsIWZYe6k2Gwrx4TukyJNbPTGejk
|
||||
ekAFT4N0Cce+rGYeCEZKLF5LN2Bcl4YeKbWSBozOpsKzwyxJ4cmv797qUyvTAGvFTaUlJBaYTg2m
|
||||
60TRlw5/hzlTivDWU8zcsvsesw+c+4JPmRWqXnvofKlwD+omJ0DwlJFDcahQGfFcj1lDtgy0BBtr
|
||||
Cjo/RLIE28AJtQDNmAtSGCCLhOq9ZJ0wbLguZ0zRyZKsj0Nd7Hm0gLnnuGN6vpucIHIPmCvMSjZd
|
||||
u5Rn4CRt44ATalt2bC2x0QfuYgEtZpApB6a0aZYB15Do88RGwZTqDMYlaeauRCtknBkZf2xExmp8
|
||||
vZGYR9rx3YSZ+nAgwI08rUHaR/pUBCRMZajvJPfw2RAy9JbWKMpdnyt4nyeMeWsZKCd2j2nYnqC9
|
||||
TMEDRYejTgFznRnrwg2TG2aZ/Kw0aJRs475jtwLGwxg2PaaWdu5JaZf4/DtRw6ACXtxkU1iiBh5K
|
||||
unv3OmFVrLuONFvXoxZB3QKfSAmT64HbRzJ2E39ew/YirdqoVO+JPdxRMkgMa+hxaaDbHCSYoqdU
|
||||
tK7MVgsrY/1AR8o4mgRPudYxJlmyJ0BXRNOw2XFRWsZhhG6yLUZmpIdcU5lXKh5dOvuhqqJLydbU
|
||||
ekICe2hN1GvXthvFwJ1cAz2MQVI1SwJPjpUlHg94b9XUG9Z6QusQmd5KyWpMYq8QU71ynx/e9Yna
|
||||
SdGeGnEK4cCA0ZqhkGavjI8by7fduyJINya50+9cm5Yja39rfEq0N4RmGZti/XYE8LG8X6ZHT5Jm
|
||||
TDKM+TbLPZXjTk5OX9SAzf7JtDefXv60sWbJGA78zk8uZz8IeVt1TQ8eQY1D15Pf++5fTDh5lgPD
|
||||
0UHh/87nR7Fr8Ry7/xN+b3COxkz+dkzk2T2ueb8tkbH5X9t2QJeEG2sydnSbmZKR4anFKdTnXqNr
|
||||
zTTcthw7SmPi+uZrx9uzi7u2XdDCvWiOvh39AwAA//8DAIF0yI38CgAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144c04e89a1abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:52:50 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '2974'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '2999'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '29998628'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998627'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 2ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 2ms
|
||||
x-request-id:
|
||||
- req_c0cd67fc9b9342a7bd649b1458724745
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,296 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are First Agent. First
|
||||
backstory\nYour personal goal is: First goal\nTo give my best complete final
|
||||
answer to the task respond using the exact following format:\n\nThought: I now
|
||||
can give a great answer\nFinal Answer: Your final answer must be the great and
|
||||
the most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
Process initial data\n\nThis is the expected criteria for your final answer:
|
||||
Initial analysis\nyou MUST return the actual complete content as the final answer,
|
||||
not a summary.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
|
||||
"gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '831'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=PslIVDqXn7jd_NXBGdSU5kVFvzwCchKPRVe9LpQVdQA-1736351415895-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFVNbxw3DL37VxBzyWV3YTtZ2/EtLerCQNH2kCAFmmDBlTgzjDXURKR2
|
||||
vQny3wtpNp7Nx6EXr0eUqPceH6nPZwAN++YWGtejuWEMy1+vLm6Gi93rF9f4x+/h7V26/zi8fPv3
|
||||
xT/34c1vzaKciNsP5OzrqZWLwxjIOMoUdonQqGS9uF6v1+vz5zc3NTBET6Ec60ZbvojLgYWXl+eX
|
||||
L5bn18uLm+PpPrIjbW7h3zMAgM/1b8Epnh6bWzhffF0ZSBU7am6fNgE0KYay0qAqq6FYs5iDLoqR
|
||||
VOj3IHEPDgU63hEgdAU2oOieEsA7uWPBAK/q9y287glY2BgDoGA4KCvEFqwn8GgILLsYdqSgtKOE
|
||||
AVxiY4cB1GjUFdxxUlvAnmDIasCexLg91Awac3L0TcIFkGhOLB1Yj1bWD4CJIFFg3AYCFF8+aIdi
|
||||
YLGenErDuznXmGJZWsFf4ugEroKLIZAz8hXUSKmNaQCEsdwwsGA6AD1i+a8Ut1zhenIP0MYE6FxO
|
||||
6A4VxdEBJKS6gBDjQ4Fdt8kBBlYt3zsMueBK4FldohHFMenqnbyTP+lx0sahURcTfzrFKhZhIBSW
|
||||
rs0BlLqBxHQBOI7hUHJvUdmBGhrrpPpA1kevBbXmYcCa8oEO0BJaTqVQ2fWAWjMvYCDP5bfwKUZd
|
||||
weuetci3Y08KLMpdbzqhqdhYLfE2V3GqDCRWKm8kniq304JWnq+857IfQzgsYMeaMfCnqu8MqGe1
|
||||
2CUcdAHb+AhjiIVsTKAOzShNK9UNx2YrNLdUY1k8peL86o4pdc+jVohjPS8Ke7aeZQZXDK50RATI
|
||||
XqGnMALLk1OrFROJL1iyBaakk15jLF1VWyMRVtYuiqMklfRdTtZTGmKiWmNUJdW5vsUobApZccuB
|
||||
7VBuRe8TTcapHTKS45YdfMykk1xo0KP47xuFDTBwd+R42gPPFLqIQVfwy9R2JH6qEOsPzV2R7jkE
|
||||
6LHOBxcIE8QdpR3T/rSyzxS0CNNZP6m8J3wovUC6gC6zL9hyseIek1coQgDL0tNofRkchVF3NEFp
|
||||
Gv8hq1WLgxB58lWiNhffTpIde5ejrOBNMB7QqDiqUmljFo+TzeZhpWST5mrY0WnGumXqmjFFV4FX
|
||||
Hp4cK0dZDlg7etKojpfV6VhN1GbFMtolh3ASQJFoE7Ey0N8fI1+eRniI3ZjiVr872rQsrP2muClK
|
||||
GddqcWxq9MsZwPv6VORvpn8zpjiMtrH4QPW6i/V6ytfML9QcvXx+fYxaNAxz4PnLy8VPEm48GXLQ
|
||||
k9emceh68vPR+WnC7DmeBM5OaP8I52e5J+os3f9JPweco9HIb8ZEnt23lOdtiT7Uyf/zbU8yV8CN
|
||||
Fsc72hhTKqXw1GIO07va6EGNhk3L0lEaE0+Paztu1lfn2F7Rev2yOfty9h8AAAD//wMAaw+BEmoI
|
||||
AAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144c8758cd1abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:53:12 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=VDTNVbhdzLyVi3fpAyOvoFppI0NEm6YkT9eWIm1wnrs-1755550392-1.0.1.1-vfYBbcAz.yp6ATfVycTWX6tFDJ.1yb_ghwed7t5GOMhNlsFeYYNGz4uupfWMnhc4QLK4UNXIeZGeGKJ.me4S240xKk6FUEu3F5tEAvhPnCM;
|
||||
path=/; expires=Mon, 18-Aug-25 21:23:12 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=FFe5KuJ6P4BUXOoz57aqNdKwRoz64NOw_EhuSGirJWc-1755550392539-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '4008'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4027'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999825'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999825'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_f287350aa2ac4662b9a5e01e85cc221f
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Second Agent. Second
|
||||
backstory\nYour personal goal is: Second goal\nTo give my best complete final
|
||||
answer to the task respond using the exact following format:\n\nThought: I now
|
||||
can give a great answer\nFinal Answer: Your final answer must be the great and
|
||||
the most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
|
||||
Process secondary data\n\nTrigger Payload: Context data\n\nThis is the expected
|
||||
criteria for your final answer: Secondary analysis\nyou MUST return the actual
|
||||
complete content as the final answer, not a summary.\n\nThis is the context
|
||||
you''re working with:\nThe initial analysis of the data involves several critical
|
||||
steps. First, we must identify the sources of the data, ensuring that they are
|
||||
reliable and relevant to the objectives of the project. Once the data is collected,
|
||||
we perform a preliminary examination to check for accuracy and completeness,
|
||||
looking for any missing values or discrepancies.\n\nNext, we categorize the
|
||||
data into meaningful segments, applying basic statistical methods to summarize
|
||||
key features such as mean, median, and mode. This provides insights into the
|
||||
distribution and central tendencies of the data.\n\nAdditionally, visualizations
|
||||
such as histograms, box plots, or scatter plots are created to better understand
|
||||
relationships and patterns within the data. These visual aids help in identifying
|
||||
trends, outliers, and potential areas of concern.\n\nFurthermore, we assess
|
||||
the data for its usability in addressing the specific questions at hand, ensuring
|
||||
that it aligns with the project''s goals. By the end of this initial analysis,
|
||||
we will have a clear overview of the data''s strengths and weaknesses, guiding
|
||||
us towards more in-depth investigations or adjustments needed for future data
|
||||
collection. Ultimately, this foundational analysis sets the stage for future
|
||||
analytical processes and decision-making initiatives.\n\nBegin! This is VERY
|
||||
important to you, use the tools available and give your best Final Answer, your
|
||||
job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2214'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=FFe5KuJ6P4BUXOoz57aqNdKwRoz64NOw_EhuSGirJWc-1755550392539-0.0.1.1-604800000;
|
||||
__cf_bm=VDTNVbhdzLyVi3fpAyOvoFppI0NEm6YkT9eWIm1wnrs-1755550392-1.0.1.1-vfYBbcAz.yp6ATfVycTWX6tFDJ.1yb_ghwed7t5GOMhNlsFeYYNGz4uupfWMnhc4QLK4UNXIeZGeGKJ.me4S240xKk6FUEu3F5tEAvhPnCM
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFZNbxw5Dr37VxB9yaVt2JlpO/EtM0Awe5hd7GIWGGAzMGiJVcVYJSmk
|
||||
1O3OIP99QKn6w94c5lLoFiWKfHx84p8XACv2q3tYuQmLm3O4/Pn25t2XX/9T7z7+evP77vdNdP++
|
||||
peH6brz95W1ere1EevxMrhxOXbk050CFU+xmJ4SFzOvN3Waz2Vz/8P5tM8zJU7BjYy6XP6bLmSNf
|
||||
vr1+++Pl9d3lzbvl9JTYka7u4X8XAAB/tq/FGT09r+7hen1YmUkVR1rdHzcBrCQFW1mhKmvBWFbr
|
||||
k9GlWCi20P8BMe3AYYSRtwQIo4UNGHVHAvApfuSIAT60//fw20TAkQtjAIwY9soKaYAyEXgsCBy3
|
||||
KWxJQWlLggGccGGHAbRQ1iv4yKJlDTuCuWoB9hQLD/vmQVMVRy8croGiVuE4Qpmw2PoeUAiEAuNj
|
||||
IMDo7Q9tMRYoqZ3speHtyVeWZEtX8NvEehalk1o48td+AYET8vzIgcvejhK6aQlrDahKqoedM5Up
|
||||
+RTSyKRQlTwMSToKLoVgAaS4bvG5FAeW+VUWgccIOy5T8yekhOImQJ716lP8FP8VHZ0hqwe35Bt+
|
||||
mWRIMgNCNjBmjih7oGe0X3a3oeEmck8tMHSuCrr9ElAjK0VSXTCZCaN9C7saUtWwh5DSk8Xcjsc9
|
||||
zNzT32KopGvwNQd2WAgoFmFbMgRYnVDG6AyYlq9LNXjQJ9pZmjUUvYKfA2E8gNkybMCwghYcqeUr
|
||||
1RnT7P4jDTgWGsXqwxFSlYWH1DH7Jz13dllYYxL+eg5hLKnlyXEcagClcaZY1KLeojQ+LREbxTJJ
|
||||
4UidVnbTsURfKqkhfMBucdRRxxDSTlvQdjPmDpKZ0gCPqOwsxcLaG6MTSe0SrfOMLeYn2sNAWKqQ
|
||||
XsFPe3AYXA1YTvTDuIaZPONCMtOVlvqIHFuNWn9wVB6noj37hgVrEX6sPVpjg5UPAxSKnnrZXrTg
|
||||
bmJrAxJrGVRAGFKNvudkec5JaOHU88t6fPCebRuGsO91abIIW9aKgb82HwpajfgKE2tJo+Csa3hM
|
||||
z5BDKtrTU4elkPQlA4tCdezNWW+f0H1NnLUdyG1/1NZiHI/5WM1IDyEAslfIAfeAsOWCAUw7jVwH
|
||||
bWqQC0Vv/K4lMMkSU06mo00MhbCh5lJ0JFYSo4EdrS1ao61koXKiY0ONY6lsUhX2DbCPVcpEYraG
|
||||
Vxed46E3JjW4CBRHQO/lTJQ0k+OB3Ymh1lUTRv9a+ZowHJvbpMgYfJKjRTDfKIwJQ0f0TFYNf+tI
|
||||
ajEKQeoAi3HNoJ+u4EOXjJPzuRmFvlQWWnARokbVctanx3dAIPM2lRed3psGc5aEbmqY/dSfD7IQ
|
||||
h64hrx+phuWOQ4AJ2ztnbBWaKKq9e2lLsmXanfP+jSmRUBzL1NPfET6ZYNKh7Wv0JPa6egOyeR8r
|
||||
+yW1HYrXQ5EvPeUyGfxWl3GhvUHvP1ctXYYikV+ekaFa679+TYCGIYmp539D4RkLWVe1hE8def40
|
||||
K5XOna6pZ57PsMySXEurZenJsXKKlzM+db01JFvV10BzTjuShdaNLFauwB4GwZl2SZ6sqo+Vg4ea
|
||||
TWDUoPcUtvalTNKVyApKzzkkOWrjsUPPhxWhoSrawBRrCGcGjDF1yW1j0h+L5dtxMAppzJIe9dXR
|
||||
1cCRdXqwjk3RhiAtKa+a9dsFwB9tAKsvZqpVljTn8lDSE7Xrfni/6f5Wp7nvzPrudrGWVDCcDHfv
|
||||
btbfcfjgqSAHPZvhVg7dRP509DTwYfWczgwXZ2n/fzjf891T5zj+Hfcng3OUC/mHbEOSe5nyaZvQ
|
||||
5zakfH/bEeYW8MoeFXb0UJjESuFpwBr6tLrSvRaaHwaOo2kn95F1yA+b22scbmmzeb+6+HbxFwAA
|
||||
AP//AwAAHGphwAsAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144ca1b97b1abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:53:21 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '8604'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '8628'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999482'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999485'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_545a8ffcdf954433b9059a5b35dddf20
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
437
tests/cassettes/test_first_task_auto_inject_trigger.yaml
Normal file
437
tests/cassettes/test_first_task_auto_inject_trigger.yaml
Normal file
File diff suppressed because one or more lines are too long
228
tests/cassettes/test_task_allow_crewai_trigger_context.yaml
Normal file
228
tests/cassettes/test_task_allow_crewai_trigger_context.yaml
Normal file
@@ -0,0 +1,228 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
|
||||
personal goal is: test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Analyze the data\n\nTrigger
|
||||
Payload: Important context data\n\nThis is the expected criteria for your final
|
||||
answer: Analysis report\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
|
||||
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
|
||||
"model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '865'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=FFe5KuJ6P4BUXOoz57aqNdKwRoz64NOw_EhuSGirJWc-1755550392539-0.0.1.1-604800000;
|
||||
__cf_bm=VDTNVbhdzLyVi3fpAyOvoFppI0NEm6YkT9eWIm1wnrs-1755550392-1.0.1.1-vfYBbcAz.yp6ATfVycTWX6tFDJ.1yb_ghwed7t5GOMhNlsFeYYNGz4uupfWMnhc4QLK4UNXIeZGeGKJ.me4S240xKk6FUEu3F5tEAvhPnCM
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: 'upstream connect error or disconnect/reset before headers. reset reason:
|
||||
connection termination'
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144cd97d521abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '95'
|
||||
Content-Type:
|
||||
- text/plain
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:53:22 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
status:
|
||||
code: 503
|
||||
message: Service Unavailable
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
|
||||
personal goal is: test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Analyze the data\n\nTrigger
|
||||
Payload: Important context data\n\nThis is the expected criteria for your final
|
||||
answer: Analysis report\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
|
||||
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
|
||||
"model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '865'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=FFe5KuJ6P4BUXOoz57aqNdKwRoz64NOw_EhuSGirJWc-1755550392539-0.0.1.1-604800000;
|
||||
__cf_bm=VDTNVbhdzLyVi3fpAyOvoFppI0NEm6YkT9eWIm1wnrs-1755550392-1.0.1.1-vfYBbcAz.yp6ATfVycTWX6tFDJ.1yb_ghwed7t5GOMhNlsFeYYNGz4uupfWMnhc4QLK4UNXIeZGeGKJ.me4S240xKk6FUEu3F5tEAvhPnCM
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '1'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xXTW8cNxK961cUBtBFmBEkRfKHbrLiAI6xcBLvYRe7gVFDVneXxSZbLPaMx0H+
|
||||
e1Bkf42kLPZiWM1hseq9V6/IP04AVmxXt7AyDSbTdm5z/+ry7cW/Ht99vfmIPz/uPt73b0MMP3/6
|
||||
d/fxx3erte4I269k0rjr3IS2c5Q4+LJsImEijXr5+ubm5ubi+uIqL7TBktNtdZc212HTsufN1cXV
|
||||
9ebi9ebyzbC7CWxIVrfwnxMAgD/yv5qnt/RtdQsX6/FLSyJY0+p2+hHAKganX1YowpLQp9V6XjTB
|
||||
J/I59Q/gwx4Meqh5R4BQa9qAXvYUAf7rf2KPDu7y37f64ezszqM7CAv8Rl2I6exMP+vCB59isL1R
|
||||
EMrXfzYEXR+7IAShgtSwQMy7gAVSgC6GHVs9WPGL1JCXnMh4Ri/sa0gNQc76WwKLCcd9FtjnxRS5
|
||||
rilChwcX0J7DuwPQN1Rsh+0dxcSefIIdRsatIwH0FtiST1wd8u8ieSvrozyR25ypJcc7irBD1+tu
|
||||
YC9cN0kgNZgyhOyrEFuwZFg4+E2LDxq1i8GQCMl5gelHLeDTjuKOaT/jpHUJJa1TORMFTFMNvUBL
|
||||
KbIRMME5MoksBM0FQToyXLHR8jjYNZBXIJV2XwPqchKQ3jSAAoJadcV1H0nWYHpJoaUI5GusqSWf
|
||||
1hmT0FFEZREdUFWxYfLmcA4f6bBAj71xvaVbreryHM7OPufwWt7t2Rn8I/jUuMPxoSBU6zlkYXtQ
|
||||
ZFQuYDBRHSJniK401P2Y2vsptRxzwCGS0+ZSXqYi2CeKmMUn6yE5BWFPW+FEsGPhJGuQYBgdtGQZ
|
||||
QQOXDVp3RWS3aB5AedRcftBcPi3QeD+hoelMnRCqI8xGuuYkcEcRa4JI0gUvBIlbWkPVu4qd0ywg
|
||||
YqIhj0gS+mgI0LlgctSldsZji3YW0P9CUTNHb6isAcAGVFyFhNw67C0r4AIIXRBO2m9Z+UVU2iwO
|
||||
JUEV+giPPcZEUdaw59QA+vwjdE6Li4SlsS9vTrUTI+3I93Sej2y4bkjSpitJDZ2gfAtgpLHxmKxK
|
||||
85dBCncZgPGvd+vc9pG3fcoNGuDVxWm2kpDQlbLOhzo/E0qGP0s00wp7igRhKxR3ZMcaQLj22jXo
|
||||
04CMdPxAYPuop/x6vQZM5dQisiY4tpgF24YcOfPxd1JdgH//vMcmeUgT9oDgQxoMZQCU/TPR9p32
|
||||
y9XNafZJjCWt7GuRdtkjigMU7BeHqawg+GPZdw5T1jg0KIBOAkQW8nrIDxen61Ekiob0Ru2r6h20
|
||||
GB8ofzTYdsi1n8Cf6jzuoSwJdIo5Jpaq9OeQVAVvbk4HUkxo2+Chi8iidhHi5A6PPTpOhyyMqduV
|
||||
UTaUafgfXbpg4s7PU+VJv84elznRwQDc6owpGLJ/qVP3DZsGGtzN1GVW3mpRkSp16nH6UFXpXzvy
|
||||
JPl49jvyKcQDtOgHrkYs714yi4LJwu0ee1bHLAlYGhOoYmjh+g00oY95cF1dl/+vc0c6nVmaFflG
|
||||
fULPVaNaINt3efA9HVtq/j+pLnw9OE+xFhPaLfsMZSlszKR01hFrC11KX9ck4/iclSVJ4a21MnWJ
|
||||
CbjJGL1qZ0/OFeHk6Y+xpgTYWyVRRbGBpR4WXJao6qt7PU0xrVCSDlNvoQ3lyKyGNIpsnP2H526U
|
||||
TW5ByzOVS05msqaF28jkxNlVM+i9V6HrVMOOEzr+nnt3sh2NMOMz2FVH+DB0/8jVfdDJI4t7GMvi
|
||||
TuUtRTFB57HC50l1o+oKPrHv87VqR5JG8c/sUFWFOJJmo/LyErmKpUbmVqtSnak0WmSfsFzJFLij
|
||||
HtRbqsVos2ZHDb6I6zl8njHIiTgaOmYYcMPFbM/OAflsrprNVi+T2n/PAJ7R56HR6hj2ed7Zp06g
|
||||
kao+9ZEGsH8jNS/ytsycsTfelwYbxEn2Jfc8orBoYyAyJxG6xC1/JztOVk17OHkD9wNb0AbPKeRA
|
||||
5pkNpzBdcVX6WCKPyl7UlmN+yLznMsk0PrhQH/KGLaWUbee5bekRI7WF12dmmZHKIvyba/5w2R4a
|
||||
TaAw/eSeHaDu9bFQ0J/pHC/cOidHaU/X/vGGw22HJqkMxw4ywEo46rJkFQw3m26+Rp0v302Rql5Q
|
||||
326+d26xgF5neGZfX2y/Dyt/Tm80F+ouhq082bqq2LM0X2K2B32PSQrdKq/+eQLwe34L9kfPu5Wa
|
||||
QZe+pPBA+bjLV1cl3mp+gs6rN9NqvjHNC6+vrtcvBPxiKSE7WTwnVwZNQ3beOr891XHDYuFkUfbz
|
||||
dF6KXUpnX/8/4ecFY6hLZL90kSyb45Lnn0X6ml9LL//sBP4CAAD//4IGM9jBStAsEV+SmVoEioqU
|
||||
1LTE0hxIx1mpuLK4JDU3Pi0zLz21qKAoE9J7TiuINzVMSbIwSUxLTFLiquUCAAAA//8DANr6751L
|
||||
EAAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144ce12be51abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:53:29 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '6350'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '6385'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999820'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999820'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_633dd1e17cb44249af3d9408f3d3c21b
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,156 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
|
||||
personal goal is: test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Analyze the data\n\nThis
|
||||
is the expected criteria for your final answer: Analysis report\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '822'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=wu1mwFBixM_Cn8wLLh.nRacWi8OMVBrEyBNuF_Htz6I-1743463498282-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//fFfbjhtHDn33VxAC5mXQEmY8lh3Mm9e3DHa9NpzZC3YdBFQ31V2Z6mKn
|
||||
WCVZCfLvC7JKLY2d7IsAdXfxcnh4yPrtCcDCdYtbWLQDpnac/PLV8+vv/n1P7+7yx+f/eeOHXQ7/
|
||||
/P6uf0fxYfqwaPQEb36mNh1PrVoeJ0/JcSiv20iYSK1ev1iv1+urmxc39mLkjrwe66e0fMbL0QW3
|
||||
fHr19Nny6sXy+rt6emDXkixu4b9PAAB+s1+NM3T0ZXELV83xyUgi2NPidv4IYBHZ65MFijhJGNKi
|
||||
Ob1sOSQKFvodBN5DiwF6tyNA6DVswCB7igCfw1sX0MNL+38Ln8PncHn5MqA/iBP4RBPHdHlZHl+v
|
||||
4C6kyF1uFYbLSz1/PziBaN/BFEkoJAEERSvSQEHM7dEgbyENBB0m/cR7ahN1wDuK9tyjJPglY0wU
|
||||
V3A/EPSMvpxycjLjBBJDR9GMWzS48QQuiOuHJA24jkJy2wOkSKGTBjB04MKW4wgdtU4ch+WIDy70
|
||||
MEVuSYQEthxhm1OOBJIiJuodyapk/3QFrzXsDzuKO0f7Y/o1G6FUknZqaMw+uckTTBhxpERRwIXW
|
||||
504dCnp15vocSRposyQeKUJHI/cRp8G10mhYijS0GgdHRzWLno4foYfOSYpukxWCFfyVDrDD6BSM
|
||||
Ctev1FXPdKuJLOHy8gfz/7b4v7y8hXtO6GtYkXYUMjWAO4rYE6SIQQrIsEOvr3JwSUDYd6ti8dUx
|
||||
hddnKajllz010FPoKDbguUU1U/KYcmwHFAVkQwPuHMdq7WPN/NWcuZr6SFHLh6HVmkcWgc5ttxQp
|
||||
PAbpBPTIksyXJ2XWxFP2GI/ISnX37hzQ12eAqteC1fStb8WZs+LVOw5HltyUIrx1QQOQQpIT5vfG
|
||||
RrX7OQAsq3UXVEyEOtgc4Hp9YUTCSJ2yXPtiirQzZ7U3Gti7NABC4GTEl8k9EHQ5atZ6YmDvOjyA
|
||||
EAqHVfGmZP3TqgJ6YYgsZFFcXTSAqUBRAhlcP1A8sRX3GCmQiCGcMPakX44YHyhpHC2OE7q+YvP/
|
||||
aHIKb4puxHg4edmgELQcVOZMPxTZnesyegHsqYOn6+XNswawbTkHc6xt/OzqwnTDmF2pZr2snt7S
|
||||
iJ5mJwKc04StRf/o+eYAT68uwAXTKPQeZCIrbbX0IXgXCGTgabLWzrGnrtG+T65VtvkD4Mihhxw3
|
||||
GE62G8tEGyL0gCCD2yZIvMfYKbF7p4FT6LGnkUKaMfyqOQ7nfTFD+UaVNXJwrQCNpDEBStFYwq5U
|
||||
p56vVLq5uqi1AxkwqphW9EwXar7f80hFhTB2FOZWAsE9oEqr4eFCVZPNAdYXpmXjVDNVLtNei3Oi
|
||||
ykltLRx1rYIv1NfcrVmMy1rOo9NvCS8grg9u61oMyR9gwyzKykeSa2VdX8yQPhKAuzpDZizf488c
|
||||
j9WjkKx2JBO1Dr0/GE4Dwd85psFkRuH5F0mCV4ySGiXHXltIo9QeknQUWVVGiqaJNcdPOaIHVDlo
|
||||
YD847W1r8cbOoKcOckg4TdTBxDrmHfrGRkcwFBUQDso6WF/M0jJX5CvaBaLO+mXuX84pErZDFbRn
|
||||
K/hELY8jhc4CPdO093P9fij1O6iyvQmD6eSRw6cq03bLMUn1pU8OnEP/1eQrhJwoWrGdDrC60JSx
|
||||
EXlkC2QF/0hOPwBhrQWM1DmEyWPShijj/NRBj4pmiqILRKXPWQDzFgE9poFUhreRx3lx+aYT33Kb
|
||||
TdM/EeoaIZWQf9gpwDqydDaopmoTxqPK0479riwkZGOtJVM8jTWrZKzgVQ0bNjl0vi4vpRc4gq2k
|
||||
1k5nnVLBUieRtDeSkiFRJEl/3AFvvkwYpM6/17Qjz5MqVO3RVhEOsNWsC9kgzqyFiQKlSmm4Cy45
|
||||
THQaBZDQea6jTfcB/yhbK/CR1bqv+awvooVPlVibWHigy2ZLp+HCqv5Zx6Qtau85uMQKteZxp9v7
|
||||
aCTSkW17gC1uVY7qEqh+HogmnY/tg6E/YOhNMOeRVLeVEm7VzLJnruBvVAesbV9J+ZyYvRnGnHjE
|
||||
RGcMm/u/NKEbyR++XlBrJ66t/K3PMi/fH8pQaow+p+247L4qpmdiCH3kvTbWDK9Gb0oDkX7JznrB
|
||||
jdZEiXT463daxr8cADsu+q2e4lEQVFhn5S5RcOwxuF9LTnrvOFbtOEDPVqnmJE8zuILJybYsJmVP
|
||||
1FVaUVNgbOEfWffzs8xmNbUMK1zPV/BysmH95ahYRot0XI47La3KkfM+z9JZBkWx9KeredWh8/X8
|
||||
cJ7Yqvh7T2ngjj33B8hShfb87qMYWRVq6Sz08xuVUNxpsHql2nKuEozeOGXmsq7WegPsjtVpc7SV
|
||||
2GPopMWpqlB29kW93hSACyvbwdGOSvVy0vndwUZ7W/uh3ILdjmR1fsWMtM2Ces0N2fuzFxh0JzXj
|
||||
ern9sb75fb7Oeu6nyBv56uhi64KT4ado0qVXV0k8Lezt708AfrRrc350E16UteKnxA9k7q7X62Jv
|
||||
UW7r/wMAAP//jFi9DoIhDNx5DGaHbxDi9zSEtEVr/CHANzj47gYwFiOD85XLHSXAtcoQ1Jr9G23/
|
||||
GgEOy7qbEDqkelvlIXlr8HAilKUS0/2GfB8ANdj+lTPj7tb5dvyHXgAAioXQxUTI8G1ZyhKdW9ie
|
||||
l322uQnW9dgxkCtMqbYCKfjt0mcMOj9yoasLXF/umLgPGkJ0xi4+WDJm1eqpXgAAAP//AwCGkEKG
|
||||
dhEAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144c27cad01abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:53:07 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=gumItH7ZRtD4GgE2NL8KJd5b0g0ukzMySphsV0ru1LE-1755550387-1.0.1.1-iwCn2q9kDpJVTaZu1Swtv1kYCiM39NBeviV1R9awG4XHHMKnojkbu6T7jh_Z3UxfNbluVCsI6RMKj.2rEPp1IcH63gHUQdJfHF71CdCZ3Uc;
|
||||
path=/; expires=Mon, 18-Aug-25 21:23:07 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=d7iU8FXLKWOoICtn52jYIApBpBp20kALP6yQjOvXHvQ-1755550387858-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '14516'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '14596'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999830'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_3c1af5f5590a4b76b33f3fbf7d3a3288
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,154 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
|
||||
personal goal is: test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Analyze the data\n\nThis
|
||||
is the expected criteria for your final answer: Analysis report\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '822'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=aoRHJvKio8gVXmGaYpzTzdGuWwkBsDAyAKAVwm6QUbE-1743465392324-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.12
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFddcxu3Dn3Pr8BoJi8eSWM5luP6zbFbN7d1m0n81nQyEAntIuaSe0lQ
|
||||
itLpf++A+6G1rztzX+zVcgEeAAeH4F+vAGZsZ1cwMzWKaVq3uLlY/XD5n7uffzvLFw+r28tV+nT3
|
||||
7vdV/hDuf8bZXC3C5isZGayWJjStI+Hgu2UTCYXU6+rter1en56vTstCEyw5NataWZyHRcOeF2en
|
||||
Z+eL07eL1WVvXQc2lGZX8McrAIC/yl/F6S19m11B8VXeNJQSVjS7Gj8CmMXg9M0MU+Ik6GU2Py6a
|
||||
4IV8gf4efNiDQQ8V7wgQKoUN6NOeIsBn/xN7dHBdfl/BZ//Zn5xce3SHxAk+UhuinJx0r1dLeO8l
|
||||
BpuNpuHk5EodPNScIJYPAblJIAFQHXwnkJqgjWHHlixYFIREAuwhREtRv6RvEtEINISefbXNDtgn
|
||||
rmpJIDVKgc5+G2IDSSIKVWzAkuHEwadlh+xsCbfq/ZaSidw+QUfdxiZ4zVSCsIUmO+HWEewwMm4c
|
||||
pTmwNy5b9hVssoAPAo4bFrKKMqGjBFuuctRvTU4SGopgqQlVxLZmk+ZQ0fADXY8ZFcoc0FsQbigJ
|
||||
Nm2BIBF9wpLHtIQf0dRAXuJBkyM9Zs1VpDZSIi8JELLn/2aa2s4BnQt7hb0NETTMpo1Uk0+l3EMh
|
||||
wxbaHE2NqURINe44RAg7UqPUkuEtk4WWIgfbZ/XNEu5J6mCDC9VhktDRLSe10EDJQi6+k6BwkpKD
|
||||
plgnSNnUgAnsUJ4dHb/TfIYYyZVcjb67pEWqIiUtNZS20h2UX8lQFGQPg12quU2wIdkT+WNde17s
|
||||
OGV0/L3bQkJwCTASoEsBsrDj712h2bnc0Qwe6QASydtUkLQoQnFk3PkSfmKvfEl9Yj77BZycfCpM
|
||||
eSh2/QKA5uwJhSDVYa+J58rzlg16gdzuMdpuy64wMk11V5k57FlqaAkfe4/BmByj5sXm8q8Oji0e
|
||||
IBGmQq774KVeBL9o9AGqGPZSg8aYAHcUsSILq/Xr3rfuWnNVU5J+i0hG+9UqN2/JULOhuBwCvhl6
|
||||
4XbSC2Pk17CJhI827L1y8MXGgUg7Qtf3u27f4NcQWQ4T2lKCPUVdsgQbbRPLO7ZZ7UoAZ+vFm/M5
|
||||
oDEhexka4vz0dem2IOie9dy1tayP6Nzh2NJJg8xxg14Jgkl1QyJvcq8EF1OHJT3zgsVg2RNVpIKv
|
||||
oI20pUjeUMHRdqqp8JTOG3YshzGHd1PhGMR3zOFHqkaYgzRAylVFSSZJ+y1EqfdaNQUOIcuxNYPU
|
||||
FLWZNPS+zthXd8IHjcwTRneAs/VrOBD2EqFPSyhKb0J2FjYEKNO8CMaK9LnB+EglFwabFrnyCRxm
|
||||
b+qOQAWvIhyjf6CmDREdvO+F/8ge/0TBJvVTNS1qU4d9n4PSExORq0OOCdBx5buA9Zi02mIbVSpK
|
||||
qftkPsif0RSPErI6gw/3pfPfwIf7+ZBw9Rxa4Ub5pIo+lLcJhVcQ4pHl5CusqCGvZx4LoyrfoCDr
|
||||
JdxMdO9Z2adLJtB2y4bLKdCTjUaihcRFUF9WwhEZOtDU7ViYOkXryr8LLjc0cmKyl6b8dHm57guv
|
||||
muCGg5mAtWiCSnA9Uceq03YbYsEJNvKuFEk36qO+WMJHMqFpyNsO7hP5fBhodD86vBloNNLiHSbl
|
||||
tJ+qyDg1zEGQXYhPqDhMDqVcpjuoQnfQFt1QCdFGyO3z9m/wAAcmZyHlTRm0GB1EkjycBQr7d2UE
|
||||
fyf4JME8wq+0I3fEe8c76jZjS166k7bLvpI2/YtylyKx1zlT431O7Tlg28bQRu7YoBs73RjQfs1J
|
||||
mkKXVA/9WpRTAjREAm3QAVFDsdQUMuRYUXpZkX781qJPx5kK4EaHKUsFtcYVe4nSk14FbphiRk2a
|
||||
A0vJ5YZgQ56UYtpBAdjvtK2qEgQ37DC+XDgtjLcUe1UraltKpPOZcJOduujUrCfbW20xnevS04Hw
|
||||
eKh2s2mCTlV0ZHhh/NyiUb1W991Up8NsP4EuGnwsULwtfCfYZsmRjiLT49VOKXCEfQ45QRM8S4iD
|
||||
MVpspev3sJ3GvRnZjoKLsoc/gtyzc5pUE3PJKHtokL3ORkWDQ9OSdAKBdodesBp78XIJ121LKig0
|
||||
sHUxvPoG11dwS9pMZKEba8os9VAGq2ffvruCO+VLR9qbGlUC3g/DlCL5hQ7jxPTM+OZqOmfCu3Fi
|
||||
UGn9NJknx3tJ0Yv+OPqXeXerZzlCDJucBLYh93pTnB5vEq1D74cahJYi9mLJjZKDui5ScewpHWKF
|
||||
vp8kl9NLV6RtTqgXP5+dmyyg96GrbLnu/dmv/D1e8Fyo2hg26ZnpbMueU/0lFjnQy1yS0M7K6t+v
|
||||
AP4sF8n85G44U71v5YuERyrbrdbrzt/seH89rl6s3vSrZaA5Lry9uJy/4PCLLYRIk7vozKAe7UfT
|
||||
48UVs+UwWXg1Cft/4bzkuwudffX/uD8uGEOtkP3S/gMAAP//KkpNyUxG9TJCWVEqqH+PSxk8mMEO
|
||||
VipOLSrLTE6NL8lMLQJFRUpqWmJpDqTXrVRcWVySmhuflpmXnlpUUJQJ6XqnFcSbmhkkppmlmppa
|
||||
KnHVcgEAAAD//wMABbo03YgQAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97144d0daeb11abc-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 18 Aug 2025 20:53:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=UW4fV15_S2h9VQ58d_nhU200TOxc3Tjdd_QFUBY6B80-1755550423-1.0.1.1-.oSX43E.zjFk61gbEHMacZh5c8ndmynl75bstCvKcohtwVY6oLpdBWnO2lTUFXpzvGaGsbuYt55OUo_Hmi228z97Nm4cDdOT84lhfStAcms;
|
||||
path=/; expires=Mon, 18-Aug-25 21:23:43 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=dg9d3YnyfwVQNRGWo64PZ6mtqIOlYEozligD5ggvZFc-1755550423708-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '13654'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '13673'
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_169cd22058fb418f90f12e041c0880a9
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -2,6 +2,7 @@
import os
import tempfile
from pathlib import Path
from unittest.mock import Mock, patch

import pytest
from dotenv import load_dotenv
@@ -42,6 +43,122 @@ def setup_test_environment():
|
||||
# Cleanup is handled automatically when tempfile context exits
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
config.addinivalue_line(
|
||||
"markers", "telemetry: mark test as a telemetry test (don't mock telemetry)"
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def auto_mock_telemetry(request):
|
||||
if request.node.get_closest_marker("telemetry"):
|
||||
telemetry_env = {
|
||||
key: value
|
||||
for key, value in os.environ.items()
|
||||
if key not in ["CREWAI_DISABLE_TELEMETRY", "OTEL_SDK_DISABLED"]
|
||||
}
|
||||
with patch.dict(os.environ, telemetry_env, clear=True):
|
||||
yield
|
||||
return
|
||||
|
||||
if "telemetry" in str(request.fspath):
|
||||
telemetry_env = {
|
||||
key: value
|
||||
for key, value in os.environ.items()
|
||||
if key not in ["CREWAI_DISABLE_TELEMETRY", "OTEL_SDK_DISABLED"]
|
||||
}
|
||||
with patch.dict(os.environ, telemetry_env, clear=True):
|
||||
yield
|
||||
return
|
||||
|
||||
with patch.dict(
|
||||
os.environ, {"CREWAI_DISABLE_TELEMETRY": "true", "OTEL_SDK_DISABLED": "true"}
|
||||
):
|
||||
with patch("crewai.telemetry.Telemetry") as mock_telemetry_class:
|
||||
mock_instance = create_mock_telemetry_instance()
|
||||
mock_telemetry_class.return_value = mock_instance
|
||||
|
||||
with (
|
||||
patch(
|
||||
"crewai.utilities.events.event_listener.Telemetry",
|
||||
mock_telemetry_class,
|
||||
),
|
||||
patch("crewai.tools.tool_usage.Telemetry", mock_telemetry_class),
|
||||
patch("crewai.cli.command.Telemetry", mock_telemetry_class),
|
||||
patch("crewai.cli.create_flow.Telemetry", mock_telemetry_class),
|
||||
):
|
||||
yield mock_instance
|
||||
|
||||
|
||||
def create_mock_telemetry_instance():
|
||||
mock_instance = Mock()
|
||||
|
||||
mock_instance.ready = False
|
||||
mock_instance.trace_set = False
|
||||
mock_instance._initialized = True
|
||||
|
||||
mock_instance._is_telemetry_disabled.return_value = True
|
||||
mock_instance._should_execute_telemetry.return_value = False
|
||||
|
||||
telemetry_methods = [
|
||||
"set_tracer",
|
||||
"crew_creation",
|
||||
"task_started",
|
||||
"task_ended",
|
||||
"tool_usage",
|
||||
"tool_repeated_usage",
|
||||
"tool_usage_error",
|
||||
"crew_execution_span",
|
||||
"end_crew",
|
||||
"flow_creation_span",
|
||||
"flow_execution_span",
|
||||
"individual_test_result_span",
|
||||
"test_execution_span",
|
||||
"deploy_signup_error_span",
|
||||
"start_deployment_span",
|
||||
"create_crew_deployment_span",
|
||||
"get_crew_logs_span",
|
||||
"remove_crew_span",
|
||||
"flow_plotting_span",
|
||||
"_add_attribute",
|
||||
"_safe_telemetry_operation",
|
||||
]
|
||||
|
||||
for method in telemetry_methods:
|
||||
setattr(mock_instance, method, Mock(return_value=None))
|
||||
|
||||
mock_instance.task_started.return_value = None
|
||||
|
||||
return mock_instance
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def mock_opentelemetry_components():
|
||||
with (
|
||||
patch("opentelemetry.trace.get_tracer") as mock_get_tracer,
|
||||
patch("opentelemetry.trace.set_tracer_provider") as mock_set_provider,
|
||||
patch("opentelemetry.baggage.set_baggage") as mock_set_baggage,
|
||||
patch("opentelemetry.baggage.get_baggage") as mock_get_baggage,
|
||||
patch("opentelemetry.context.attach") as mock_attach,
|
||||
patch("opentelemetry.context.detach") as mock_detach,
|
||||
):
|
||||
mock_tracer = Mock()
|
||||
mock_span = Mock()
|
||||
mock_tracer.start_span.return_value = mock_span
|
||||
mock_get_tracer.return_value = mock_tracer
|
||||
|
||||
yield {
|
||||
"get_tracer": mock_get_tracer,
|
||||
"set_tracer_provider": mock_set_provider,
|
||||
"tracer": mock_tracer,
|
||||
"span": mock_span,
|
||||
"set_baggage": mock_set_baggage,
|
||||
"get_baggage": mock_get_baggage,
|
||||
"attach": mock_attach,
|
||||
"detach": mock_detach,
|
||||
}
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def vcr_config(request) -> dict:
|
||||
return {
|
||||
|
||||
@@ -39,6 +39,7 @@ def test_short_term_memory_search_events(short_term_memory):
events = defaultdict(list)

with crewai_event_bus.scoped_handlers():

@crewai_event_bus.on(MemoryQueryStartedEvent)
def on_search_started(source, event):
events["MemoryQueryStartedEvent"].append(event)
@@ -59,33 +60,34 @@ def test_short_term_memory_search_events(short_term_memory):
|
||||
assert len(events["MemoryQueryFailedEvent"]) == 0
|
||||
|
||||
assert dict(events["MemoryQueryStartedEvent"][0]) == {
|
||||
'timestamp': ANY,
|
||||
'type': 'memory_query_started',
|
||||
'source_fingerprint': None,
|
||||
'source_type': 'short_term_memory',
|
||||
'fingerprint_metadata': None,
|
||||
'query': 'test value',
|
||||
'limit': 3,
|
||||
'score_threshold': 0.35
|
||||
"timestamp": ANY,
|
||||
"type": "memory_query_started",
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"query": "test value",
|
||||
"limit": 3,
|
||||
"score_threshold": 0.35,
|
||||
}
|
||||
|
||||
assert dict(events["MemoryQueryCompletedEvent"][0]) == {
|
||||
'timestamp': ANY,
|
||||
'type': 'memory_query_completed',
|
||||
'source_fingerprint': None,
|
||||
'source_type': 'short_term_memory',
|
||||
'fingerprint_metadata': None,
|
||||
'query': 'test value',
|
||||
'results': [],
|
||||
'limit': 3,
|
||||
'score_threshold': 0.35,
|
||||
'query_time_ms': ANY
|
||||
"timestamp": ANY,
|
||||
"type": "memory_query_completed",
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"query": "test value",
|
||||
"results": [],
|
||||
"limit": 3,
|
||||
"score_threshold": 0.35,
|
||||
"query_time_ms": ANY,
|
||||
}
|
||||
|
||||
|
||||
def test_short_term_memory_save_events(short_term_memory):
|
||||
events = defaultdict(list)
|
||||
with crewai_event_bus.scoped_handlers():
|
||||
|
||||
@crewai_event_bus.on(MemorySaveStartedEvent)
|
||||
def on_save_started(source, event):
|
||||
events["MemorySaveStartedEvent"].append(event)
|
||||
@@ -105,28 +107,29 @@ def test_short_term_memory_save_events(short_term_memory):
|
||||
assert len(events["MemorySaveFailedEvent"]) == 0
|
||||
|
||||
assert dict(events["MemorySaveStartedEvent"][0]) == {
|
||||
'timestamp': ANY,
|
||||
'type': 'memory_save_started',
|
||||
'source_fingerprint': None,
|
||||
'source_type': 'short_term_memory',
|
||||
'fingerprint_metadata': None,
|
||||
'value': 'test value',
|
||||
'metadata': {'task': 'test_task'},
|
||||
'agent_role': "test_agent"
|
||||
"timestamp": ANY,
|
||||
"type": "memory_save_started",
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"value": "test value",
|
||||
"metadata": {"task": "test_task"},
|
||||
"agent_role": "test_agent",
|
||||
}
|
||||
|
||||
assert dict(events["MemorySaveCompletedEvent"][0]) == {
|
||||
'timestamp': ANY,
|
||||
'type': 'memory_save_completed',
|
||||
'source_fingerprint': None,
|
||||
'source_type': 'short_term_memory',
|
||||
'fingerprint_metadata': None,
|
||||
'value': 'test value',
|
||||
'metadata': {'task': 'test_task', 'agent': 'test_agent'},
|
||||
'agent_role': "test_agent",
|
||||
'save_time_ms': ANY
|
||||
"timestamp": ANY,
|
||||
"type": "memory_save_completed",
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"value": "test value",
|
||||
"metadata": {"task": "test_task", "agent": "test_agent"},
|
||||
"agent_role": "test_agent",
|
||||
"save_time_ms": ANY,
|
||||
}
|
||||
|
||||
|
||||
def test_save_and_search(short_term_memory):
|
||||
memory = ShortTermMemoryItem(
|
||||
data="""test value test value test value test value test value test value
|
||||
|
||||
@@ -1,4 +1,5 @@
import os
import threading
from unittest.mock import patch

import pytest
@@ -11,12 +12,16 @@ from opentelemetry import trace

@pytest.fixture(autouse=True)
def cleanup_telemetry():
"""Automatically clean up Telemetry singleton between tests."""
Telemetry._instance = None
if hasattr(Telemetry, "_lock"):
Telemetry._lock = threading.Lock()
yield
Telemetry._instance = None
if hasattr(Telemetry, "_lock"):
Telemetry._lock = threading.Lock()

@pytest.mark.telemetry
|
||||
@pytest.mark.parametrize(
|
||||
"env_var,value,expected_ready",
|
||||
[
|
||||
@@ -36,6 +41,7 @@ def test_telemetry_environment_variables(env_var, value, expected_ready):
|
||||
assert telemetry.ready is expected_ready
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
def test_telemetry_enabled_by_default():
|
||||
"""Test that telemetry is enabled by default."""
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
@@ -44,6 +50,7 @@ def test_telemetry_enabled_by_default():
|
||||
assert telemetry.ready is True
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
@patch("crewai.telemetry.telemetry.logger.error")
|
||||
@patch(
|
||||
"opentelemetry.exporter.otlp.proto.http.trace_exporter.OTLPSpanExporter.export",
|
||||
@@ -76,6 +83,7 @@ def test_telemetry_fails_due_connect_timeout(export_mock, logger_mock):
|
||||
logger_mock.assert_called_once_with(error)
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
def test_telemetry_singleton_pattern():
|
||||
"""Test that Telemetry uses the singleton pattern correctly."""
|
||||
Telemetry._instance = None
|
||||
|
||||
@@ -14,14 +14,18 @@ def cleanup_telemetry():
|
||||
Telemetry._instance = None
|
||||
|
||||
|
||||
@pytest.mark.parametrize("env_var,value,expected_ready", [
|
||||
("OTEL_SDK_DISABLED", "true", False),
|
||||
("OTEL_SDK_DISABLED", "TRUE", False),
|
||||
("CREWAI_DISABLE_TELEMETRY", "true", False),
|
||||
("CREWAI_DISABLE_TELEMETRY", "TRUE", False),
|
||||
("OTEL_SDK_DISABLED", "false", True),
|
||||
("CREWAI_DISABLE_TELEMETRY", "false", True),
|
||||
])
|
||||
@pytest.mark.telemetry
|
||||
@pytest.mark.parametrize(
|
||||
"env_var,value,expected_ready",
|
||||
[
|
||||
("OTEL_SDK_DISABLED", "true", False),
|
||||
("OTEL_SDK_DISABLED", "TRUE", False),
|
||||
("CREWAI_DISABLE_TELEMETRY", "true", False),
|
||||
("CREWAI_DISABLE_TELEMETRY", "TRUE", False),
|
||||
("OTEL_SDK_DISABLED", "false", True),
|
||||
("CREWAI_DISABLE_TELEMETRY", "false", True),
|
||||
],
|
||||
)
|
||||
def test_telemetry_environment_variables(env_var, value, expected_ready):
|
||||
"""Test telemetry state with different environment variable configurations."""
|
||||
with patch.dict(os.environ, {env_var: value}):
|
||||
@@ -30,6 +34,7 @@ def test_telemetry_environment_variables(env_var, value, expected_ready):
|
||||
assert telemetry.ready is expected_ready
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
def test_telemetry_enabled_by_default():
|
||||
"""Test that telemetry is enabled by default."""
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
@@ -38,57 +43,60 @@ def test_telemetry_enabled_by_default():
|
||||
assert telemetry.ready is True
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
def test_telemetry_disable_after_singleton_creation():
|
||||
"""Test that telemetry operations are disabled when env var is set after singleton creation."""
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
with patch("crewai.telemetry.telemetry.TracerProvider"):
|
||||
telemetry = Telemetry()
|
||||
assert telemetry.ready is True
|
||||
|
||||
|
||||
mock_operation = MagicMock()
|
||||
telemetry._safe_telemetry_operation(mock_operation)
|
||||
mock_operation.assert_called_once()
|
||||
|
||||
|
||||
mock_operation.reset_mock()
|
||||
|
||||
os.environ['CREWAI_DISABLE_TELEMETRY'] = 'true'
|
||||
|
||||
|
||||
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"
|
||||
|
||||
telemetry._safe_telemetry_operation(mock_operation)
|
||||
mock_operation.assert_not_called()
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
def test_telemetry_disable_with_multiple_instances():
|
||||
"""Test that multiple telemetry instances respect dynamically changed env vars."""
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
with patch("crewai.telemetry.telemetry.TracerProvider"):
|
||||
telemetry1 = Telemetry()
|
||||
assert telemetry1.ready is True
|
||||
|
||||
os.environ['CREWAI_DISABLE_TELEMETRY'] = 'true'
|
||||
|
||||
|
||||
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"
|
||||
|
||||
telemetry2 = Telemetry()
|
||||
assert telemetry2 is telemetry1
|
||||
assert telemetry2.ready is True
|
||||
|
||||
|
||||
mock_operation = MagicMock()
|
||||
telemetry2._safe_telemetry_operation(mock_operation)
|
||||
mock_operation.assert_not_called()
|
||||
|
||||
|
||||
@pytest.mark.telemetry
|
||||
def test_telemetry_otel_sdk_disabled_after_creation():
|
||||
"""Test that OTEL_SDK_DISABLED also works when set after singleton creation."""
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
with patch("crewai.telemetry.telemetry.TracerProvider"):
|
||||
telemetry = Telemetry()
|
||||
assert telemetry.ready is True
|
||||
|
||||
|
||||
mock_operation = MagicMock()
|
||||
telemetry._safe_telemetry_operation(mock_operation)
|
||||
mock_operation.assert_called_once()
|
||||
|
||||
|
||||
mock_operation.reset_mock()
|
||||
|
||||
os.environ['OTEL_SDK_DISABLED'] = 'true'
|
||||
|
||||
|
||||
os.environ["OTEL_SDK_DISABLED"] = "true"
|
||||
|
||||
telemetry._safe_telemetry_operation(mock_operation)
|
||||
mock_operation.assert_not_called()
|
||||
|
||||
@@ -1,237 +0,0 @@
|
||||
import pytest
|
||||
from crewai import Agent
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
|
||||
class MockTool(BaseTool):
|
||||
name: str = "mock_tool"
|
||||
description: str = "A mock tool for testing"
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Mock result for: {query}"
|
||||
|
||||
|
||||
def test_agent_with_crewai_tools_only():
|
||||
"""Test backward compatibility with CrewAI tools only."""
|
||||
mock_tool = MockTool()
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[mock_tool]
|
||||
)
|
||||
|
||||
assert len(agent.tools) == 1
|
||||
assert isinstance(agent.tools[0], BaseTool)
|
||||
|
||||
|
||||
def test_agent_with_raw_tools_only():
|
||||
"""Test agent with raw tool definitions only."""
|
||||
raw_tool = {
|
||||
"name": "hosted_search",
|
||||
"description": "Search the web using hosted search",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"query": {"type": "string", "description": "Search query"}
|
||||
},
|
||||
"required": ["query"]
|
||||
}
|
||||
}
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[raw_tool]
|
||||
)
|
||||
|
||||
assert len(agent.tools) == 1
|
||||
assert isinstance(agent.tools[0], dict)
|
||||
assert agent.tools[0]["name"] == "hosted_search"
|
||||
|
||||
|
||||
def test_agent_with_mixed_tools():
|
||||
"""Test agent with both CrewAI tools and raw tool definitions."""
|
||||
mock_tool = MockTool()
|
||||
raw_tool = {
|
||||
"name": "hosted_calculator",
|
||||
"description": "Hosted calculator tool",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"expression": {"type": "string"}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[mock_tool, raw_tool]
|
||||
)
|
||||
|
||||
assert len(agent.tools) == 2
|
||||
assert isinstance(agent.tools[0], BaseTool)
|
||||
assert isinstance(agent.tools[1], dict)
|
||||
|
||||
|
||||
def test_invalid_raw_tool_definition():
|
||||
"""Test error handling for invalid raw tool definitions."""
|
||||
invalid_tool = {"description": "Missing name field"}
|
||||
|
||||
with pytest.raises(ValueError, match="Raw tool definition must have a 'name' field"):
|
||||
Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[invalid_tool]
|
||||
)
|
||||
|
||||
|
||||
def test_parse_tools_with_mixed_types():
|
||||
"""Test parse_tools function with mixed tool types."""
|
||||
from crewai.utilities.agent_utils import parse_tools
|
||||
|
||||
mock_tool = MockTool()
|
||||
raw_tool = {
|
||||
"name": "hosted_tool",
|
||||
"description": "A hosted tool",
|
||||
"parameters": {"type": "object"}
|
||||
}
|
||||
|
||||
parsed = parse_tools([mock_tool, raw_tool])
|
||||
|
||||
assert len(parsed) == 2
|
||||
assert hasattr(parsed[0], 'name')
|
||||
assert parsed[0].name == "mock_tool"
|
||||
assert isinstance(parsed[1], dict)
|
||||
assert parsed[1]["name"] == "hosted_tool"
|
||||
|
||||
|
||||
def test_get_tool_names_with_mixed_types():
|
||||
"""Test get_tool_names function with mixed tool types."""
|
||||
from crewai.utilities.agent_utils import get_tool_names
|
||||
|
||||
mock_tool = MockTool()
|
||||
raw_tool = {"name": "hosted_tool", "description": "A hosted tool"}
|
||||
|
||||
names = get_tool_names([mock_tool, raw_tool])
|
||||
assert "mock_tool" in names
|
||||
assert "hosted_tool" in names
|
||||
|
||||
|
||||
def test_render_text_description_with_mixed_types():
|
||||
"""Test render_text_description_and_args function with mixed tool types."""
|
||||
from crewai.utilities.agent_utils import render_text_description_and_args
|
||||
|
||||
mock_tool = MockTool()
|
||||
raw_tool = {"name": "hosted_tool", "description": "A hosted tool"}
|
||||
|
||||
description = render_text_description_and_args([mock_tool, raw_tool])
|
||||
assert "A mock tool for testing" in description
|
||||
assert "A hosted tool" in description
|
||||
|
||||
|
||||
def test_agent_executor_with_mixed_tools():
|
||||
"""Test CrewAgentExecutor initialization with mixed tool types."""
|
||||
mock_tool = MockTool()
|
||||
raw_tool = {"name": "hosted_tool", "description": "A hosted tool"}
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[mock_tool, raw_tool]
|
||||
)
|
||||
|
||||
agent.create_agent_executor()
|
||||
|
||||
assert len(agent.agent_executor.tool_name_to_tool_map) == 2
|
||||
assert "mock_tool" in agent.agent_executor.tool_name_to_tool_map
|
||||
assert "hosted_tool" in agent.agent_executor.tool_name_to_tool_map
|
||||
|
||||
|
||||
def test_empty_tools_list():
|
||||
"""Test agent with empty tools list."""
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[]
|
||||
)
|
||||
|
||||
assert len(agent.tools) == 0
|
||||
|
||||
|
||||
def test_none_tools():
|
||||
"""Test agent with None tools."""
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=None
|
||||
)
|
||||
|
||||
assert agent.tools == []
|
||||
|
||||
|
||||
def test_raw_tool_without_description():
|
||||
"""Test raw tool definition without description field."""
|
||||
raw_tool = {"name": "minimal_tool"}
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[raw_tool]
|
||||
)
|
||||
|
||||
assert len(agent.tools) == 1
|
||||
assert isinstance(agent.tools[0], dict)
|
||||
assert agent.tools[0]["name"] == "minimal_tool"
|
||||
|
||||
|
||||
def test_complex_raw_tool_definition():
|
||||
"""Test complex raw tool definition with nested parameters."""
|
||||
raw_tool = {
|
||||
"name": "complex_search",
|
||||
"description": "Advanced search with multiple parameters",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"query": {
|
||||
"type": "string",
|
||||
"description": "Search query"
|
||||
},
|
||||
"filters": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"date_range": {"type": "string"},
|
||||
"category": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"limit": {
|
||||
"type": "integer",
|
||||
"minimum": 1,
|
||||
"maximum": 100,
|
||||
"default": 10
|
||||
}
|
||||
},
|
||||
"required": ["query"]
|
||||
}
|
||||
}
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
tools=[raw_tool]
|
||||
)
|
||||
|
||||
assert len(agent.tools) == 1
|
||||
assert isinstance(agent.tools[0], dict)
|
||||
assert agent.tools[0]["name"] == "complex_search"
|
||||
assert "parameters" in agent.tools[0]
|
||||
assert agent.tools[0]["parameters"]["type"] == "object"
|
||||
@@ -2,6 +2,7 @@ import os
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai.flow.flow import Flow, start
|
||||
from crewai.utilities.events.listeners.tracing.trace_listener import (
|
||||
@@ -321,6 +322,74 @@ class TestTraceListenerSetup:
|
||||
FlowExample()
|
||||
assert mock_listener_setup.call_count >= 1
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_trace_listener_ephemeral_batch(self):
|
||||
"""Test that trace listener properly handles ephemeral batches"""
|
||||
with (
|
||||
patch.dict(os.environ, {"CREWAI_TRACING_ENABLED": "true"}),
|
||||
patch(
|
||||
"crewai.utilities.events.listeners.tracing.trace_listener.TraceCollectionListener._check_authenticated",
|
||||
return_value=False,
|
||||
),
|
||||
):
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
llm="gpt-4o-mini",
|
||||
)
|
||||
task = Task(
|
||||
description="Say hello to the world",
|
||||
expected_output="hello world",
|
||||
agent=agent,
|
||||
)
|
||||
crew = Crew(agents=[agent], tasks=[task], tracing=True)
|
||||
|
||||
with patch.object(TraceBatchManager, "initialize_batch") as mock_initialize:
|
||||
crew.kickoff()
|
||||
|
||||
assert mock_initialize.call_count >= 1
|
||||
assert mock_initialize.call_args_list[0][1]["use_ephemeral"] is True
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_trace_listener_with_authenticated_user(self):
|
||||
"""Test that trace listener properly handles authenticated batches"""
|
||||
with (
|
||||
patch.dict(os.environ, {"CREWAI_TRACING_ENABLED": "true"}),
|
||||
patch(
|
||||
"crewai.utilities.events.listeners.tracing.trace_batch_manager.PlusAPI"
|
||||
) as mock_plus_api_class,
|
||||
):
|
||||
mock_plus_api_instance = MagicMock()
|
||||
mock_plus_api_class.return_value = mock_plus_api_instance
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
llm="gpt-4o-mini",
|
||||
)
|
||||
task = Task(
|
||||
description="Say hello to the world",
|
||||
expected_output="hello world",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
with (
|
||||
patch.object(TraceBatchManager, "initialize_batch") as mock_initialize,
|
||||
patch.object(
|
||||
TraceBatchManager, "finalize_batch"
|
||||
) as mock_finalize_backend_batch,
|
||||
):
|
||||
crew = Crew(agents=[agent], tasks=[task], tracing=True)
|
||||
crew.kickoff()
|
||||
|
||||
mock_plus_api_class.assert_called_with(api_key="mock_token_12345")
|
||||
|
||||
assert mock_initialize.call_count >= 1
|
||||
mock_finalize_backend_batch.assert_called_with()
|
||||
assert mock_finalize_backend_batch.call_count >= 1
|
||||

# Helper method to ensure cleanup
def teardown_method(self):
"""Cleanup after each test method"""