mirror of https://github.com/crewAIInc/crewAI.git
synced 2026-01-08 15:48:29 +00:00

Compare commits
4 commits: devin/1742 ... devin/1742
| Author | SHA1 | Date |
|---|---|---|
| | d58a24950c | |
| | 15d59cfc34 | |
| | fd18bdfabb | |
| | c77ac31db6 | |
@@ -1,187 +0,0 @@
---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---

<Update label="2024-03-17" description="v0.108.0">
**Features**
- Converted tabs to spaces in the `crew.py` template
- Enhanced LLM streaming response handling and the event system
- Included `model_name`
- Enhanced event listener with rich visualization and improved logging
- Added fingerprints

**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property

**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading crewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
</Update>

<Update label="2024-03-10" description="v0.105.0">
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling

**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls

**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings
</Update>

<Update label="2024-02-12" description="v0.102.0">
**Core Improvements & Fixes**
- Enhanced LLM support: improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & agent stability: fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & storage fixes: fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the Crew class
- Training & execution reliability: fixed broken training and interpolation issues with dict and list input types

**New Features & Enhancements**
- Advanced knowledge management: improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded logging & observability: added JSON format support for logging and integrated MLflow tracing documentation
- Data handling improvements: updated excel_knowledge_source.py to process multi-tab files
- General performance & codebase clean-up: streamlined enterprise code alignment and resolved linting issues
- Added new tool: `QdrantVectorSearchTool`

**Documentation & Guides**
- Updated AI & memory docs: improved Bedrock, Google AI, and long-term memory documentation
- Task & workflow clarity: added a "Human Input" row to Task Attributes, a Langfuse guide, and FileWriterTool documentation
- Fixed various typos and formatting issues
</Update>

<Update label="2024-01-28" description="v0.100.0">
**Features**
- Added Composio docs
- Added SageMaker as an LLM provider

**Fixes**
- Fixed overall LLM connection issues
- Used safe accessors on training
- Added a version check to crew_chat.py

**Documentation**
- New docs for crewai chat
- Improved formatting and clarity in CLI and Composio Tool docs
</Update>

<Update label="2024-01-20" description="v0.98.0">
**Features**
- Conversation crew v1
- Added unique ID to flow states
- Added @persist decorator with FlowPersistence interface

**Integrations**
- Added SambaNova integration
- Added NVIDIA NIM provider to the CLI
- Introduced VoyageAI

**Fixes**
- Fixed API key behavior and entity handling in the Mem0 integration
- Fixed core invoke loop logic and relevant tests
- Made tool inputs actual objects instead of strings
- Added important missing parts to creating tools
- Dropped the litellm version to prevent a Windows issue
- Fixed before-kickoff handling when inputs are None
- Fixed typos, a nested Pydantic model issue, and docling issues
</Update>

<Update label="2024-01-04" description="v0.95.0">
**New Features**
- Multimodal abilities for Crew
- Programmatic guardrails
- HITL multiple rounds
- Gemini 2.0 support
- CrewAI Flows improvements
- Added workflow permissions
- Added support for Langfuse with litellm
- Portkey integration with CrewAI
- Added interpolate_only method and improved error handling
- Docling support
- Weaviate support

**Fixes**
- output_file not respecting system path
- Disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for output_file in Task
- Handle coworker role name case/whitespace properly
- Added tiktoken as an explicit dependency and documented the Rust requirement
- Include agent knowledge in the planning process
- Changed storage initialization to None for KnowledgeStorage
- Fixed optional storage checks
- Include event emitter in flows
- Docstring, error handling, and type hints improvements
- Suppressed UserWarnings from litellm Pydantic issues
</Update>

<Update label="2023-12-05" description="v0.86.0">
**Changes**
- Removed all references to pipeline and pipeline router
- Added Nvidia NIM as a provider in Custom LLM
- Added knowledge demo and improved knowledge docs
- Added HITL multiple rounds of follow-up
- New docs about YAML crew with decorators
- Simplified template crew
</Update>

<Update label="2023-12-04" description="v0.85.0">
**Features**
- Added knowledge at the agent level
- Removed LangChain
- Improved typed task outputs
- Log in to Tool Repository on crewai login

**Fixes**
- Fixed issue with result-as-answer not properly exiting the LLM loop
- Fixed missing key name when running with the Ollama provider
- Fixed a spelling issue

**Documentation**
- Updated readme for running mypy
- Added knowledge to mint.json
- Updated GitHub Actions
- Updated Agents docs to include two approaches for creating an agent
- Improvements to LLM configuration and usage
</Update>

<Update label="2023-11-25" description="v0.83.0">
**New Features**
- New before_kickoff and after_kickoff crew callbacks
- Support for pre-seeding agents with knowledge
- Added support for retrieving user preferences and memories using Mem0

**Fixes**
- Fixed async execution
- Upgraded Chroma and adjusted the embedder function generator
- Updated CLI Watson supported models and docs
- Reduced level for Bandit
- Fixed all tests

**Documentation**
- Updated docs
</Update>

<Update label="2023-11-13" description="v0.80.0">
**Fixes**
- Fixed tokens callback replacement bug
- Fixed step callback issue
- Added cached prompt tokens info to usage metrics
- Fixed crew_train_success test
</Update>
@@ -150,8 +150,6 @@ result = crew.kickoff(

Here are examples of how to use different types of knowledge sources:

Note: Please ensure that you create the ./knowledge folder. All source files (e.g., .txt, .pdf, .xlsx, .json) should be placed in this folder for centralized management.

### Text File Knowledge Source
```python
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
```
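The snippet above is cut off at the import; a minimal sketch of how such a source is typically wired in, assuming `agents` and `tasks` are already defined (file name and the `knowledge_sources` parameter follow the pattern used elsewhere in these docs):

```python
from crewai import Crew
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource

# Paths are resolved relative to the ./knowledge folder mentioned above
text_source = TextFileKnowledgeSource(file_paths=["document.txt"])

# Attach the source when assembling the crew
crew = Crew(
    agents=agents,
    tasks=tasks,
    knowledge_sources=[text_source],
)
```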
@@ -462,12 +460,12 @@ class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
            data = response.json()
            articles = data.get('results', [])

-           formatted_data = self.validate_content(articles)
+           formatted_data = self._format_articles(articles)
            return {self.api_endpoint: formatted_data}
        except Exception as e:
            raise ValueError(f"Failed to fetch space news: {str(e)}")

-   def validate_content(self, articles: list) -> str:
+   def _format_articles(self, articles: list) -> str:
        """Format articles into readable text."""
        formatted = "Space News Articles:\n\n"
        for article in articles:
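The loop body is cut off by the hunk boundary; a hedged sketch of how it might continue (field names assumed from the Spaceflight News API results):

```python
        for article in articles:
            formatted += (
                f"Title: {article['title']}\n"
                f"Published: {article['published_at']}\n"
                f"Summary: {article['summary']}\n"
                f"URL: {article['url']}\n\n"
            )
        return formatted
```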
@@ -59,7 +59,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
    goal: Conduct comprehensive research and analysis
    backstory: A dedicated research professional with years of experience
    verbose: true
    llm: openai/gpt-4o-mini # your model here
    # (see provider configuration examples below for more)
```
@@ -111,7 +111,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
## Provider Configuration Examples

CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.

<AccordionGroup>
@@ -121,7 +121,7 @@ In this section, you'll find detailed examples that help you select, configure,
```toml Code
# Required
OPENAI_API_KEY=sk-...

# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
@@ -158,11 +158,7 @@ In this section, you'll find detailed examples that help you select, configure,

<Accordion title="Anthropic">
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...

# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```

Example usage in your CrewAI project:
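The example itself is cut off by the hunk boundary; a hedged completion (the model slug is an assumption, any `anthropic/...` model id works):

```python Code
from crewai import LLM

llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    temperature=0.7,
)
```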
@@ -226,7 +222,7 @@ In this section, you'll find detailed examples that help you select, configure,
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>

# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
@@ -254,42 +250,8 @@ In this section, you'll find detailed examples that help you select, configure,
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```

Before using Amazon Bedrock, make sure you have boto3 installed in your environment.

[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.

| Model | Context Window | Best For |
|-------------------------|--------------------|--------------------------------------------------------------------|
| Amazon Nova Pro | Up to 300k tokens | High-performance model balancing accuracy, speed, and cost-effectiveness across diverse tasks. |
| Amazon Nova Micro | Up to 128k tokens | High-performance, cost-effective text-only model optimized for lowest-latency responses. |
| Amazon Nova Lite | Up to 300k tokens | High-performance, affordable multimodal processing for images, video, and text with real-time capabilities. |
| Claude 3.7 Sonnet | Up to 128k tokens | High-performance model, best for complex reasoning, coding, and AI agents. |
| Claude 3.5 Sonnet v2 | Up to 200k tokens | State-of-the-art model specialized in software engineering, agentic capabilities, and computer interaction at optimized cost. |
| Claude 3.5 Sonnet | Up to 200k tokens | High-performance model delivering superior intelligence and reasoning across diverse tasks with optimal speed-cost balance. |
| Claude 3.5 Haiku | Up to 200k tokens | Fast, compact multimodal model optimized for quick responses and seamless human-like interactions. |
| Claude 3 Sonnet | Up to 200k tokens | Multimodal model balancing intelligence and speed for high-volume deployments. |
| Claude 3 Haiku | Up to 200k tokens | Compact, high-speed multimodal model optimized for quick responses and natural conversational interactions. |
| Claude 3 Opus | Up to 200k tokens | Most advanced multimodal model excelling at complex tasks with human-like reasoning and superior contextual understanding. |
| Claude 2.1 | Up to 200k tokens | Enhanced version with expanded context window, improved reliability, and reduced hallucinations for long-form and RAG applications. |
| Claude | Up to 100k tokens | Versatile model excelling in sophisticated dialogue, creative content, and precise instruction following. |
| Claude Instant | Up to 100k tokens | Fast, cost-effective model for everyday tasks like dialogue, analysis, summarization, and document Q&A. |
| Llama 3.1 405B Instruct | Up to 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| Llama 3.1 70B Instruct | Up to 128k tokens | Powers complex conversations with superior contextual understanding, reasoning, and text generation. |
| Llama 3.1 8B Instruct | Up to 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| Llama 3 70B Instruct | Up to 8k tokens | Powers complex conversations with superior contextual understanding, reasoning, and text generation. |
| Llama 3 8B Instruct | Up to 8k tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| Titan Text G1 - Lite | Up to 4k tokens | Lightweight, cost-effective model optimized for English tasks and fine-tuning, with a focus on summarization and content generation. |
| Titan Text G1 - Express | Up to 8k tokens | Versatile model for general language tasks, chat, and RAG applications, with support for English and 100+ languages. |
| Cohere Command | Up to 4k tokens | Model specialized in following user commands and delivering practical enterprise solutions. |
| Jurassic-2 Mid | Up to 8,191 tokens | Cost-effective model balancing quality and affordability for diverse language tasks like Q&A, summarization, and content generation. |
| Jurassic-2 Ultra | Up to 8,191 tokens | Model for advanced text generation and comprehension, excelling in complex tasks like analysis and content creation. |
| Jamba-Instruct | Up to 256k tokens | Model with an extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MoE LLM that follows instructions, completes requests, and generates creative text. |

</Accordion>
<Accordion title="Amazon SageMaker">
|
||||
```toml Code
|
||||
AWS_ACCESS_KEY_ID=<your-access-key>
|
||||
@@ -406,46 +368,6 @@ In this section, you'll find detailed examples that help you select, configure,
|
||||
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
|
||||
|
||||
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
|
||||
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
|
||||
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
|
||||
|
||||
Here is a step-by-step guide to setting up a local NVIDIA NIM model:
|
||||
|
||||
1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
|
||||
|
||||
2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
|
||||
|
||||
3. Configure your crewai local models:
|
||||
|
||||
```python Code
|
||||
from crewai.llm import LLM
|
||||
|
||||
local_nvidia_nim_llm = LLM(
|
||||
model="openai/meta/llama-3.1-8b-instruct", # it's an openai-api compatible model
|
||||
base_url="http://localhost:8000/v1",
|
||||
api_key="<your_api_key|any text if you have not configured it>", # api_key is required, but you can use any text
|
||||
)
|
||||
|
||||
# Then you can use it in your crew:
|
||||
|
||||
@CrewBase
|
||||
class MyCrew():
|
||||
# ...
|
||||
|
||||
@agent
|
||||
def researcher(self) -> Agent:
|
||||
return Agent(
|
||||
config=self.agents_config['researcher'],
|
||||
llm=local_nvidia_nim_llm
|
||||
)
|
||||
|
||||
# ...
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Groq">
|
||||
Set the following environment variables in your `.env` file:
|
||||
|
||||
@@ -474,7 +396,7 @@ In this section, you'll find detailed examples that help you select, configure,
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>

# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
@@ -491,7 +413,7 @@ In this section, you'll find detailed examples that help you select, configure,

<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
-2. Run a model: `ollama run llama3`
+2. Run a model: `ollama run llama2`
3. Configure:

```python Code
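The configuration code is cut off by the hunk boundary; a minimal sketch, assuming a local Ollama server on its default port and the model pulled in step 2:

```python Code
from crewai import LLM

llm = LLM(
    model="ollama/llama3",
    base_url="http://localhost:11434",
)
```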
@@ -600,7 +522,7 @@ In this section, you'll find detailed examples that help you select, configure,
```toml Code
OPENROUTER_API_KEY=<your-api-key>
```

Example usage in your CrewAI project:
```python Code
llm = LLM(
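The example is truncated at `llm = LLM(`; a hedged completion (the model slug is illustrative, OpenRouter ids follow the `openrouter/<provider>/<model>` pattern):

```python Code
llm = LLM(
    model="openrouter/deepseek/deepseek-r1",
    base_url="https://openrouter.ai/api/v1",
    api_key=OPENROUTER_API_KEY,
)
```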
@@ -723,7 +645,7 @@ Learn how to get the most out of your LLM configuration:
- Small tasks (up to 4K tokens): Standard models
- Medium tasks (4K-32K tokens): Enhanced models
- Large tasks (over 32K tokens): Large-context models

```python
# Configure model with appropriate settings
llm = LLM(
@@ -760,11 +682,11 @@ Learn how to get the most out of your LLM configuration:
<Warning>
Most authentication issues can be resolved by checking API key format and environment variable names.
</Warning>

```bash
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
```
@@ -773,11 +695,11 @@ Learn how to get the most out of your LLM configuration:
<Check>
Always include the provider prefix in model names
</Check>

```python
# Correct
llm = LLM(model="openai/gpt-4")

# Incorrect
llm = LLM(model="gpt-4")
```
@@ -787,9 +709,4 @@ Learn how to get the most out of your LLM configuration:
Use larger context models for extensive tasks
</Tip>

```python
# Large context model
llm = LLM(model="openai/gpt-4o")  # 128K tokens
```
</Tab>
</Tabs>
@@ -60,8 +60,7 @@ my_crew = Crew(
```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
-from crewai.memory.storage.rag_storage import RAGStorage
-from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
+from crewai.memory.storage import LTMSQLiteStorage, RAGStorage
from typing import List, Optional

# Assemble your crew with memory capabilities
@@ -120,7 +119,7 @@ Example using environment variables:
import os
from crewai import Crew
from crewai.memory import LongTermMemory
-from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
+from crewai.memory.storage import LTMSQLiteStorage

# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
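A hedged sketch of how `storage_path` typically feeds into the memory configuration (the `db_path` parameter and file name follow the pattern used elsewhere in these docs):

```python Code
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path=f"{storage_path}/memory.db")
    ),
)
```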
@@ -149,7 +148,7 @@ crew = Crew(memory=True) # Uses default storage locations
```python
from crewai import Crew
from crewai.memory import LongTermMemory
-from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
+from crewai.memory.storage import LTMSQLiteStorage

# Configure custom storage paths
crew = Crew(
223 docs/docs.json (deleted)
@@ -1,223 +0,0 @@
{
  "$schema": "https://mintlify.com/docs.json",
  "theme": "palm",
  "name": "CrewAI",
  "colors": { "primary": "#EB6658", "light": "#F3A78B", "dark": "#C94C3C" },
  "favicon": "favicon.svg",
  "navigation": {
    "tabs": [
      {
        "tab": "Get Started",
        "groups": [
          { "group": "Get Started", "pages": ["introduction", "installation", "quickstart", "changelog"] },
          {
            "group": "Guides",
            "pages": [
              { "group": "Concepts", "pages": ["guides/concepts/evaluating-use-cases"] },
              { "group": "Agents", "pages": ["guides/agents/crafting-effective-agents"] },
              { "group": "Crews", "pages": ["guides/crews/first-crew"] },
              { "group": "Flows", "pages": ["guides/flows/first-flow", "guides/flows/mastering-flow-state"] },
              { "group": "Advanced", "pages": ["guides/advanced/customizing-prompts", "guides/advanced/fingerprinting"] }
            ]
          },
          {
            "group": "Core Concepts",
            "pages": ["concepts/agents", "concepts/tasks", "concepts/crews", "concepts/flows", "concepts/knowledge", "concepts/llms", "concepts/processes", "concepts/collaboration", "concepts/training", "concepts/memory", "concepts/planning", "concepts/testing", "concepts/cli", "concepts/tools", "concepts/event-listener", "concepts/langchain-tools", "concepts/llamaindex-tools"]
          },
          {
            "group": "How to Guides",
            "pages": ["how-to/create-custom-tools", "how-to/sequential-process", "how-to/hierarchical-process", "how-to/custom-manager-agent", "how-to/llm-connections", "how-to/customizing-agents", "how-to/multimodal-agents", "how-to/coding-agents", "how-to/force-tool-output-as-result", "how-to/human-input-on-execution", "how-to/kickoff-async", "how-to/kickoff-for-each", "how-to/replay-tasks-from-latest-crew-kickoff", "how-to/conditional-tasks", "how-to/agentops-observability", "how-to/langtrace-observability", "how-to/mlflow-observability", "how-to/openlit-observability", "how-to/portkey-observability", "how-to/langfuse-observability"]
          },
          {
            "group": "Tools",
            "pages": ["tools/aimindtool", "tools/apifyactorstool", "tools/bravesearchtool", "tools/browserbaseloadtool", "tools/codedocssearchtool", "tools/codeinterpretertool", "tools/composiotool", "tools/csvsearchtool", "tools/dalletool", "tools/directorysearchtool", "tools/directoryreadtool", "tools/docxsearchtool", "tools/exasearchtool", "tools/filereadtool", "tools/filewritetool", "tools/firecrawlcrawlwebsitetool", "tools/firecrawlscrapewebsitetool", "tools/firecrawlsearchtool", "tools/githubsearchtool", "tools/hyperbrowserloadtool", "tools/linkupsearchtool", "tools/llamaindextool", "tools/serperdevtool", "tools/s3readertool", "tools/s3writertool", "tools/scrapegraphscrapetool", "tools/scrapeelementfromwebsitetool", "tools/jsonsearchtool", "tools/mdxsearchtool", "tools/mysqltool", "tools/multiontool", "tools/nl2sqltool", "tools/patronustools", "tools/pdfsearchtool", "tools/pgsearchtool", "tools/qdrantvectorsearchtool", "tools/ragtool", "tools/scrapewebsitetool", "tools/scrapflyscrapetool", "tools/seleniumscrapingtool", "tools/snowflakesearchtool", "tools/spidertool", "tools/txtsearchtool", "tools/visiontool", "tools/weaviatevectorsearchtool", "tools/websitesearchtool", "tools/xmlsearchtool", "tools/youtubechannelsearchtool", "tools/youtubevideosearchtool"]
          },
          { "group": "Telemetry", "pages": ["telemetry"] }
        ]
      },
      {
        "tab": "Examples",
        "groups": [ { "group": "Examples", "pages": ["examples/example"] } ]
      }
    ],
    "global": {
      "anchors": [ { "anchor": "Community", "href": "https://community.crewai.com", "icon": "discourse" } ]
    }
  },
  "logo": { "light": "crew_only_logo.png", "dark": "crew_only_logo.png" },
  "appearance": { "default": "dark", "strict": false },
  "navbar": { "primary": { "type": "github", "href": "https://github.com/crewAIInc/crewAI" } },
  "search": { "prompt": "Search CrewAI docs" },
  "seo": { "indexing": "navigable" },
  "footer": {
    "socials": {
      "website": "https://crewai.com",
      "x": "https://x.com/crewAIInc",
      "github": "https://github.com/crewAIInc/crewAI",
      "linkedin": "https://www.linkedin.com/company/crewai-inc",
      "youtube": "https://youtube.com/@crewAIInc",
      "reddit": "https://www.reddit.com/r/crewAIInc/"
    }
  }
}
@@ -1,5 +1,4 @@
----
-title: Customizing Prompts
+---title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
---
225 docs/mint.json (new file)
@@ -0,0 +1,225 @@
{
  "name": "CrewAI",
  "theme": "venus",
  "logo": { "dark": "crew_only_logo.png", "light": "crew_only_logo.png" },
  "favicon": "favicon.svg",
  "colors": {
    "primary": "#EB6658",
    "light": "#F3A78B",
    "dark": "#C94C3C",
    "anchors": { "from": "#737373", "to": "#EB6658" }
  },
  "seo": { "indexHiddenPages": false },
  "modeToggle": { "default": "dark", "isHidden": false },
  "feedback": { "suggestEdit": true, "raiseIssue": true, "thumbsRating": true },
  "topbarCtaButton": { "type": "github", "url": "https://github.com/crewAIInc/crewAI" },
  "primaryTab": { "name": "Get Started" },
  "tabs": [ { "name": "Examples", "url": "examples" } ],
  "anchors": [
    { "name": "Community", "icon": "discourse", "url": "https://community.crewai.com" },
    { "name": "Changelog", "icon": "timeline", "url": "https://github.com/crewAIInc/crewAI/releases" }
  ],
  "navigation": [
    { "group": "Get Started", "pages": ["introduction", "installation", "quickstart"] },
    {
      "group": "Guides",
      "pages": [
        { "group": "Concepts", "pages": ["guides/concepts/evaluating-use-cases"] },
        { "group": "Agents", "pages": ["guides/agents/crafting-effective-agents"] },
        { "group": "Crews", "pages": ["guides/crews/first-crew"] },
        { "group": "Flows", "pages": ["guides/flows/first-flow", "guides/flows/mastering-flow-state"] },
        { "group": "Advanced", "pages": ["guides/advanced/customizing-prompts", "guides/advanced/fingerprinting"] }
      ]
    },
    {
      "group": "Core Concepts",
      "pages": ["concepts/agents", "concepts/tasks", "concepts/crews", "concepts/flows", "concepts/knowledge", "concepts/llms", "concepts/processes", "concepts/collaboration", "concepts/training", "concepts/memory", "concepts/planning", "concepts/testing", "concepts/cli", "concepts/tools", "concepts/event-listener", "concepts/langchain-tools", "concepts/llamaindex-tools"]
    },
    {
      "group": "How to Guides",
      "pages": ["how-to/create-custom-tools", "how-to/sequential-process", "how-to/hierarchical-process", "how-to/custom-manager-agent", "how-to/llm-connections", "how-to/customizing-agents", "how-to/multimodal-agents", "how-to/coding-agents", "how-to/force-tool-output-as-result", "how-to/human-input-on-execution", "how-to/kickoff-async", "how-to/kickoff-for-each", "how-to/replay-tasks-from-latest-crew-kickoff", "how-to/conditional-tasks", "how-to/agentops-observability", "how-to/langtrace-observability", "how-to/mlflow-observability", "how-to/openlit-observability", "how-to/portkey-observability", "how-to/langfuse-observability"]
    },
    { "group": "Examples", "pages": ["examples/example"] },
    {
      "group": "Tools",
      "pages": ["tools/aimindtool", "tools/apifyactorstool", "tools/bravesearchtool", "tools/browserbaseloadtool", "tools/codedocssearchtool", "tools/codeinterpretertool", "tools/composiotool", "tools/csvsearchtool", "tools/dalletool", "tools/directorysearchtool", "tools/directoryreadtool", "tools/docxsearchtool", "tools/exasearchtool", "tools/filereadtool", "tools/filewritetool", "tools/firecrawlcrawlwebsitetool", "tools/firecrawlscrapewebsitetool", "tools/firecrawlsearchtool", "tools/githubsearchtool", "tools/hyperbrowserloadtool", "tools/linkupsearchtool", "tools/llamaindextool", "tools/serperdevtool", "tools/s3readertool", "tools/s3writertool", "tools/scrapegraphscrapetool", "tools/scrapeelementfromwebsitetool", "tools/jsonsearchtool", "tools/mdxsearchtool", "tools/mysqltool", "tools/multiontool", "tools/nl2sqltool", "tools/patronustools", "tools/pdfsearchtool", "tools/pgsearchtool", "tools/qdrantvectorsearchtool", "tools/ragtool", "tools/scrapewebsitetool", "tools/scrapflyscrapetool", "tools/seleniumscrapingtool", "tools/snowflakesearchtool", "tools/spidertool", "tools/txtsearchtool", "tools/visiontool", "tools/weaviatevectorsearchtool", "tools/websitesearchtool", "tools/xmlsearchtool", "tools/youtubechannelsearchtool", "tools/youtubevideosearchtool"]
    },
    { "group": "Telemetry", "pages": ["telemetry"] }
  ],
  "search": { "prompt": "Search CrewAI docs" },
  "footerSocials": {
    "website": "https://crewai.com",
    "x": "https://x.com/crewAIInc",
    "github": "https://github.com/crewAIInc/crewAI",
    "linkedin": "https://www.linkedin.com/company/crewai-inc",
    "youtube": "https://youtube.com/@crewAIInc"
  }
}
@@ -300,7 +300,7 @@ email_summarizer:
```

<Tip>
-Note how we use the same name for the task in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
+Note how we use the same name for the agent in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
</Tip>

```yaml tasks.yaml
@@ -7,10 +7,8 @@ icon: file-code
# `JSONSearchTool`

<Note>
-The JSONSearchTool is currently in an experimental phase. This means the tool
-is under active development, and users might encounter unexpected behavior or
-changes. We highly encourage feedback on any issues or suggestions for
-improvements.
+The JSONSearchTool is currently in an experimental phase. This means the tool is under active development, and users might encounter unexpected behavior or changes.
+We highly encourage feedback on any issues or suggestions for improvements.
</Note>

## Description
@@ -62,7 +60,7 @@ tool = JSONSearchTool(
            # stream=true,
        },
    },
-   "embedding_model": {
+   "embedder": {
        "provider": "google", # or openai, ollama, ...
        "config": {
            "model": "models/embedding-001",
@@ -72,4 +70,4 @@ tool = JSONSearchTool(
        },
    }
)
```
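For context, a minimal sketch of instantiating the tool without the custom model configuration shown above (the `json_path` keyword follows the tool's basic usage pattern):

```python Code
from crewai_tools import JSONSearchTool

# Search across any JSON content the agent encounters
tool = JSONSearchTool()

# Or restrict the search to a specific JSON file
tool = JSONSearchTool(json_path='./path/to/your/file.json')
```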
@@ -8,8 +8,8 @@ icon: vector-square

## Description

The `RagTool` is designed to answer questions by leveraging the power of Retrieval-Augmented Generation (RAG) through EmbedChain.
It provides a dynamic knowledge base that can be queried to retrieve relevant information from various data sources.
This tool is particularly useful for applications that require access to a vast array of information and need to provide contextually relevant answers.

## Example
@@ -138,7 +138,7 @@ config = {
        "model": "gpt-4",
    }
},
-"embedding_model": {
+"embedder": {
    "provider": "openai",
    "config": {
        "model": "text-embedding-ada-002"
@@ -151,4 +151,4 @@ rag_tool = RagTool(config=config, summarize=True)

## Conclusion

The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.
@@ -17,9 +17,9 @@ dependencies = [
    "pdfplumber>=0.11.4",
    "regex>=2024.9.11",
    # Telemetry and Monitoring
-   "opentelemetry-api>=1.30.0",
-   "opentelemetry-sdk>=1.30.0",
-   "opentelemetry-exporter-otlp-proto-http>=1.30.0",
+   "opentelemetry-api>=1.22.0",
+   "opentelemetry-sdk>=1.22.0",
+   "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    # Data Handling
    "chromadb>=0.5.23",
    "openpyxl>=3.1.5",
@@ -124,9 +124,9 @@ class CrewAgentParser:
        )

    def _extract_thought(self, text: str) -> str:
-       thought_index = text.find("\nAction")
+       thought_index = text.find("\n\nAction")
        if thought_index == -1:
-           thought_index = text.find("\nFinal Answer")
+           thought_index = text.find("\n\nFinal Answer")
        if thought_index == -1:
            return ""
        thought = text[:thought_index].strip()
@@ -136,7 +136,7 @@ class CrewAgentParser:

    def _clean_action(self, text: str) -> str:
        """Clean action string by removing non-essential formatting characters."""
-       return text.strip().strip("*").strip()
+       return re.sub(r"^\s*\*+\s*|\s*\*+\s*$", "", text).strip()

    def _safe_repair_json(self, tool_input: str) -> str:
        UNABLE_TO_REPAIR_JSON_RESULTS = ['""', "{}"]
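A standalone repro (the text value is hypothetical) showing what the `"\n\n"` variant of `_extract_thought` extracts from a typical agent response:

```python
text = 'Thought: I should look this up\n\nAction: search\nAction Input: {"query": "dogs"}'

idx = text.find("\n\nAction")
print(text[:idx].strip())  # -> Thought: I should look this up
```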
@@ -1,5 +1,4 @@
import subprocess
-from functools import lru_cache


class Repository:
@@ -36,7 +35,6 @@ class Repository:
            encoding="utf-8",
        ).strip()

-   @lru_cache(maxsize=None)
    def is_git_repo(self) -> bool:
        """Check if the current directory is a git repository."""
        try:
@@ -10,7 +10,6 @@ dependencies = [

[project.scripts]
-kickoff = "{{folder_name}}.main:kickoff"
run_crew = "{{folder_name}}.main:kickoff"
plot = "{{folder_name}}.main:plot"

[build-system]
@@ -1,4 +1,5 @@
import asyncio
+import concurrent.futures
import json
import re
import uuid
@@ -6,7 +7,7 @@ import warnings
from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
-from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
+from typing import Any, Callable, Dict, List, Optional, Sequence, Set, Tuple, Union

from pydantic import (
    UUID4,
@@ -707,6 +708,65 @@ class Crew(BaseModel):
        self.usage_metrics = total_usage_metrics
        self._task_output_handler.reset()
        return results
    def kickoff_for_each_parallel(self, inputs: Sequence[Dict[str, Any]], max_workers: Optional[int] = None) -> List[CrewOutput]:
        """Executes the Crew's workflow for each input in the list in parallel using ThreadPoolExecutor.

        Args:
            inputs: Sequence of input dictionaries to be passed to each crew execution.
            max_workers: Maximum number of worker threads to use. If None, uses the default
                ThreadPoolExecutor behavior (typically min(32, os.cpu_count() + 4)).

        Returns:
            List of CrewOutput objects, one for each input.
        """
        from concurrent.futures import ThreadPoolExecutor, as_completed

        if not isinstance(inputs, (list, tuple)):
            raise TypeError(f"Inputs must be a list of dictionaries. Received {type(inputs).__name__} instead.")

        if not inputs:
            return []

        results: List[CrewOutput] = []

        # Initialize the parent crew's usage metrics
        total_usage_metrics = UsageMetrics()

        # Create a copy of the crew for each input to avoid state conflicts
        crew_copies = [self.copy() for _ in inputs]

        # Execute each crew in parallel
        try:
            with ThreadPoolExecutor(max_workers=max_workers) as executor:
                # Submit all tasks to the executor
                future_to_crew = {
                    executor.submit(crew_copies[i].kickoff, inputs[i]): i
                    for i in range(len(inputs))
                }

                # Process results as they complete
                for future in as_completed(future_to_crew):
                    crew_index = future_to_crew[future]
                    try:
                        output = future.result()
                        results.append(output)

                        # Aggregate usage metrics
                        if crew_copies[crew_index].usage_metrics:
                            total_usage_metrics.add_usage_metrics(crew_copies[crew_index].usage_metrics)
                    except Exception as exc:
                        # Re-raise the exception to maintain consistent behavior with kickoff_for_each
                        raise exc
        finally:
            # Clean up to assist garbage collection
            crew_copies.clear()

        # Set the aggregated metrics on the parent crew
        self.usage_metrics = total_usage_metrics
        self._task_output_handler.reset()

        return results

    def _handle_crew_planning(self):
        """Handles the Crew planning."""
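A hedged usage sketch for the new method (assumes an already assembled `crew` whose task templates interpolate `{topic}`). Note that because results are collected with `as_completed`, output order may not match input order:

```python
inputs = [{"topic": "dogs"}, {"topic": "cats"}, {"topic": "birds"}]

outputs = crew.kickoff_for_each_parallel(inputs, max_workers=2)
for output in outputs:
    print(output.raw)
```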
@@ -114,60 +114,6 @@ LLM_CONTEXT_WINDOW_SIZES = {
    "Llama-3.2-11B-Vision-Instruct": 16384,
    "Meta-Llama-3.2-3B-Instruct": 4096,
    "Meta-Llama-3.2-1B-Instruct": 16384,
-   # bedrock
-   "us.amazon.nova-pro-v1:0": 300000,
-   "us.amazon.nova-micro-v1:0": 128000,
-   "us.amazon.nova-lite-v1:0": 300000,
-   "us.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
-   "us.anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
-   "us.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
-   "us.anthropic.claude-3-7-sonnet-20250219-v1:0": 200000,
-   "us.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
-   "us.anthropic.claude-3-opus-20240229-v1:0": 200000,
-   "us.anthropic.claude-3-haiku-20240307-v1:0": 200000,
-   "us.meta.llama3-2-11b-instruct-v1:0": 128000,
-   "us.meta.llama3-2-3b-instruct-v1:0": 131000,
-   "us.meta.llama3-2-90b-instruct-v1:0": 128000,
-   "us.meta.llama3-2-1b-instruct-v1:0": 131000,
-   "us.meta.llama3-1-8b-instruct-v1:0": 128000,
-   "us.meta.llama3-1-70b-instruct-v1:0": 128000,
-   "us.meta.llama3-3-70b-instruct-v1:0": 128000,
-   "us.meta.llama3-1-405b-instruct-v1:0": 128000,
-   "eu.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
-   "eu.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
-   "eu.anthropic.claude-3-haiku-20240307-v1:0": 200000,
-   "eu.meta.llama3-2-3b-instruct-v1:0": 131000,
-   "eu.meta.llama3-2-1b-instruct-v1:0": 131000,
-   "apac.anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
-   "apac.anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
-   "apac.anthropic.claude-3-sonnet-20240229-v1:0": 200000,
-   "apac.anthropic.claude-3-haiku-20240307-v1:0": 200000,
-   "amazon.nova-pro-v1:0": 300000,
-   "amazon.nova-micro-v1:0": 128000,
-   "amazon.nova-lite-v1:0": 300000,
-   "anthropic.claude-3-5-sonnet-20240620-v1:0": 200000,
-   "anthropic.claude-3-5-haiku-20241022-v1:0": 200000,
-   "anthropic.claude-3-5-sonnet-20241022-v2:0": 200000,
-   "anthropic.claude-3-7-sonnet-20250219-v1:0": 200000,
-   "anthropic.claude-3-sonnet-20240229-v1:0": 200000,
-   "anthropic.claude-3-opus-20240229-v1:0": 200000,
-   "anthropic.claude-3-haiku-20240307-v1:0": 200000,
-   "anthropic.claude-v2:1": 200000,
-   "anthropic.claude-v2": 100000,
-   "anthropic.claude-instant-v1": 100000,
-   "meta.llama3-1-405b-instruct-v1:0": 128000,
-   "meta.llama3-1-70b-instruct-v1:0": 128000,
-   "meta.llama3-1-8b-instruct-v1:0": 128000,
-   "meta.llama3-70b-instruct-v1:0": 8000,
-   "meta.llama3-8b-instruct-v1:0": 8000,
-   "amazon.titan-text-lite-v1": 4000,
-   "amazon.titan-text-express-v1": 8000,
-   "cohere.command-text-v14": 4000,
-   "ai21.j2-mid-v1": 8191,
-   "ai21.j2-ultra-v1": 8191,
-   "ai21.jamba-instruct-v1:0": 256000,
-   "mistral.mistral-7b-instruct-v0:2": 32000,
-   "mistral.mixtral-8x7b-instruct-v0:1": 32000,
    # mistral
    "mistral-tiny": 32768,
    "mistral-small-latest": 32768,
@@ -1,7 +1,7 @@
import os
from typing import Any, Dict, List

-from mem0 import Memory, MemoryClient
+from mem0 import MemoryClient

from crewai.memory.storage.interface import Storage
@@ -32,16 +32,13 @@ class Mem0Storage(Storage):
        mem0_org_id = config.get("org_id")
        mem0_project_id = config.get("project_id")

-       # Initialize MemoryClient or Memory based on the presence of the mem0_api_key
-       if mem0_api_key:
-           if mem0_org_id and mem0_project_id:
-               self.memory = MemoryClient(
-                   api_key=mem0_api_key, org_id=mem0_org_id, project_id=mem0_project_id
-               )
-           else:
-               self.memory = MemoryClient(api_key=mem0_api_key)
-       else:
-           self.memory = Memory()  # Fallback to Memory if no Mem0 API key is provided
+       # Initialize MemoryClient with available parameters
+       if mem0_org_id and mem0_project_id:
+           self.memory = MemoryClient(
+               api_key=mem0_api_key, org_id=mem0_org_id, project_id=mem0_project_id
+           )
+       else:
+           self.memory = MemoryClient(api_key=mem0_api_key)

    def _sanitize_role(self, role: str) -> str:
        """
@@ -19,8 +19,6 @@ from typing import (
    Tuple,
    Type,
    Union,
-   get_args,
-   get_origin,
)
from pydantic import (
|
||||
@@ -180,29 +178,15 @@ class Task(BaseModel):
|
||||
"""
|
||||
if v is not None:
|
||||
sig = inspect.signature(v)
|
||||
positional_args = [
|
||||
param
|
||||
for param in sig.parameters.values()
|
||||
if param.default is inspect.Parameter.empty
|
||||
]
|
||||
if len(positional_args) != 1:
|
||||
if len(sig.parameters) != 1:
|
||||
raise ValueError("Guardrail function must accept exactly one parameter")
|
||||
|
||||
# Check return annotation if present, but don't require it
|
||||
return_annotation = sig.return_annotation
|
||||
if return_annotation != inspect.Signature.empty:
|
||||
|
||||
return_annotation_args = get_args(return_annotation)
|
||||
if not (
|
||||
get_origin(return_annotation) is tuple
|
||||
and len(return_annotation_args) == 2
|
||||
and return_annotation_args[0] is bool
|
||||
and (
|
||||
return_annotation_args[1] is Any
|
||||
or return_annotation_args[1] is str
|
||||
or return_annotation_args[1] is TaskOutput
|
||||
or return_annotation_args[1] == Union[str, TaskOutput]
|
||||
)
|
||||
return_annotation == Tuple[bool, Any]
|
||||
or str(return_annotation) == "Tuple[bool, Any]"
|
||||
):
|
||||
raise ValueError(
|
||||
"If return type is annotated, it must be Tuple[bool, Any]"
|
||||
|
||||
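Under the stricter check, an annotated guardrail must be typed exactly `Tuple[bool, Any]`. A hedged sketch of a conforming guardrail (the `.raw` attribute is assumed from `TaskOutput`):

```python
from typing import Any, Tuple

def validate_word_count(output) -> Tuple[bool, Any]:
    """Pass the output through only if it is under 200 words."""
    if len(output.raw.split()) <= 200:
        return (True, output)
    return (False, "Output exceeded 200 words; please shorten it.")
```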
@@ -281,16 +281,8 @@ class Telemetry:
        return self._safe_telemetry_operation(operation)

    def task_ended(self, span: Span, task: Task, crew: Crew):
-       """Records the completion of a task execution in a crew.
-
-       Args:
-           span (Span): The OpenTelemetry span tracking the task execution
-           task (Task): The task that was completed
-           crew (Crew): The crew context in which the task was executed
-
-       Note:
-           If share_crew is enabled, this will also record the task output
-       """
+       """Records task execution in a crew."""
        def operation():
            if crew.share_crew:
                self._add_attribute(
@@ -305,13 +297,8 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
-       """Records when a tool is used repeatedly, which might indicate an issue.
-
-       Args:
-           llm (Any): The language model being used
-           tool_name (str): Name of the tool being repeatedly used
-           attempts (int): Number of attempts made with this tool
-       """
+       """Records the repeated usage 'error' of a tool by an agent."""
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Tool Repeated Usage")
@@ -330,13 +317,8 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def tool_usage(self, llm: Any, tool_name: str, attempts: int):
-       """Records the usage of a tool by an agent.
-
-       Args:
-           llm (Any): The language model being used
-           tool_name (str): Name of the tool being used
-           attempts (int): Number of attempts made with this tool
-       """
+       """Records the usage of a tool by an agent."""
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Tool Usage")
@@ -355,11 +337,8 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def tool_usage_error(self, llm: Any):
-       """Records when a tool usage results in an error.
-
-       Args:
-           llm (Any): The language model being used when the error occurred
-       """
+       """Records the usage of a tool by an agent."""
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Tool Usage Error")
@@ -378,14 +357,6 @@ class Telemetry:
    def individual_test_result_span(
        self, crew: Crew, quality: float, exec_time: int, model_name: str
    ):
-       """Records individual test results for a crew execution.
-
-       Args:
-           crew (Crew): The crew being tested
-           quality (float): Quality score of the execution
-           exec_time (int): Execution time in seconds
-           model_name (str): Name of the model used
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Crew Individual Test Result")
@@ -412,14 +383,6 @@ class Telemetry:
        inputs: dict[str, Any] | None,
        model_name: str,
    ):
-       """Records the execution of a test suite for a crew.
-
-       Args:
-           crew (Crew): The crew being tested
-           iterations (int): Number of test iterations
-           inputs (dict[str, Any] | None): Input parameters for the test
-           model_name (str): Name of the model used in testing
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Crew Test Execution")
@@ -445,7 +408,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def deploy_signup_error_span(self):
-       """Records when an error occurs during the deployment signup process."""
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Deploy Signup Error")
@@ -455,11 +417,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def start_deployment_span(self, uuid: Optional[str] = None):
-       """Records the start of a deployment process.
-
-       Args:
-           uuid (Optional[str]): Unique identifier for the deployment
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Start Deployment")
@@ -471,7 +428,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def create_crew_deployment_span(self):
-       """Records the creation of a new crew deployment."""
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Create Crew Deployment")
@@ -481,12 +437,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
-       """Records the retrieval of crew logs.
-
-       Args:
-           uuid (Optional[str]): Unique identifier for the crew
-           log_type (str, optional): Type of logs being retrieved. Defaults to "deployment".
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Get Crew Logs")
    def remove_crew_span(self, uuid: Optional[str] = None):
-       """Records the removal of a crew.
-
-       Args:
-           uuid (Optional[str]): Unique identifier for the crew being removed
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Remove Crew")
@@ -629,11 +574,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def flow_creation_span(self, flow_name: str):
-       """Records the creation of a new flow.
-
-       Args:
-           flow_name (str): Name of the flow being created
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Creation")
@@ -644,12 +584,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def flow_plotting_span(self, flow_name: str, node_names: list[str]):
-       """Records flow visualization/plotting activity.
-
-       Args:
-           flow_name (str): Name of the flow being plotted
-           node_names (list[str]): List of node names in the flow
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Plotting")
@@ -661,12 +595,6 @@ class Telemetry:
        self._safe_telemetry_operation(operation)

    def flow_execution_span(self, flow_name: str, node_names: list[str]):
-       """Records the execution of a flow.
-
-       Args:
-           flow_name (str): Name of the flow being executed
-           node_names (list[str]): List of nodes being executed in the flow
-       """
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Execution")
@@ -244,7 +244,6 @@ class ToolUsage:
            tool_calling=calling,
            from_cache=from_cache,
            started_at=started_at,
-           result=result,  # Pass the result
        )

        if (
@@ -456,7 +455,7 @@ class ToolUsage:

        # Attempt 4: Repair JSON
        try:
-           repaired_input = repair_json(tool_input, skip_json_loads=True)
+           repaired_input = repair_json(tool_input)
            self._printer.print(
                content=f"Repaired JSON: {repaired_input}", color="blue"
            )
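For context, a quick illustration of `repair_json` from the `json_repair` package; per that package's docs, `skip_json_loads=True` skips the initial validity check via `json.loads`, which is faster when the input is known to be malformed:

```python
from json_repair import repair_json

broken = '{"tool": "search", "query": "dogs"'     # missing closing brace
print(repair_json(broken))                        # {"tool": "search", "query": "dogs"}
print(repair_json(broken, skip_json_loads=True))  # same result, skips the pre-parse
```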
@@ -493,18 +492,8 @@ class ToolUsage:
            crewai_event_bus.emit(self, ToolUsageErrorEvent(**{**event_data, "error": e}))

    def on_tool_use_finished(
-       self, tool: Any, tool_calling: ToolCalling, from_cache: bool, started_at: float,
-       result: Any = None
+       self, tool: Any, tool_calling: ToolCalling, from_cache: bool, started_at: float
    ) -> None:
-       """Handle tool usage completion event.
-
-       Args:
-           tool: The tool that was used
-           tool_calling: The tool calling information
-           from_cache: Whether the result was retrieved from cache
-           started_at: Timestamp when the tool execution started
-           result: The execution result of the tool
-       """
        finished_at = time.time()
        event_data = self._prepare_event_data(tool, tool_calling)
        event_data.update(
@@ -512,7 +501,6 @@ class ToolUsage:
                "started_at": datetime.datetime.fromtimestamp(started_at),
                "finished_at": datetime.datetime.fromtimestamp(finished_at),
                "from_cache": from_cache,
-               "result": result,  # Tool execution result
            }
        )
        crewai_event_bus.emit(self, ToolUsageFinishedEvent(**event_data))
@@ -67,13 +67,16 @@ class CrewAIEventsBus:
            source: The object emitting the event
            event: The event instance to emit
        """
        for event_type, handlers in self._handlers.items():
            if isinstance(event, event_type):
                for handler in handlers:
                    handler(source, event)

        event_type = type(event)
        if event_type in self._handlers:
            for handler in self._handlers[event_type]:
                handler(source, event)
        self._signal.send(source, event=event)
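The two dispatch blocks above are not equivalent: the `isinstance` loop also fires handlers registered on a parent event class (the behavior the removed `test_wildcard_event_handler` further down in this diff relied on), while the `type(event)` lookup only fires handlers registered on the exact class. A standalone sketch of the difference:

```python
class CrewEvent: ...
class ToolUsageFinishedEvent(CrewEvent): ...

handlers = {CrewEvent: ["wildcard_handler"]}
event = ToolUsageFinishedEvent()

# isinstance-based dispatch: the wildcard handler fires for the subclass
print([h for etype, hs in handlers.items() if isinstance(event, etype) for h in hs])
# -> ['wildcard_handler']

# exact-type dispatch: nothing is registered for the exact class
print(handlers.get(type(event), []))
# -> []
```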

    def clear_handlers(self) -> None:
        """Clear all registered event handlers - useful for testing"""
        self._handlers.clear()

    def register_handler(
        self, event_type: Type[EventTypes], handler: Callable[[Any, EventTypes], None]
    ) -> None:

@@ -1,5 +1,5 @@
from datetime import datetime
from typing import Any, Callable, Dict, Optional, Union
from typing import Any, Callable, Dict

from .base_events import CrewEvent

@@ -25,16 +25,11 @@ class ToolUsageStartedEvent(ToolUsageEvent):


class ToolUsageFinishedEvent(ToolUsageEvent):
    """Event emitted when a tool execution is completed

    This event contains the result of the tool execution, allowing listeners
    to access the output directly without implementing workarounds.
    """
    """Event emitted when a tool execution is completed"""

    started_at: datetime
    finished_at: datetime
    from_cache: bool = False
    result: Any = None  # Tool execution result
    type: str = "tool_usage_finished"

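As the tests later in this diff show, listeners subscribe to this event through the event bus decorator; in the variant that carries `result`, the tool's output is available directly on the event. A small sketch (the import path for the event class is inferred from the surrounding files in this diff):

```python
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent


@crewai_event_bus.on(ToolUsageFinishedEvent)
def handle_tool_end(source, event):
    # from_cache and the timestamps are always present; result only exists
    # in the variant of the event that carries the tool output
    print(event.tool_name, event.from_cache, getattr(event, "result", None))
```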
@@ -96,10 +96,6 @@ class CrewPlanner:
        tasks_summary = []
        for idx, task in enumerate(self.tasks):
            knowledge_list = self._get_agent_knowledge(task)
            agent_tools = (
                f"[{', '.join(str(tool) for tool in task.agent.tools)}]" if task.agent and task.agent.tools else '"agent has no tools"',
                f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"' if knowledge_list and str(knowledge_list) != "None" else ""
            )
            task_summary = f"""
                Task Number {idx + 1} - {task.description}
                "task_description": {task.description}
@@ -107,7 +103,10 @@ class CrewPlanner:
                "agent": {task.agent.role if task.agent else "None"}
                "agent_goal": {task.agent.goal if task.agent else "None"}
                "task_tools": {task.tools}
                "agent_tools": {"".join(agent_tools)}"""
                "agent_tools": %s%s""" % (
                f"[{', '.join(str(tool) for tool in task.agent.tools)}]" if task.agent and task.agent.tools else '"agent has no tools"',
                f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"' if knowledge_list and str(knowledge_list) != "None" else ""
            )

            tasks_summary.append(task_summary)
        return " ".join(tasks_summary)

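The replacement hunk drops the intermediate `agent_tools` tuple plus `{"".join(agent_tools)}` interpolation in favor of `%`-formatting with the two conditional pieces supplied inline; both spellings produce the same text, as this toy comparison shows (the tool and knowledge strings are invented):

```python
tools_part = "[search_tool, scrape_tool]"
knowledge_part = ',\n "agent_knowledge": "[\\"product docs\\"]"'

joined = '"agent_tools": ' + "".join((tools_part, knowledge_part))
formatted = '"agent_tools": %s%s' % (tools_part, knowledge_part)
assert joined == formatted
```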
254 tests/cassettes/test_kickoff_for_each_parallel_max_workers.yaml Normal file
@@ -0,0 +1,254 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are dog Researcher. You
      have a lot of experience with dog.\nYour personal goal is: Express hot takes
      on dog.\nTo give my best complete final answer to the task respond using the
      exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
      Your final answer must be the great and the most complete as possible, it must
      be outcome described.\n\nI MUST use these formats, my job depends on it!"},
      {"role": "user", "content": "\nCurrent Task: Give me an analysis around dog.\n\nThis
      is the expected criteria for your final answer: 1 bullet point about dog that''s
      under 15 words.\nyou MUST return the actual complete content as the final answer,
      not a summary.\n\nBegin! This is VERY important to you, use the tools available
      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
      "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '914'
      content-type:
      - application/json
      cookie:
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 922e88ad5d8d2805-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Wed, 19 Mar 2025 17:01:49 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_9f1aa8c42b31f8139e80f67a71b11af3
    http_version: HTTP/1.1
    status_code: 401
- request:
    body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
      You have a lot of experience with apple.\nYour personal goal is: Express hot
      takes on apple.\nTo give my best complete final answer to the task respond using
      the exact following format:\n\nThought: I now can give a great answer\nFinal
      Answer: Your final answer must be the great and the most complete as possible,
      it must be outcome described.\n\nI MUST use these formats, my job depends on
      it!"}, {"role": "user", "content": "\nCurrent Task: Give me an analysis around
      apple.\n\nThis is the expected criteria for your final answer: 1 bullet point
      about apple that''s under 15 words.\nyou MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '924'
      content-type:
      - application/json
      cookie:
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 922e88adcdfe2805-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Wed, 19 Mar 2025 17:01:49 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_2a94d77754a61f5f38f74f0648bf38af
    http_version: HTTP/1.1
    status_code: 401
- request:
    body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
      have a lot of experience with cat.\nYour personal goal is: Express hot takes
      on cat.\nTo give my best complete final answer to the task respond using the
      exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
      Your final answer must be the great and the most complete as possible, it must
      be outcome described.\n\nI MUST use these formats, my job depends on it!"},
      {"role": "user", "content": "\nCurrent Task: Give me an analysis around cat.\n\nThis
      is the expected criteria for your final answer: 1 bullet point about cat that''s
      under 15 words.\nyou MUST return the actual complete content as the final answer,
      not a summary.\n\nBegin! This is VERY important to you, use the tools available
      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
      "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '914'
      content-type:
      - application/json
      cookie:
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 922e88adabe3f8d9-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Wed, 19 Mar 2025 17:01:49 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_e06cdb71cffc47ee28b267ed083c3a6b
    http_version: HTTP/1.1
    status_code: 401
version: 1
@@ -0,0 +1,254 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are dog Researcher. You
      have a lot of experience with dog.\nYour personal goal is: Express hot takes
      on dog.\nTo give my best complete final answer to the task respond using the
      exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
      Your final answer must be the great and the most complete as possible, it must
      be outcome described.\n\nI MUST use these formats, my job depends on it!"},
      {"role": "user", "content": "\nCurrent Task: Give me an analysis around dog.\n\nThis
      is the expected criteria for your final answer: 1 bullet point about dog that''s
      under 15 words.\nyou MUST return the actual complete content as the final answer,
      not a summary.\n\nBegin! This is VERY important to you, use the tools available
      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
      "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '914'
      content-type:
      - application/json
      cookie:
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 922e88a8e9432805-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Wed, 19 Mar 2025 17:01:48 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_d00f966d049ee99fde6e9e7bf652afa2
    http_version: HTTP/1.1
    status_code: 401
- request:
    body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
      You have a lot of experience with apple.\nYour personal goal is: Express hot
      takes on apple.\nTo give my best complete final answer to the task respond using
      the exact following format:\n\nThought: I now can give a great answer\nFinal
      Answer: Your final answer must be the great and the most complete as possible,
      it must be outcome described.\n\nI MUST use these formats, my job depends on
      it!"}, {"role": "user", "content": "\nCurrent Task: Give me an analysis around
      apple.\n\nThis is the expected criteria for your final answer: 1 bullet point
      about apple that''s under 15 words.\nyou MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '924'
      content-type:
      - application/json
      cookie:
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 922e88a91a59c4c8-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Wed, 19 Mar 2025 17:01:48 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_6e9a5fd1d83c6eac8070b4e9b9b94f10
    http_version: HTTP/1.1
    status_code: 401
- request:
    body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
      have a lot of experience with cat.\nYour personal goal is: Express hot takes
      on cat.\nTo give my best complete final answer to the task respond using the
      exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
      Your final answer must be the great and the most complete as possible, it must
      be outcome described.\n\nI MUST use these formats, my job depends on it!"},
      {"role": "user", "content": "\nCurrent Task: Give me an analysis around cat.\n\nThis
      is the expected criteria for your final answer: 1 bullet point about cat that''s
      under 15 words.\nyou MUST return the actual complete content as the final answer,
      not a summary.\n\nBegin! This is VERY important to you, use the tools available
      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
      "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '914'
      content-type:
      - application/json
      cookie:
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 922e88a91f2ef8d9-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Wed, 19 Mar 2025 17:01:48 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_d0ddaee2449c771a230de2b6362c8d99
    http_version: HTTP/1.1
    status_code: 401
version: 1
@@ -1,21 +1,17 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are base_agent. You are
      a helpful assistant that just says hi\nYour personal goal is: Just say hi\nYou
      ONLY have access to the following tools, and should NEVER make up tools that
      are not listed here:\n\nTool Name: say_hi\nTool Arguments: {}\nTool Description:
      Say hi\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
      you should always think about what to do\nAction: the action to take, only one
      name of [say_hi], just the name, exactly as it''s written.\nAction Input: the
      input to the action, just a simple JSON object, enclosed in curly braces, using
      \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
      all necessary information is gathered, return the following format:\n\n```\nThought:
      I now know the final answer\nFinal Answer: the final answer to the original
      input question\n```"}, {"role": "user", "content": "\nCurrent Task: Just say
      hi\n\nThis is the expected criteria for your final answer: hi\nyou MUST return
      the actual complete content as the final answer, not a summary.\n\nBegin! This
      is VERY important to you, use the tools available and give your best Final Answer,
      your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
    body: '{"messages": [{"role": "system", "content": "You are dog Researcher. You
      have a lot of experience with dog.\nYour personal goal is: Express hot takes
      on dog.\nTo give my best complete final answer to the task respond using the
      exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
      Your final answer must be the great and the most complete as possible, it must
      be outcome described.\n\nI MUST use these formats, my job depends on it!"},
      {"role": "user", "content": "\nCurrent Task: Give me an analysis around dog.\n\nThis
      is the expected criteria for your final answer: 1 bullet point about dog that''s
      under 15 words.\nyou MUST return the actual complete content as the final answer,
      not a summary.\n\nBegin! This is VERY important to you, use the tools available
      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
      "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
@@ -24,7 +20,7 @@ interactions:
      connection:
      - keep-alive
      content-length:
      - '1277'
      - '914'
      content-type:
      - application/json
      host:
@@ -59,7 +55,7 @@ interactions:
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 9241c014df27b9d9-SEA
      - 922e88a1b9ef2805-SEA
      Connection:
      - keep-alive
      Content-Length:
@@ -67,14 +63,14 @@ interactions:
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Sat, 22 Mar 2025 01:00:07 GMT
      - Wed, 19 Mar 2025 17:01:47 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=KV5GuGyEdKrxlT_rqmqKhQ9TUxeMz_ccZ5K3Xu.mfLc-1742605207-1.0.1.1-1aSRm5P8hBtB3zVmHWH3LcYPoXvhsezGomR1uazQeGCEWL_uUIGP1x3dMkviSWrjf88WKm1snQNn.Eyy8_qewmd7NT7lqUt0MgRWVgfCOAY;
        path=/; expires=Sat, 22-Mar-25 01:30:07 GMT; domain=.api.openai.com; HttpOnly;
      - __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
        path=/; expires=Wed, 19-Mar-25 17:31:47 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=1KExsJlCZ0R2Kp3gV2Y71_SiQ0R2D9rAbONnmJq7tjk-1742605207873-0.0.1.1-604800000;
      - _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      X-Content-Type-Options:
      - nosniff
@@ -87,7 +83,7 @@ interactions:
      vary:
      - Origin
      x-request-id:
      - req_b2233dbbb60c3ebb5717c05ec5150588
      - req_5980db57b01be333b0f977bb03a15bca
    http_version: HTTP/1.1
    status_code: 401
version: 1
@@ -3,8 +3,6 @@
import hashlib
import json
import os
from functools import partial
from typing import Tuple, Union
from unittest.mock import MagicMock, patch

import pytest
@@ -217,75 +215,6 @@ def test_multiple_output_type_error():
    )


def test_guardrail_type_error():
    desc = "Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting."
    expected_output = "Bullet point list of 5 interesting ideas."
    # Lambda function
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=lambda x: (True, x),
    )

    # Function
    def guardrail_fn(x: TaskOutput) -> tuple[bool, TaskOutput]:
        return (True, x)

    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=guardrail_fn,
    )

    class Object:
        def guardrail_fn(self, x: TaskOutput) -> tuple[bool, TaskOutput]:
            return (True, x)

        @classmethod
        def guardrail_class_fn(cls, x: TaskOutput) -> tuple[bool, str]:
            return (True, x)

        @staticmethod
        def guardrail_static_fn(x: TaskOutput) -> tuple[bool, Union[str, TaskOutput]]:
            return (True, x)

    obj = Object()
    # Method
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=obj.guardrail_fn,
    )
    # Class method
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=Object.guardrail_class_fn,
    )
    # Static method
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=Object.guardrail_static_fn,
    )

    def error_fn(x: TaskOutput, y: bool) -> Tuple[bool, TaskOutput]:
        return (y, x)

    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=partial(error_fn, y=True),
    )

    with pytest.raises(ValidationError):
        Task(
            description=desc,
            expected_output=expected_output,
            guardrail=error_fn,
        )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_pydantic_sequential():
    class ScoreOutput(BaseModel):

209 tests/test_kickoff_for_each_parallel.py Normal file
@@ -0,0 +1,209 @@
"""Test for the kickoff_for_each_parallel method in Crew class."""

import concurrent.futures
from unittest.mock import MagicMock, patch

import pytest

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.task import Task


def test_kickoff_for_each_parallel_single_input():
    """Tests if kickoff_for_each_parallel works with a single input."""

    inputs = [{"topic": "dog"}]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Mock the kickoff method to avoid API calls
    expected_output = CrewOutput(raw="Dogs are loyal companions.")
    with patch.object(Crew, "kickoff", return_value=expected_output):
        results = crew.kickoff_for_each_parallel(inputs=inputs)

    assert len(results) == 1
    assert results[0].raw == "Dogs are loyal companions."


def test_kickoff_for_each_parallel_multiple_inputs():
    """Tests if kickoff_for_each_parallel works with multiple inputs."""

    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Mock the kickoff method to avoid API calls
    expected_outputs = [
        CrewOutput(raw="Dogs are loyal companions."),
        CrewOutput(raw="Cats are independent pets."),
        CrewOutput(raw="Apples are nutritious fruits."),
    ]

    with patch.object(Crew, "copy") as mock_copy:
        # Setup mock crew copies
        crew_copies = []
        for i in range(len(inputs)):
            crew_copy = MagicMock()
            crew_copy.kickoff.return_value = expected_outputs[i]
            crew_copies.append(crew_copy)
        mock_copy.side_effect = crew_copies

        results = crew.kickoff_for_each_parallel(inputs=inputs)

        assert len(results) == len(inputs)
        # Since ThreadPoolExecutor returns results in completion order, not input order,
        # we just check that all expected outputs are in the results
        result_texts = [result.raw for result in results]
        expected_texts = [output.raw for output in expected_outputs]
        for expected_text in expected_texts:
            assert expected_text in result_texts


def test_kickoff_for_each_parallel_empty_input():
    """Tests if kickoff_for_each_parallel handles an empty input list."""
    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])
    results = crew.kickoff_for_each_parallel(inputs=[])
    assert results == []


def test_kickoff_for_each_parallel_invalid_input():
    """Tests if kickoff_for_each_parallel raises TypeError for invalid input types."""

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # No need to mock here since we're testing input validation which happens before any API calls
    with pytest.raises(TypeError):
        # Pass a string instead of a list
        crew.kickoff_for_each_parallel("invalid input")


def test_kickoff_for_each_parallel_error_handling():
    """Tests error handling in kickoff_for_each_parallel when kickoff raises an error."""

    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    with patch.object(Crew, "copy") as mock_copy:
        # Setup mock crew copies
        crew_copies = []
        for i in range(len(inputs)):
            crew_copy = MagicMock()
            # Make the third crew copy raise an exception
            if i == 2:
                crew_copy.kickoff.side_effect = Exception("Simulated kickoff error")
            else:
                crew_copy.kickoff.return_value = f"Output for {inputs[i]['topic']}"
            crew_copies.append(crew_copy)
        mock_copy.side_effect = crew_copies

        with pytest.raises(Exception, match="Simulated kickoff error"):
            crew.kickoff_for_each_parallel(inputs=inputs)


def test_kickoff_for_each_parallel_max_workers():
    """Tests if kickoff_for_each_parallel respects the max_workers parameter."""

    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Mock both ThreadPoolExecutor and crew.copy to avoid API calls
    with patch.object(concurrent.futures, "ThreadPoolExecutor", wraps=concurrent.futures.ThreadPoolExecutor) as mock_executor:
        with patch.object(Crew, "copy") as mock_copy:
            # Setup mock crew copies
            crew_copies = []
            for _ in range(len(inputs)):
                crew_copy = MagicMock()
                crew_copy.kickoff.return_value = CrewOutput(raw="Test output")
                crew_copies.append(crew_copy)
            mock_copy.side_effect = crew_copies

            crew.kickoff_for_each_parallel(inputs=inputs, max_workers=2)
            mock_executor.assert_called_once_with(max_workers=2)
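Taken together, these tests pin down the contract of `kickoff_for_each_parallel`: reject non-list input with a `TypeError`, return `[]` for an empty list, run `crew.copy().kickoff(...)` per input on a `ThreadPoolExecutor` that honors `max_workers`, and propagate worker exceptions. A sketch of an implementation consistent with that contract (not necessarily the actual method body):

```python
import concurrent.futures
from typing import Any, Dict, List, Optional


def kickoff_for_each_parallel(
    self, inputs: List[Dict[str, Any]], max_workers: Optional[int] = None
) -> List[Any]:
    if not isinstance(inputs, list):
        raise TypeError("inputs must be a list of dictionaries")
    if not inputs:
        return []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [
            executor.submit(self.copy().kickoff, inputs=input_data)
            for input_data in inputs
        ]
        # f.result() re-raises the first worker exception, matching the
        # error-handling test; iterating as_completed also explains why the
        # multiple-inputs test checks membership rather than order.
        return [f.result() for f in concurrent.futures.as_completed(futures)]
```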
@@ -1,92 +0,0 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are base_agent. You are
      a helpful assistant that returns structured data\nYour personal goal is: Return
      a dictionary result\nYou ONLY have access to the following tools, and should
      NEVER make up tools that are not listed here:\n\nTool Name: dict_result\nTool
      Arguments: {}\nTool Description: Return a dictionary result\n\nIMPORTANT: Use
      the following format in your response:\n\n```\nThought: you should always think
      about what to do\nAction: the action to take, only one name of [dict_result],
      just the name, exactly as it''s written.\nAction Input: the input to the action,
      just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
      values.\nObservation: the result of the action\n```\n\nOnce all necessary information
      is gathered, return the following format:\n\n```\nThought: I now know the final
      answer\nFinal Answer: the final answer to the original input question\n```"},
      {"role": "user", "content": "\nCurrent Task: Return a dictionary result\n\nThis
      is the expected criteria for your final answer: Dictionary with message and
      data\nyou MUST return the actual complete content as the final answer, not a
      summary.\n\nBegin! This is VERY important to you, use the tools available and
      give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
      "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '1378'
      content-type:
      - application/json
      cookie:
      - __cf_bm=KV5GuGyEdKrxlT_rqmqKhQ9TUxeMz_ccZ5K3Xu.mfLc-1742605207-1.0.1.1-1aSRm5P8hBtB3zVmHWH3LcYPoXvhsezGomR1uazQeGCEWL_uUIGP1x3dMkviSWrjf88WKm1snQNn.Eyy8_qewmd7NT7lqUt0MgRWVgfCOAY;
        _cfuvid=1KExsJlCZ0R2Kp3gV2Y71_SiQ0R2D9rAbONnmJq7tjk-1742605207873-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 9241c01b0df3b9d9-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Sat, 22 Mar 2025 01:00:08 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_d0eae92c72f0f64514d32384a7f27a9a
    http_version: HTTP/1.1
    status_code: 401
version: 1
@@ -1,91 +0,0 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are base_agent. You are
      a helpful assistant that returns None\nYour personal goal is: Return None as
      result\nYou ONLY have access to the following tools, and should NEVER make up
      tools that are not listed here:\n\nTool Name: none_result\nTool Arguments: {}\nTool
      Description: Return None as result\n\nIMPORTANT: Use the following format in
      your response:\n\n```\nThought: you should always think about what to do\nAction:
      the action to take, only one name of [none_result], just the name, exactly as
      it''s written.\nAction Input: the input to the action, just a simple JSON object,
      enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
      result of the action\n```\n\nOnce all necessary information is gathered, return
      the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
      the final answer to the original input question\n```"}, {"role": "user", "content":
      "\nCurrent Task: Return None as result\n\nThis is the expected criteria for
      your final answer: None\nyou MUST return the actual complete content as the
      final answer, not a summary.\n\nBegin! This is VERY important to you, use the
      tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
      "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '1324'
      content-type:
      - application/json
      cookie:
      - __cf_bm=KV5GuGyEdKrxlT_rqmqKhQ9TUxeMz_ccZ5K3Xu.mfLc-1742605207-1.0.1.1-1aSRm5P8hBtB3zVmHWH3LcYPoXvhsezGomR1uazQeGCEWL_uUIGP1x3dMkviSWrjf88WKm1snQNn.Eyy8_qewmd7NT7lqUt0MgRWVgfCOAY;
        _cfuvid=1KExsJlCZ0R2Kp3gV2Y71_SiQ0R2D9rAbONnmJq7tjk-1742605207873-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.61.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
      sk-proj-********************************************************************************************************************************************************sLcA.
      You can find your API key at https://platform.openai.com/account/api-keys.\",\n
      \ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
      \"invalid_api_key\"\n }\n}\n"
    headers:
      CF-RAY:
      - 9241c01eea3db9d9-SEA
      Connection:
      - keep-alive
      Content-Length:
      - '414'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Sat, 22 Mar 2025 01:00:09 GMT
      Server:
      - cloudflare
      X-Content-Type-Options:
      - nosniff
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      vary:
      - Origin
      x-request-id:
      - req_00b89378d512de492b7c90876d0e3c8a
    http_version: HTTP/1.1
    status_code: 401
version: 1
@@ -1,34 +0,0 @@
from unittest.mock import Mock

from crewai.utilities.events.base_events import CrewEvent
from crewai.utilities.events.crewai_event_bus import crewai_event_bus


class TestEvent(CrewEvent):
    pass


def test_specific_event_handler():
    mock_handler = Mock()

    @crewai_event_bus.on(TestEvent)
    def handler(source, event):
        mock_handler(source, event)

    event = TestEvent(type="test_event")
    crewai_event_bus.emit("source_object", event)

    mock_handler.assert_called_once_with("source_object", event)


def test_wildcard_event_handler():
    mock_handler = Mock()

    @crewai_event_bus.on(CrewEvent)
    def handler(source, event):
        mock_handler(source, event)

    event = TestEvent(type="test_event")
    crewai_event_bus.emit("source_object", event)

    mock_handler.assert_called_once_with("source_object", event)
@@ -12,7 +12,6 @@ from crewai.flow.flow import Flow, listen, start
from crewai.llm import LLM
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.tools.tool_calling import ToolCalling
from crewai.utilities.events.agent_events import (
    AgentExecutionCompletedEvent,
    AgentExecutionErrorEvent,
@@ -330,137 +329,35 @@ class SayHiTool(BaseTool):
        return "hi"


class DictResultTool(BaseTool):
    name: str = Field(default="dict_result", description="The name of the tool")
    description: str = Field(
        default="Return a dictionary result",
        description="The description of the tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_tools_emits_finished_events():
    received_events = []

    @crewai_event_bus.on(ToolUsageFinishedEvent)
    def handle_tool_end(source, event):
        received_events.append(event)

    agent = Agent(
        role="base_agent",
        goal="Just say hi",
        backstory="You are a helpful assistant that just says hi",
        tools=[SayHiTool()],
    )

    def _run(self) -> dict:
        return {"message": "success", "data": {"value": 42}}


class NoneResultTool(BaseTool):
    name: str = Field(default="none_result", description="The name of the tool")
    description: str = Field(
        default="Return None as result",
        description="The description of the tool"
    task = Task(
        description="Just say hi",
        expected_output="hi",
        agent=agent,
    )

    def _run(self) -> None:
        return None


def test_tools_emits_finished_events_with_string_result():
    received_events = []

    @crewai_event_bus.on(ToolUsageFinishedEvent)
    def handle_tool_end(source, event):
        received_events.append(event)

    # Create a mock event with string result
    tool = SayHiTool()
    tool_calling = ToolCalling(tool_name=tool.name, arguments={}, log="")
    event_data = {
        "agent_key": "test_agent_key",
        "agent_role": "test_agent_role",
        "tool_name": tool.name,
        "tool_args": {},
        "tool_class": tool.__class__.__name__,
        "started_at": datetime.now(),
        "finished_at": datetime.now(),
        "from_cache": False,
        "result": "hi"
    }

    # Emit the event
    crewai_event_bus.emit(None, ToolUsageFinishedEvent(**event_data))

    # Verify the event was received with the correct result
    crew = Crew(agents=[agent], tasks=[task], name="TestCrew")
    crew.kickoff()
    assert len(received_events) == 1
    assert received_events[0].agent_key == "test_agent_key"
    assert received_events[0].agent_role == "test_agent_role"
    assert received_events[0].tool_name == tool.name
    assert received_events[0].agent_key == agent.key
    assert received_events[0].agent_role == agent.role
    assert received_events[0].tool_name == SayHiTool().name
    assert received_events[0].tool_args == {}
    assert received_events[0].type == "tool_usage_finished"
    assert isinstance(received_events[0].timestamp, datetime)
    assert received_events[0].result == "hi"


def test_tools_emits_finished_events_with_dict_result():
    received_events = []

    @crewai_event_bus.on(ToolUsageFinishedEvent)
    def handle_tool_end(source, event):
        received_events.append(event)

    # Create a mock event with dictionary result
    tool = DictResultTool()
    tool_calling = ToolCalling(tool_name=tool.name, arguments={}, log="")
    dict_result = {"message": "success", "data": {"value": 42}}
    event_data = {
        "agent_key": "test_agent_key",
        "agent_role": "test_agent_role",
        "tool_name": tool.name,
        "tool_args": {},
        "tool_class": tool.__class__.__name__,
        "started_at": datetime.now(),
        "finished_at": datetime.now(),
        "from_cache": False,
        "result": dict_result
    }

    # Emit the event
    crewai_event_bus.emit(None, ToolUsageFinishedEvent(**event_data))

    # Verify the event was received with the correct result
    assert len(received_events) == 1
    assert received_events[0].agent_key == "test_agent_key"
    assert received_events[0].agent_role == "test_agent_role"
    assert received_events[0].tool_name == tool.name
    assert received_events[0].tool_args == {}
    assert received_events[0].type == "tool_usage_finished"
    assert isinstance(received_events[0].timestamp, datetime)
    assert isinstance(received_events[0].result, dict)
    assert received_events[0].result["message"] == "success"
    assert received_events[0].result["data"]["value"] == 42


def test_tools_emits_finished_events_with_none_result():
    received_events = []

    @crewai_event_bus.on(ToolUsageFinishedEvent)
    def handle_tool_end(source, event):
        received_events.append(event)

    # Create a mock event with None result
    tool = NoneResultTool()
    tool_calling = ToolCalling(tool_name=tool.name, arguments={}, log="")
    event_data = {
        "agent_key": "test_agent_key",
        "agent_role": "test_agent_role",
        "tool_name": tool.name,
        "tool_args": {},
        "tool_class": tool.__class__.__name__,
        "started_at": datetime.now(),
        "finished_at": datetime.now(),
        "from_cache": False,
        "result": None
    }

    # Emit the event
    crewai_event_bus.emit(None, ToolUsageFinishedEvent(**event_data))

    # Verify the event was received with the correct result
    assert len(received_events) == 1
    assert received_events[0].agent_key == "test_agent_key"
    assert received_events[0].agent_role == "test_agent_role"
    assert received_events[0].tool_name == tool.name
    assert received_events[0].tool_args == {}
    assert received_events[0].type == "tool_usage_finished"
    assert isinstance(received_events[0].timestamp, datetime)
    assert received_events[0].result is None


@pytest.mark.vcr(filter_headers=["authorization"])