Compare commits

..

8 Commits

Author SHA1 Message Date
Brandon Hancock (bhancock_ai)
282461d13a Merge branch 'main' into devin/1742244238-fix-issue-2395 2025-03-20 12:12:44 -04:00
Amine Saihi
f8f9df6d1d update doc SpaceNewsKnowledgeSource code snippet (#2275)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 12:06:21 -04:00
João Moura
6e94edb777 TYPO 2025-03-20 08:21:17 -07:00
Vini Brasil
bbe896d48c Support wildcard handling in emit() (#2424)
* Support wildcard handling in `emit()`

Change `emit()` to call handlers registered for parent classes using
`isinstance()`. Ensures that base event handlers receive derived
events.

* Fix failing test

* Remove unused variable
2025-03-20 09:59:17 -04:00
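The dispatch change this commit describes can be sketched with a minimal, self-contained event bus (class names here are illustrative, not CrewAI's actual API): handlers registered for a base event class also receive derived events, because matching uses `isinstance()` rather than exact type equality.

```python
# Minimal sketch of isinstance-based event dispatch: a handler registered
# for a base event class fires for any derived event as well.

class Event:
    pass

class TaskEvent(Event):
    pass

class TaskCompletedEvent(TaskEvent):
    pass

class EventBus:
    def __init__(self):
        self._handlers = {}  # event class -> list of handler callables

    def register(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def emit(self, source, event):
        # Match against every registered class, not just type(event),
        # so base-class handlers see derived events.
        for event_type, handlers in self._handlers.items():
            if isinstance(event, event_type):
                for handler in handlers:
                    handler(source, event)

bus = EventBus()
received = []
bus.register(TaskEvent, lambda src, ev: received.append(type(ev).__name__))
bus.emit("crew", TaskCompletedEvent())  # base-class handler fires
```

With exact-type matching, the `TaskEvent` handler would never see a `TaskCompletedEvent`; with the `isinstance()` check it does.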
Seyed Mostafa Meshkati
9298054436 docs: add base_url env for anthropic llm example (#2204)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 09:48:11 -04:00
Fernando Galves
90b7937796 Update documentation (#2199)
* Update llms.mdx

Update Amazon Bedrock section with more information about the foundation models available.

* Update llms.mdx

fix the description of Amazon Bedrock section

* Update llms.mdx

Remove the incorrect </tab> tag

* Update llms.mdx

Add Claude 3.7 Sonnet to the Amazon Bedrock list

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 09:42:23 -04:00
elda27
520933b4c5 Fix: More comfortable validation #2177 (#2178)
* Fix: More comfortable validation

* Fix: union type support

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 09:28:31 -04:00
Devin AI
6e359ae9ee Fix incorrect import statement in memory examples documentation (fixes #2395)
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-17 20:46:32 +00:00
13 changed files with 176 additions and 884 deletions

View File

@@ -460,12 +460,12 @@ class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
data = response.json()
articles = data.get('results', [])
formatted_data = self._format_articles(articles)
formatted_data = self.validate_content(articles)
return {self.api_endpoint: formatted_data}
except Exception as e:
raise ValueError(f"Failed to fetch space news: {str(e)}")
def _format_articles(self, articles: list) -> str:
def validate_content(self, articles: list) -> str:
"""Format articles into readable text."""
formatted = "Space News Articles:\n\n"
for article in articles:

View File

@@ -158,7 +158,11 @@ In this section, you'll find detailed examples that help you select, configure,
<Accordion title="Anthropic">
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
@@ -250,6 +254,40 @@ In this section, you'll find detailed examples that help you select, configure,
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
Before using Amazon Bedrock, make sure you have `boto3` installed in your environment.
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
| Amazon Nova Pro | Up to 300k tokens | High-performance model balancing accuracy, speed, and cost-effectiveness across diverse tasks. |
| Amazon Nova Micro | Up to 128k tokens | High-performance, cost-effective text-only model optimized for lowest latency responses. |
| Amazon Nova Lite | Up to 300k tokens | High-performance, affordable multimodal processing for images, video, and text with real-time capabilities. |
| Claude 3.7 Sonnet | Up to 128k tokens | High-performance, best for complex reasoning, coding & AI agents. |
| Claude 3.5 Sonnet v2 | Up to 200k tokens | State-of-the-art model specialized in software engineering, agentic capabilities, and computer interaction at optimized cost. |
| Claude 3.5 Sonnet | Up to 200k tokens | High-performance model delivering superior intelligence and reasoning across diverse tasks with optimal speed-cost balance. |
| Claude 3.5 Haiku | Up to 200k tokens | Fast, compact multimodal model optimized for quick responses and seamless human-like interactions. |
| Claude 3 Sonnet | Up to 200k tokens | Multimodal model balancing intelligence and speed for high-volume deployments. |
| Claude 3 Haiku | Up to 200k tokens | Compact, high-speed multimodal model optimized for quick responses and natural conversational interactions. |
| Claude 3 Opus | Up to 200k tokens | Most advanced multimodal model excelling at complex tasks with human-like reasoning and superior contextual understanding. |
| Claude 2.1 | Up to 200k tokens | Enhanced version with expanded context window, improved reliability, and reduced hallucinations for long-form and RAG applications. |
| Claude | Up to 100k tokens | Versatile model excelling in sophisticated dialogue, creative content, and precise instruction following. |
| Claude Instant | Up to 100k tokens | Fast, cost-effective model for everyday tasks like dialogue, analysis, summarization, and document Q&A. |
| Llama 3.1 405B Instruct | Up to 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| Llama 3.1 70B Instruct | Up to 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| Llama 3.1 8B Instruct | Up to 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| Llama 3 70B Instruct | Up to 8k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| Llama 3 8B Instruct | Up to 8k tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| Titan Text G1 - Lite | Up to 4k tokens | Lightweight, cost-effective model optimized for English tasks and fine-tuning with focus on summarization and content generation. |
| Titan Text G1 - Express | Up to 8k tokens | Versatile model for general language tasks, chat, and RAG applications with support for English and 100+ languages. |
| Cohere Command | Up to 4k tokens | Model specialized in following user commands and delivering practical enterprise solutions. |
| Jurassic-2 Mid | Up to 8,191 tokens | Cost-effective model balancing quality and affordability for diverse language tasks like Q&A, summarization, and content generation. |
| Jurassic-2 Ultra | Up to 8,191 tokens | Model for advanced text generation and comprehension, excelling in complex tasks like analysis and content creation. |
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MoE LLM that follows instructions, completes requests, and generates creative text. |
</Accordion>
<Accordion title="Amazon SageMaker">

View File

@@ -60,7 +60,8 @@ my_crew = Crew(
```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage import LTMSQLiteStorage, RAGStorage
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from typing import List, Optional
# Assemble your crew with memory capabilities
@@ -119,7 +120,7 @@ Example using environment variables:
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
@@ -148,7 +149,7 @@ crew = Crew(memory=True) # Uses default storage locations
```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
# Configure custom storage paths
crew = Crew(

View File

@@ -1,4 +1,5 @@
---title: Customizing Prompts
---
title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
---

View File

@@ -1,5 +1,4 @@
import asyncio
import concurrent.futures
import json
import re
import uuid
@@ -7,7 +6,7 @@ import warnings
from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, Sequence, Set, Tuple, Union
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
from pydantic import (
UUID4,
@@ -708,65 +707,6 @@ class Crew(BaseModel):
self.usage_metrics = total_usage_metrics
self._task_output_handler.reset()
return results
def kickoff_for_each_parallel(self, inputs: Sequence[Dict[str, Any]], max_workers: Optional[int] = None) -> List[CrewOutput]:
"""Executes the Crew's workflow for each input in the list in parallel using ThreadPoolExecutor.
Args:
inputs: Sequence of input dictionaries to be passed to each crew execution.
max_workers: Maximum number of worker threads to use. If None, uses the default
ThreadPoolExecutor behavior (typically min(32, os.cpu_count() + 4)).
Returns:
List of CrewOutput objects, one for each input.
"""
from concurrent.futures import ThreadPoolExecutor, as_completed
if not isinstance(inputs, (list, tuple)):
raise TypeError(f"Inputs must be a list of dictionaries. Received {type(inputs).__name__} instead.")
if not inputs:
return []
results: List[CrewOutput] = []
# Initialize the parent crew's usage metrics
total_usage_metrics = UsageMetrics()
# Create a copy of the crew for each input to avoid state conflicts
crew_copies = [self.copy() for _ in inputs]
# Execute each crew in parallel
try:
with ThreadPoolExecutor(max_workers=max_workers) as executor:
# Submit all tasks to the executor
future_to_crew = {
executor.submit(crew_copies[i].kickoff, inputs[i]): i
for i in range(len(inputs))
}
# Process results as they complete
for future in as_completed(future_to_crew):
crew_index = future_to_crew[future]
try:
output = future.result()
results.append(output)
# Aggregate usage metrics
if crew_copies[crew_index].usage_metrics:
total_usage_metrics.add_usage_metrics(crew_copies[crew_index].usage_metrics)
except Exception as exc:
# Re-raise the exception to maintain consistent behavior with kickoff_for_each
raise exc
finally:
# Clean up to assist garbage collection
crew_copies.clear()
# Set the aggregated metrics on the parent crew
self.usage_metrics = total_usage_metrics
self._task_output_handler.reset()
return results
def _handle_crew_planning(self):
"""Handles the Crew planning."""

View File

@@ -19,6 +19,8 @@ from typing import (
Tuple,
Type,
Union,
get_args,
get_origin,
)
from pydantic import (
@@ -178,15 +180,29 @@ class Task(BaseModel):
"""
if v is not None:
sig = inspect.signature(v)
if len(sig.parameters) != 1:
positional_args = [
param
for param in sig.parameters.values()
if param.default is inspect.Parameter.empty
]
if len(positional_args) != 1:
raise ValueError("Guardrail function must accept exactly one parameter")
# Check return annotation if present, but don't require it
return_annotation = sig.return_annotation
if return_annotation != inspect.Signature.empty:
return_annotation_args = get_args(return_annotation)
if not (
return_annotation == Tuple[bool, Any]
or str(return_annotation) == "Tuple[bool, Any]"
get_origin(return_annotation) is tuple
and len(return_annotation_args) == 2
and return_annotation_args[0] is bool
and (
return_annotation_args[1] is Any
or return_annotation_args[1] is str
or return_annotation_args[1] is TaskOutput
or return_annotation_args[1] == Union[str, TaskOutput]
)
):
raise ValueError(
"If return type is annotated, it must be Tuple[bool, Any]"

View File

@@ -67,15 +67,12 @@ class CrewAIEventsBus:
source: The object emitting the event
event: The event instance to emit
"""
event_type = type(event)
if event_type in self._handlers:
for handler in self._handlers[event_type]:
handler(source, event)
self._signal.send(source, event=event)
for event_type, handlers in self._handlers.items():
if isinstance(event, event_type):
for handler in handlers:
handler(source, event)
def clear_handlers(self) -> None:
"""Clear all registered event handlers - useful for testing"""
self._handlers.clear()
self._signal.send(source, event=event)
def register_handler(
self, event_type: Type[EventTypes], handler: Callable[[Any, EventTypes], None]

View File

@@ -1,254 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are dog Researcher. You
have a lot of experience with dog.\nYour personal goal is: Express hot takes
on dog.\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
{"role": "user", "content": "\nCurrent Task: Give me an analysis around dog.\n\nThis
is the expected criteria for your final answer: 1 bullet point about dog that''s
under 15 words.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '914'
content-type:
- application/json
cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
_cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88ad5d8d2805-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:49 GMT
Server:
- cloudflare
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_9f1aa8c42b31f8139e80f67a71b11af3
http_version: HTTP/1.1
status_code: 401
- request:
body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
You have a lot of experience with apple.\nYour personal goal is: Express hot
takes on apple.\nTo give my best complete final answer to the task respond using
the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Give me an analysis around
apple.\n\nThis is the expected criteria for your final answer: 1 bullet point
about apple that''s under 15 words.\nyou MUST return the actual complete content
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
use the tools available and give your best Final Answer, your job depends on
it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '924'
content-type:
- application/json
cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
_cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88adcdfe2805-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:49 GMT
Server:
- cloudflare
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_2a94d77754a61f5f38f74f0648bf38af
http_version: HTTP/1.1
status_code: 401
- request:
body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
have a lot of experience with cat.\nYour personal goal is: Express hot takes
on cat.\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
{"role": "user", "content": "\nCurrent Task: Give me an analysis around cat.\n\nThis
is the expected criteria for your final answer: 1 bullet point about cat that''s
under 15 words.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '914'
content-type:
- application/json
cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
_cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88adabe3f8d9-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:49 GMT
Server:
- cloudflare
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_e06cdb71cffc47ee28b267ed083c3a6b
http_version: HTTP/1.1
status_code: 401
version: 1

View File

@@ -1,254 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are dog Researcher. You
have a lot of experience with dog.\nYour personal goal is: Express hot takes
on dog.\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
{"role": "user", "content": "\nCurrent Task: Give me an analysis around dog.\n\nThis
is the expected criteria for your final answer: 1 bullet point about dog that''s
under 15 words.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '914'
content-type:
- application/json
cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
_cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88a8e9432805-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:48 GMT
Server:
- cloudflare
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_d00f966d049ee99fde6e9e7bf652afa2
http_version: HTTP/1.1
status_code: 401
- request:
body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
You have a lot of experience with apple.\nYour personal goal is: Express hot
takes on apple.\nTo give my best complete final answer to the task respond using
the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Give me an analysis around
apple.\n\nThis is the expected criteria for your final answer: 1 bullet point
about apple that''s under 15 words.\nyou MUST return the actual complete content
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
use the tools available and give your best Final Answer, your job depends on
it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '924'
content-type:
- application/json
cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
_cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88a91a59c4c8-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:48 GMT
Server:
- cloudflare
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_6e9a5fd1d83c6eac8070b4e9b9b94f10
http_version: HTTP/1.1
status_code: 401
- request:
body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
have a lot of experience with cat.\nYour personal goal is: Express hot takes
on cat.\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
{"role": "user", "content": "\nCurrent Task: Give me an analysis around cat.\n\nThis
is the expected criteria for your final answer: 1 bullet point about cat that''s
under 15 words.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '914'
content-type:
- application/json
cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
_cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88a91f2ef8d9-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:48 GMT
Server:
- cloudflare
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_d0ddaee2449c771a230de2b6362c8d99
http_version: HTTP/1.1
status_code: 401
version: 1

View File

@@ -1,89 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are dog Researcher. You
have a lot of experience with dog.\nYour personal goal is: Express hot takes
on dog.\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
{"role": "user", "content": "\nCurrent Task: Give me an analysis around dog.\n\nThis
is the expected criteria for your final answer: 1 bullet point about dog that''s
under 15 words.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '914'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.61.0
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.61.0
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"error\": {\n \"message\": \"Incorrect API key provided:
sk-proj-********************************************************************************************************************************************************sLcA.
You can find your API key at https://platform.openai.com/account/api-keys.\",\n
\ \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\":
\"invalid_api_key\"\n }\n}\n"
headers:
CF-RAY:
- 922e88a1b9ef2805-SEA
Connection:
- keep-alive
Content-Length:
- '414'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 19 Mar 2025 17:01:47 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=xOCSjxqrC.2f2I2kISNioXWWErqIIGNKPkTsoEzcVRY-1742403707-1.0.1.1-bG9XNEXc7EBBNdNl9ff975bzPeeXgfmzAyYN1AesFIuQ2gEcmP67jHmnN6ABv.JJVDhw9oF5SREuZ3PUkYA8Xu6UQeWnIQxKi10X_40KN38;
path=/; expires=Wed, 19-Mar-25 17:31:47 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=RypiiyRYeZ0ZoVTLFkoT3v1Yzk.qiNfGEnh9gk4.bYg-1742403707485-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
vary:
- Origin
x-request-id:
- req_5980db57b01be333b0f977bb03a15bca
http_version: HTTP/1.1
status_code: 401
version: 1

View File

@@ -3,6 +3,8 @@
import hashlib
import json
import os
from functools import partial
from typing import Tuple, Union
from unittest.mock import MagicMock, patch

import pytest
@@ -215,6 +217,75 @@ def test_multiple_output_type_error():
    )


def test_guardrail_type_error():
    desc = "Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting."
    expected_output = "Bullet point list of 5 interesting ideas."

    # Lambda function
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=lambda x: (True, x),
    )

    # Function
    def guardrail_fn(x: TaskOutput) -> tuple[bool, TaskOutput]:
        return (True, x)

    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=guardrail_fn,
    )

    class Object:
        def guardrail_fn(self, x: TaskOutput) -> tuple[bool, TaskOutput]:
            return (True, x)

        @classmethod
        def guardrail_class_fn(cls, x: TaskOutput) -> tuple[bool, str]:
            return (True, x)

        @staticmethod
        def guardrail_static_fn(x: TaskOutput) -> tuple[bool, Union[str, TaskOutput]]:
            return (True, x)

    obj = Object()

    # Method
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=obj.guardrail_fn,
    )

    # Class method
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=Object.guardrail_class_fn,
    )

    # Static method
    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=Object.guardrail_static_fn,
    )

    def error_fn(x: TaskOutput, y: bool) -> Tuple[bool, TaskOutput]:
        return (y, x)

    Task(
        description=desc,
        expected_output=expected_output,
        guardrail=partial(error_fn, y=True),
    )

    with pytest.raises(ValidationError):
        Task(
            description=desc,
            expected_output=expected_output,
            guardrail=error_fn,
        )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_pydantic_sequential():
    class ScoreOutput(BaseModel):

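The new `test_guardrail_type_error` cases above accept lambdas, plain functions, bound methods, classmethods, staticmethods, and `functools.partial` objects, while rejecting a two-argument callable. A minimal sketch of how such an arity check can be done with `inspect.signature` — `accepts_single_arg` is a hypothetical helper, not CrewAI's actual validator:

```python
import inspect
from functools import partial


def accepts_single_arg(fn) -> bool:
    # inspect.signature unwraps bound methods and functools.partial,
    # so a bound `self`/`cls` or a pre-filled argument is not counted.
    try:
        sig = inspect.signature(fn)
    except (TypeError, ValueError):
        return False
    required = [
        p
        for p in sig.parameters.values()
        if p.default is inspect.Parameter.empty
        and p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
    ]
    return len(required) == 1


print(accepts_single_arg(lambda x: (True, x)))                   # → True
print(accepts_single_arg(partial(lambda x, y: (y, x), y=True)))  # → True
print(accepts_single_arg(lambda x, y: (y, x)))                   # → False
```

Because the check counts only parameters without defaults, a `partial` that pre-fills the extra argument passes while the raw two-argument function fails, matching the test's expectations.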

@@ -1,209 +0,0 @@
"""Test for the kickoff_for_each_parallel method in Crew class."""

import concurrent.futures
from unittest.mock import MagicMock, patch

import pytest

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.task import Task


def test_kickoff_for_each_parallel_single_input():
    """Tests if kickoff_for_each_parallel works with a single input."""
    inputs = [{"topic": "dog"}]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Mock the kickoff method to avoid API calls
    expected_output = CrewOutput(raw="Dogs are loyal companions.")
    with patch.object(Crew, "kickoff", return_value=expected_output):
        results = crew.kickoff_for_each_parallel(inputs=inputs)

    assert len(results) == 1
    assert results[0].raw == "Dogs are loyal companions."


def test_kickoff_for_each_parallel_multiple_inputs():
    """Tests if kickoff_for_each_parallel works with multiple inputs."""
    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Mock the kickoff method to avoid API calls
    expected_outputs = [
        CrewOutput(raw="Dogs are loyal companions."),
        CrewOutput(raw="Cats are independent pets."),
        CrewOutput(raw="Apples are nutritious fruits."),
    ]

    with patch.object(Crew, "copy") as mock_copy:
        # Setup mock crew copies
        crew_copies = []
        for i in range(len(inputs)):
            crew_copy = MagicMock()
            crew_copy.kickoff.return_value = expected_outputs[i]
            crew_copies.append(crew_copy)
        mock_copy.side_effect = crew_copies

        results = crew.kickoff_for_each_parallel(inputs=inputs)

    assert len(results) == len(inputs)
    # Since ThreadPoolExecutor returns results in completion order, not input order,
    # we just check that all expected outputs are in the results
    result_texts = [result.raw for result in results]
    expected_texts = [output.raw for output in expected_outputs]
    for expected_text in expected_texts:
        assert expected_text in result_texts


def test_kickoff_for_each_parallel_empty_input():
    """Tests if kickoff_for_each_parallel handles an empty input list."""
    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    results = crew.kickoff_for_each_parallel(inputs=[])
    assert results == []


def test_kickoff_for_each_parallel_invalid_input():
    """Tests if kickoff_for_each_parallel raises TypeError for invalid input types."""
    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # No need to mock here since we're testing input validation which happens before any API calls
    with pytest.raises(TypeError):
        # Pass a string instead of a list
        crew.kickoff_for_each_parallel("invalid input")


def test_kickoff_for_each_parallel_error_handling():
    """Tests error handling in kickoff_for_each_parallel when kickoff raises an error."""
    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    with patch.object(Crew, "copy") as mock_copy:
        # Setup mock crew copies
        crew_copies = []
        for i in range(len(inputs)):
            crew_copy = MagicMock()
            # Make the third crew copy raise an exception
            if i == 2:
                crew_copy.kickoff.side_effect = Exception("Simulated kickoff error")
            else:
                crew_copy.kickoff.return_value = f"Output for {inputs[i]['topic']}"
            crew_copies.append(crew_copy)
        mock_copy.side_effect = crew_copies

        with pytest.raises(Exception, match="Simulated kickoff error"):
            crew.kickoff_for_each_parallel(inputs=inputs)


def test_kickoff_for_each_parallel_max_workers():
    """Tests if kickoff_for_each_parallel respects the max_workers parameter."""
    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Mock both ThreadPoolExecutor and crew.copy to avoid API calls
    with patch.object(
        concurrent.futures,
        "ThreadPoolExecutor",
        wraps=concurrent.futures.ThreadPoolExecutor,
    ) as mock_executor:
        with patch.object(Crew, "copy") as mock_copy:
            # Setup mock crew copies
            crew_copies = []
            for _ in range(len(inputs)):
                crew_copy = MagicMock()
                crew_copy.kickoff.return_value = CrewOutput(raw="Test output")
                crew_copies.append(crew_copy)
            mock_copy.side_effect = crew_copies

            crew.kickoff_for_each_parallel(inputs=inputs, max_workers=2)

            mock_executor.assert_called_once_with(max_workers=2)


@@ -0,0 +1,34 @@
from unittest.mock import Mock

from crewai.utilities.events.base_events import CrewEvent
from crewai.utilities.events.crewai_event_bus import crewai_event_bus


class TestEvent(CrewEvent):
    pass


def test_specific_event_handler():
    mock_handler = Mock()

    @crewai_event_bus.on(TestEvent)
    def handler(source, event):
        mock_handler(source, event)

    event = TestEvent(type="test_event")
    crewai_event_bus.emit("source_object", event)

    mock_handler.assert_called_once_with("source_object", event)


def test_wildcard_event_handler():
    mock_handler = Mock()

    @crewai_event_bus.on(CrewEvent)
    def handler(source, event):
        mock_handler(source, event)

    event = TestEvent(type="test_event")
    crewai_event_bus.emit("source_object", event)

    mock_handler.assert_called_once_with("source_object", event)
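The wildcard test above depends on `emit()` dispatching through `isinstance()`, so a handler registered for a base event class also receives derived events. A minimal standalone sketch of that pattern (class and method names assumed, not CrewAI's actual implementation):

```python
from collections import defaultdict


class EventBus:
    """Handlers registered for a base event class also receive
    events of any derived class, via isinstance() dispatch."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type):
        def decorator(handler):
            self._handlers[event_type].append(handler)
            return handler
        return decorator

    def emit(self, source, event):
        # Call every handler whose registered type matches the event's
        # class or any of its base classes.
        for event_type, handlers in self._handlers.items():
            if isinstance(event, event_type):
                for handler in handlers:
                    handler(source, event)


class BaseEvent:
    pass


class DerivedEvent(BaseEvent):
    pass


bus = EventBus()
received = []


@bus.on(BaseEvent)
def log_event(source, event):
    received.append((source, type(event).__name__))


bus.emit("src", DerivedEvent())
print(received)  # → [('src', 'DerivedEvent')]
```

A handler registered for `DerivedEvent` alone would never see `BaseEvent` instances, since `isinstance` only matches the event's own class and its ancestors.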