Compare commits

..

3 Commits

Author SHA1 Message Date
Brandon Hancock (bhancock_ai)
b1c4c2c887 Merge branch 'main' into bugfix/add-crewai-chat-docs 2025-01-21 11:05:33 -05:00
Brandon Hancock
932537a037 add version number 2025-01-21 10:13:52 -05:00
Brandon Hancock
b3c2e4e26d add docs for crewai chat 2025-01-21 10:02:04 -05:00
138 changed files with 4611 additions and 21311 deletions

.gitignore (1 line changed)
View File

@@ -22,4 +22,3 @@ crew_tasks_output.json
.ruff_cache
.venv
agentops.log
-test_flow.html

View File

@@ -1,18 +1,10 @@
<div align="center">
-![Logo of CrewAI](./docs/crewai_logo.png)
+![Logo of CrewAI, two people rowing on a boat](./docs/crewai_logo.png)
# **CrewAI**
-**CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
+🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
-**CrewAI Enterprise**
-Want to plan, build (+ no code), deploy, monitor and iterate your agents: [CrewAI Enterprise](https://www.crewai.com/enterprise). Designed for complex, real-world applications, our enterprise solution offers:
-- **Seamless Integrations**
-- **Scalable & Secure Deployment**
-- **Actionable Insights**
-- **24/7 Support**
<h3>
@@ -198,7 +190,7 @@ research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
-    the current year is 2025.
+    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher
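Both sides of this hunk interpolate `{topic}` from the inputs passed at kickoff. A minimal sketch of how that wiring typically looks in a generated project (`ResearchCrew` and its module path are hypothetical names):

```python Code
from research_crew.crew import ResearchCrew  # hypothetical generated module

# {topic} in tasks.yaml is filled from these kickoff inputs.
result = ResearchCrew().crew().kickoff(inputs={"topic": "AI Agents"})
print(result.raw)
```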

View File

@@ -43,7 +43,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
-| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
+| **Embedder Config** _(optional)_ | `embedder_config` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
@@ -152,7 +152,7 @@ agent = Agent(
    use_system_prompt=True,  # Default: True
    tools=[SerperDevTool()],  # Optional: List of tools
    knowledge_sources=None,  # Optional: List of knowledge sources
-    embedder=None,  # Optional: Custom embedder configuration
+    embedder_config=None,  # Optional: Custom embedder configuration
    system_template=None,  # Optional: Custom system prompt template
    prompt_template=None,  # Optional: Custom prompt template
    response_template=None,  # Optional: Custom response template
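Whichever spelling your installed version uses (`embedder` on one side of this diff, `embedder_config` on the other), the value is a provider/config dict. A minimal sketch, assuming the `embedder` spelling and an OpenAI embedding model:

```python Code
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find and summarize relevant information",
    backstory="A meticulous analyst",
    # Rename this keyword to embedder_config on versions that use that spelling.
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
```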

View File

@@ -12,7 +12,7 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
To use the CrewAI CLI, make sure you have CrewAI installed:
-```shell Terminal
+```shell
pip install crewai
```
@@ -20,7 +20,7 @@ pip install crewai
The basic structure of a CrewAI CLI command is:
-```shell Terminal
+```shell
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
@@ -30,7 +30,7 @@ crewai [COMMAND] [OPTIONS] [ARGUMENTS]
Create a new crew or flow.
-```shell Terminal
+```shell
crewai create [OPTIONS] TYPE NAME
```
@@ -38,7 +38,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
-```shell Terminal
+```shell
crewai create crew my_new_crew
crewai create flow my_new_flow
```
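For orientation, `crewai create crew my_new_crew` scaffolds a project roughly like the following (the exact layout may vary by version):

```
my_new_crew/
├── pyproject.toml
├── .env
└── src/
    └── my_new_crew/
        ├── main.py
        ├── crew.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```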
@@ -47,14 +47,14 @@ crewai create flow my_new_flow
Show the installed version of CrewAI.
-```shell Terminal
+```shell
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
-```shell Terminal
+```shell
crewai version
crewai version --tools
```
@@ -63,7 +63,7 @@ crewai version --tools
Train the crew for a specified number of iterations.
-```shell Terminal
+```shell
crewai train [OPTIONS]
```
@@ -71,7 +71,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
-```shell Terminal
+```shell
crewai train -n 10 -f my_training_data.pkl
```
@@ -79,14 +79,14 @@ crewai train -n 10 -f my_training_data.pkl
Replay the crew execution from a specific task.
-```shell Terminal
+```shell
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
-```shell Terminal
+```shell
crewai replay -t task_123456
```
@@ -94,7 +94,7 @@ crewai replay -t task_123456
Retrieve your latest crew.kickoff() task outputs.
-```shell Terminal
+```shell
crewai log-tasks-outputs
```
@@ -102,7 +102,7 @@ crewai log-tasks-outputs
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
-```shell Terminal
+```shell
crewai reset-memories [OPTIONS]
```
@@ -113,7 +113,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
-```shell Terminal
+```shell
crewai reset-memories --long --short
crewai reset-memories --all
```
@@ -122,7 +122,7 @@ crewai reset-memories --all
Test the crew and evaluate the results.
-```shell Terminal
+```shell
crewai test [OPTIONS]
```
@@ -130,7 +130,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
-```shell Terminal
+```shell
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -138,7 +138,7 @@ crewai test -n 5 -m gpt-3.5-turbo
Run the crew.
-```shell Terminal
+```shell
crewai run
```
<Note>
@@ -153,7 +153,7 @@ Starting in version `0.98.0`, when you run the `crewai chat` command, you start
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
-```shell Terminal
+```shell
crewai chat
```
<Note>

View File

@@ -30,7 +30,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
-| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
+| **Output Log File** _(optional)_ | `output_log_file` | Whether you want a file with the complete crew output and execution. Set it to True to save logs.txt in the current folder, or pass a string with the full path and name of the file. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
@@ -240,23 +240,6 @@ print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}") print(f"Token Usage: {crew_output.token_usage}")
``` ```
-## Accessing Crew Logs
-You can see real-time logs of the crew execution by setting `output_log_file` to `True` (boolean) or a file name (string). Events can be logged as both `file_name.txt` and `file_name.json`.
-If set to `True`, logs are saved as `logs.txt`.
-If `output_log_file` is set to `False` or `None`, no logs are written.
-```python Code
-# Save crew logs
-crew = Crew(output_log_file=True)             # Logs will be saved as logs.txt
-crew = Crew(output_log_file="file_name")      # Logs will be saved as file_name.txt
-crew = Crew(output_log_file="file_name.txt")  # Logs will be saved as file_name.txt
-crew = Crew(output_log_file="file_name.json") # Logs will be saved as file_name.json
-```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
@@ -296,9 +279,9 @@ print(result)
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
-- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
+- `kickoff_for_each()`: Executes tasks for each agent individually.
- `kickoff_async()`: Initiates the workflow asynchronously.
-- `kickoff_for_each_async()`: Executes tasks concurrently for each provided input event or item, leveraging asynchronous processing.
+- `kickoff_for_each_async()`: Executes tasks for each agent individually in an asynchronous manner.
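Regardless of which wording the docs settle on, `kickoff_for_each()` runs the whole crew once per input dict. A minimal sketch assuming an assembled `crew`:

```python Code
# One full crew run per input set; results come back in the same order.
results = crew.kickoff_for_each(
    inputs=[{"topic": "AI in healthcare"}, {"topic": "AI in finance"}]
)
for crew_output in results:
    print(crew_output.raw)
```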
```python Code
# Start the crew's task execution

View File

@@ -232,18 +232,18 @@ class UnstructuredExampleFlow(Flow):
    def first_method(self):
        # The state automatically includes an 'id' field
        print(f"State ID: {self.state['id']}")
-        self.state['counter'] = 0
-        self.state['message'] = "Hello from structured flow"
+        self.state.message = "Hello from structured flow"
+        self.state.counter = 0
    @listen(first_method)
    def second_method(self):
-        self.state['counter'] += 1
-        self.state['message'] += " - updated"
+        self.state.counter += 1
+        self.state.message += " - updated"
    @listen(second_method)
    def third_method(self):
-        self.state['counter'] += 1
-        self.state['message'] += " - updated again"
+        self.state.counter += 1
+        self.state.message += " - updated again"
        print(f"State after third_method: {self.state}")

View File

@@ -91,7 +91,7 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
```
-Here's another example with the `CrewDoclingSource`. The CrewDoclingSource is actually quite versatile and can handle multiple file formats including MD, PDF, DOCX, HTML, and more.
+Here's another example with the `CrewDoclingSource`. The CrewDoclingSource is actually quite versatile and can handle multiple file formats including TXT, PDF, DOCX, HTML, and more.
<Note>
You need to install `docling` for the following example to work: `uv add docling`
@@ -152,10 +152,10 @@ Here are examples of how to use different types of knowledge sources:
### Text File Knowledge Source
```python
-from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
+from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
# Create a text file knowledge source
-text_source = TextFileKnowledgeSource(
+text_source = CrewDoclingSource(
    file_paths=["document.txt", "another.txt"]
)
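Once created, the source is attached through the `knowledge_sources` attribute listed in the agent table above; a brief usage sketch (assuming the files exist locally):

```python Code
from crewai import Agent

agent = Agent(
    role="Document analyst",
    goal="Answer questions grounded in the provided documents",
    backstory="A careful reader of source material",
    knowledge_sources=[text_source],  # the source created above
)
```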
@@ -324,13 +324,6 @@ agent = Agent(
    verbose=True,
    allow_delegation=False,
    llm=gemini_llm,
-    embedder={
-        "provider": "google",
-        "config": {
-            "model": "models/text-embedding-004",
-            "api_key": GEMINI_API_KEY,
-        }
-    }
)
task = Task(

View File

@@ -27,6 +27,155 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
</Card>
</CardGroup>
## Available Models and Their Capabilities
Here's a detailed breakdown of supported models and their capabilities; you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):
<Tabs>
<Tab title="OpenAI">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
<Note>
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>
<Tab title="Nvidia NIM">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation. |
| nvidia/nemotron-4-mini-hindi-4b-instruct | 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language. |
| nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA in order to improve the helpfulness of LLM-generated responses. |
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
| nvidia/nemotron-mini-4b-instruct | 8,192 tokens | General-purpose tasks |
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute-efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing. |
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Supports Chinese and English chat, coding, math, instruction following, and quiz solving |
<Note>
NVIDIA's NIM support for models is expanding continuously! For the most up-to-date list of available models, please visit build.nvidia.com.
</Note>
</Tab>
<Tab title="Gemini">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
<Tip>
Google's Gemini models are all multimodal, supporting audio, images, video, and text, with support for context caching, JSON schema, and function calling.
These models are available via API key from
[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
[Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models).
</Tip>
</Tab>
<Tab title="Groq">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>
<Tab title="SambaNova">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large context tasks |
| Llama 3.1 405B | 8,192 tokens | High-performance and output quality |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks, multimodal |
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality|
| Qwen2 family | 8,192 tokens | High-performance and output quality |
<Tip>
[SambaNova](https://cloud.sambanova.ai/) has several models with fast inference speed at full precision.
</Tip>
</Tab>
<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Info>
Provider selection should consider factors like:
- API availability in your region
- Pricing structure
- Required features (e.g., streaming, function calling)
- Performance requirements
</Info>
</Tab>
</Tabs>
## Setting Up Your LLM
There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
@@ -55,12 +204,95 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
```yaml
researcher:
-# Agent Definition
role: Research Specialist
goal: Conduct comprehensive research and analysis
backstory: A dedicated research professional with years of experience
verbose: true
-llm: openai/gpt-4o-mini # your model here
-# (see provider configuration examples below for more)
+# Model Selection (uncomment your choice)
# OpenAI Models - Known for reliability and performance
llm: openai/gpt-4o-mini
# llm: openai/gpt-4 # More accurate but expensive
# llm: openai/gpt-4-turbo # Fast with large context
# llm: openai/gpt-4o # Optimized for longer texts
# llm: openai/o1-preview # Latest features
# llm: openai/o1-mini # Cost-effective
# Azure Models - For enterprise deployments
# llm: azure/gpt-4o-mini
# llm: azure/gpt-4
# llm: azure/gpt-35-turbo
# Anthropic Models - Strong reasoning capabilities
# llm: anthropic/claude-3-opus-20240229-v1:0
# llm: anthropic/claude-3-sonnet-20240229-v1:0
# llm: anthropic/claude-3-haiku-20240307-v1:0
# llm: anthropic/claude-2.1
# llm: anthropic/claude-2.0
# Google Models - Strong reasoning, large cachable context window, multimodal
# llm: gemini/gemini-1.5-pro-latest
# llm: gemini/gemini-1.5-flash-latest
# llm: gemini/gemini-1.5-flash-8b-latest
# AWS Bedrock Models - Enterprise-grade
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
# llm: bedrock/anthropic.claude-v2:1
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
# llm: mistral/mistral-small-latest
# Groq Models - Fast inference
# llm: groq/mixtral-8x7b-32768
# llm: groq/llama-3.1-70b-versatile
# llm: groq/llama-3.2-90b-text-preview
# llm: groq/gemma2-9b-it
# llm: groq/gemma-7b-it
# IBM watsonx.ai Models - Enterprise features
# llm: watsonx/ibm/granite-13b-chat-v2
# llm: watsonx/meta-llama/llama-3-1-70b-instruct
# llm: watsonx/bigcode/starcoder2-15b
# Ollama Models - Local deployment
# llm: ollama/llama3:70b
# llm: ollama/codellama
# llm: ollama/mistral
# llm: ollama/mixtral
# llm: ollama/phi
# Fireworks AI Models - Specialized tasks
# llm: fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct
# llm: fireworks_ai/accounts/fireworks/models/mixtral-8x7b
# llm: fireworks_ai/accounts/fireworks/models/zephyr-7b-beta
# Perplexity AI Models - Research focused
# llm: pplx/llama-3.1-sonar-large-128k-online
# llm: pplx/mistral-7b-instruct
# llm: pplx/codellama-34b-instruct
# llm: pplx/mixtral-8x7b-instruct
# Hugging Face Models - Community models
# llm: huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct
# llm: huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1
# llm: huggingface/tiiuae/falcon-180B-chat
# llm: huggingface/google/gemma-7b-it
# Nvidia NIM Models - GPU-optimized
# llm: nvidia_nim/meta/llama3-70b-instruct
# llm: nvidia_nim/mistral/mixtral-8x7b
# llm: nvidia_nim/google/gemma-7b
# SambaNova Models - Enterprise AI
# llm: sambanova/Meta-Llama-3.1-8B-Instruct
# llm: sambanova/BioMistral-7B
# llm: sambanova/Falcon-180B
```
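Any of the commented `provider/model` strings above can equally be passed straight to an agent in Python; a minimal sketch:

```python Code
from crewai import Agent

researcher = Agent(
    role="Research Specialist",
    goal="Conduct comprehensive research and analysis",
    backstory="A dedicated research professional with years of experience",
    verbose=True,
    llm="openai/gpt-4o-mini",  # same provider/model string as in the YAML
)
```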
<Info>
@@ -108,465 +340,6 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
</Tab>
</Tabs>
## Provider Configuration Examples
CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
```
Example usage in your CrewAI project:
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4", # call model by provider/model_name
temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
</Accordion>
<Accordion title="Anthropic">
```toml Code
ANTHROPIC_API_KEY=sk-ant-...
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
)
```
</Accordion>
<Accordion title="Google">
Set the following environment variables in your `.env` file:
```toml Code
# Option 1: Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Get credentials from your Google Cloud Console and save them to a JSON file with the following code:
```python Code
import json
file_path = 'path/to/vertex_ai_service_account.json'
# Load the JSON file
with open(file_path, 'r') as file:
vertex_credentials = json.load(file)
# Convert the credentials to a JSON string
vertex_credentials_json = json.dumps(vertex_credentials)
```
Example usage in your CrewAI project:
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro-latest",
temperature=0.7,
vertex_credentials=vertex_credentials_json
)
```
Google offers a range of powerful models optimized for different use cases:
| Model | Context Window | Best For |
|-----------------------|----------------|------------------------------------------------------------------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>
<Accordion title="Azure">
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="azure/gpt-4",
api_version="2023-05-15"
)
```
</Accordion>
<Accordion title="AWS Bedrock">
```toml Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
</Accordion>
<Accordion title="Amazon SageMaker">
```toml Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="sagemaker/<my-endpoint>"
)
```
</Accordion>
<Accordion title="Mistral">
Set the following environment variables in your `.env` file:
```toml Code
MISTRAL_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="mistral/mistral-large-latest",
temperature=0.7
)
```
</Accordion>
<Accordion title="Nvidia NIM">
Set the following environment variables in your `.env` file:
```toml Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
Nvidia NIM provides a comprehensive suite of models for various use cases, from general-purpose tasks to specialized applications.
| Model | Context Window | Best For |
|-------------------------------------------------------------------------|----------------|-------------------------------------------------------------------|
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation. |
| nvidia/nemotron-4-mini-hindi-4b-instruct | 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language. |
| nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Customized for enhanced helpfulness in responses |
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
| nvidia/nemotron-mini-4b-instruct | 8,192 tokens | General-purpose tasks |
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute-efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing. |
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Supports Chinese and English chat, coding, math, instruction following, and quiz solving |
</Accordion>
<Accordion title="Groq">
Set the following environment variables in your `.env` file:
```toml Code
GROQ_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="groq/llama-3.2-90b-text-preview",
temperature=0.7
)
```
| Model | Context Window | Best For |
|-------------------|------------------|--------------------------------------------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
</Accordion>
<Accordion title="IBM watsonx.ai">
Set the following environment variables in your `.env` file:
```toml Code
# Required
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="watsonx/meta-llama/llama-3-1-70b-instruct",
base_url="https://api.watsonx.ai/v1"
)
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure:
```python Code
llm = LLM(
model="ollama/llama3:70b",
base_url="http://localhost:11434"
)
```
</Accordion>
<Accordion title="Fireworks AI">
Set the following environment variables in your `.env` file:
```toml Code
FIREWORKS_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Perplexity AI">
Set the following environment variables in your `.env` file:
```toml Code
PERPLEXITY_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="llama-3.1-sonar-large-128k-online",
base_url="https://api.perplexity.ai/"
)
```
</Accordion>
<Accordion title="Hugging Face">
Set the following environment variables in your `.env` file:
```toml Code
HUGGINGFACE_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
base_url="your_api_endpoint"
)
```
</Accordion>
<Accordion title="SambaNova">
Set the following environment variables in your `.env` file:
```toml Code
SAMBANOVA_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="sambanova/Meta-Llama-3.1-8B-Instruct",
temperature=0.7
)
```
| Model | Context Window | Best For |
|--------------------|------------------------|----------------------------------------------|
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large context tasks |
| Llama 3.1 405B | 8,192 tokens | High-performance and output quality |
| Llama 3.2 Series | 8,192 tokens | General-purpose, multimodal tasks |
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality |
| Qwen2 family | 8,192 tokens | High-performance and output quality |
</Accordion>
<Accordion title="Cerebras">
Set the following environment variables in your `.env` file:
```toml Code
# Required
CEREBRAS_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="cerebras/llama3.1-70b",
temperature=0.7,
max_tokens=8192
)
```
<Info>
Cerebras features:
- Fast inference speeds
- Competitive pricing
- Good balance of speed and quality
- Support for long context windows
</Info>
</Accordion>
<Accordion title="Open Router">
Set the following environment variables in your `.env` file:
```toml Code
OPENROUTER_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
import os

llm = LLM(
    model="openrouter/deepseek/deepseek-r1",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"]  # reads the env var set above
)
```
<Info>
Open Router models:
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
</Accordion>
</AccordionGroup>
## Structured LLM Calls
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
For example, you can define a Pydantic model to represent the expected response structure and pass it as the `response_format` when instantiating the LLM. The model will then be used to convert the LLM output into a structured Python object.
```python Code
from crewai import LLM
from pydantic import BaseModel
class Dog(BaseModel):
name: str
age: int
breed: str
llm = LLM(model="gpt-4o", response_format=Dog)
response = llm.call(
"Analyze the following messages and return the name, age, and breed. "
"Meet Kona! She is 3 years old and is a black german shepherd."
)
print(response)
# Output:
# Dog(name='Kona', age=3, breed='black german shepherd')
```
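If, as the comment above suggests, the call returns the validated Pydantic instance, its fields can then be used directly (a hypothetical follow-up):

```python Code
# Hypothetical follow-up, assuming `response` is the parsed Dog instance.
print(response.name)   # Kona
print(response.breed)  # black german shepherd
```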
## Advanced Features and Optimization
Learn how to get the most out of your LLM configuration:
@@ -635,6 +408,262 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
</AccordionGroup>
## Provider Configuration Examples
<AccordionGroup>
<Accordion title="OpenAI">
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
```
Example usage:
```python Code
from crewai import LLM
llm = LLM(
model="gpt-4",
temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
)
```
</Accordion>
<Accordion title="Anthropic">
```toml Code
ANTHROPIC_API_KEY=sk-ant-...
```
Example usage:
```python Code
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
)
```
</Accordion>
<Accordion title="Google">
```toml Code
# Option 1. Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Example usage:
```python Code
llm = LLM(
model="gemini/gemini-1.5-pro-latest",
temperature=0.7
)
```
</Accordion>
<Accordion title="Azure">
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
```
Example usage:
```python Code
llm = LLM(
model="azure/gpt-4",
api_version="2023-05-15"
)
```
</Accordion>
<Accordion title="AWS Bedrock">
```toml Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
</Accordion>
<Accordion title="Mistral">
```toml Code
MISTRAL_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="mistral/mistral-large-latest",
temperature=0.7
)
```
</Accordion>
<Accordion title="Nvidia NIM">
```toml Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Groq">
```toml Code
GROQ_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="groq/llama-3.2-90b-text-preview",
temperature=0.7
)
```
</Accordion>
<Accordion title="IBM watsonx.ai">
```python Code
# Required
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
```
Example usage:
```python Code
llm = LLM(
model="watsonx/meta-llama/llama-3-1-70b-instruct",
base_url="https://api.watsonx.ai/v1"
)
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama3:70b`
3. Configure:
```python Code
llm = LLM(
model="ollama/llama3:70b",
base_url="http://localhost:11434"
)
```
</Accordion>
<Accordion title="Fireworks AI">
```python Code
FIREWORKS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Perplexity AI">
```python Code
PERPLEXITY_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="llama-3.1-sonar-large-128k-online",
base_url="https://api.perplexity.ai/"
)
```
</Accordion>
<Accordion title="Hugging Face">
```python Code
HUGGINGFACE_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
base_url="your_api_endpoint"
)
```
</Accordion>
<Accordion title="SambaNova">
```python Code
SAMBANOVA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="sambanova/Meta-Llama-3.1-8B-Instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Cerebras">
```python Code
# Required
CEREBRAS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="cerebras/llama3.1-70b",
temperature=0.7,
max_tokens=8192
)
```
<Info>
Cerebras features:
- Fast inference speeds
- Competitive pricing
- Good balance of speed and quality
- Support for long context windows
</Info>
</Accordion>
</AccordionGroup>
## Common Issues and Solutions ## Common Issues and Solutions
<Tabs> <Tabs>

View File

@@ -58,107 +58,41 @@ my_crew = Crew(
### Example: Use Custom Memory Instances, e.g. FAISS as the VectorDB ### Example: Use Custom Memory Instances, e.g. FAISS as the VectorDB
```python Code ```python Code
from crewai import Crew, Process from crewai import Crew, Agent, Task, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage import LTMSQLiteStorage, RAGStorage
from typing import List, Optional
# Assemble your crew with memory capabilities # Assemble your crew with memory capabilities
my_crew: Crew = Crew( my_crew = Crew(
agents=[...], agents=[...],
tasks=[...], tasks=[...],
process = Process.sequential, process="Process.sequential",
memory=True, memory=True,
# Long-term memory for persistent storage across sessions long_term_memory=EnhanceLongTermMemory(
long_term_memory = LongTermMemory(
storage=LTMSQLiteStorage( storage=LTMSQLiteStorage(
db_path="/my_crew1/long_term_memory_storage.db" db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
) )
), ),
# Short-term memory for current context using RAG short_term_memory=EnhanceShortTermMemory(
short_term_memory = ShortTermMemory( storage=CustomRAGStorage(
storage = RAGStorage( crew_name="my_crew",
embedder_config={ storage_type="short_term",
"provider": "openai", data_dir="//my_data_dir",
"config": { model=embedder["model"],
"model": 'text-embedding-3-small' dimension=embedder["dimension"],
}
},
type="short_term",
path="/my_crew1/"
)
), ),
), ),
# Entity memory for tracking key information about entities entity_memory=EnhanceEntityMemory(
entity_memory = EntityMemory( storage=CustomRAGStorage(
storage=RAGStorage( crew_name="my_crew",
embedder_config={ storage_type="entities",
"provider": "openai", data_dir="//my_data_dir",
"config": { model=embedder["model"],
"model": 'text-embedding-3-small' dimension=embedder["dimension"],
} ),
},
type="short_term",
path="/my_crew1/"
)
), ),
verbose=True, verbose=True,
) )
``` ```
## Security Considerations
When configuring memory storage:
- Use environment variables for storage paths (e.g., `CREWAI_STORAGE_DIR`)
- Never hardcode sensitive information like database credentials
- Consider access permissions for storage directories
- Use relative paths when possible to maintain portability
Example using environment variables:
```python
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage
# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
crew = Crew(
memory=True,
long_term_memory=LongTermMemory(
storage=LTMSQLiteStorage(
db_path="{storage_path}/memory.db".format(storage_path=storage_path)
)
)
)
```
## Configuration Examples
### Basic Memory Configuration
```python
from crewai import Crew
from crewai.memory import LongTermMemory
# Simple memory configuration
crew = Crew(memory=True) # Uses default storage locations
```
### Custom Storage Configuration
```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage
# Configure custom storage paths
crew = Crew(
memory=True,
long_term_memory=LongTermMemory(
storage=LTMSQLiteStorage(db_path="./memory.db")
)
)
```
## Integrating Mem0 for Enhanced User Memory ## Integrating Mem0 for Enhanced User Memory
[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences. [Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
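As a quick orientation before the configuration details, a minimal sketch of wiring Mem0 into a crew's user memory might look like the following; the `MEM0_API_KEY` variable name and the exact `memory_config` keys are assumptions to verify against the current Mem0 integration:
```python Code
import os
from crewai import Crew, Process

# Assumed: the Mem0 client reads its API key from this environment variable.
os.environ["MEM0_API_KEY"] = "m0-..."

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    # Assumed shape: a "mem0" provider entry scoped to a single user.
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```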
@@ -251,12 +185,7 @@ my_crew = Crew(
process=Process.sequential, process=Process.sequential,
memory=True, memory=True,
verbose=True, verbose=True,
embedder={ embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
"provider": "openai",
"config": {
"model": 'text-embedding-3-small'
}
}
) )
``` ```
@@ -282,19 +211,6 @@ my_crew = Crew(
### Using Google AI embeddings ### Using Google AI embeddings
#### Prerequisites
Before using Google AI embeddings, ensure you have:
- Access to the Gemini API
- The necessary API keys and permissions
You will need to update your *pyproject.toml* dependencies:
```YAML
dependencies = [
"google-generativeai>=0.8.4", #main version in January/2025 - crewai v.0.100.0 and crewai-tools 0.33.0
"crewai[tools]>=0.100.0,<1.0.0"
]
```
```python Code ```python Code
from crewai import Crew, Agent, Task, Process from crewai import Crew, Agent, Task, Process
@@ -308,7 +224,7 @@ my_crew = Crew(
"provider": "google", "provider": "google",
"config": { "config": {
"api_key": "<YOUR_API_KEY>", "api_key": "<YOUR_API_KEY>",
"model": "<model_name>" "model_name": "<model_name>"
} }
} }
) )
@@ -326,15 +242,13 @@ my_crew = Crew(
process=Process.sequential, process=Process.sequential,
memory=True, memory=True,
verbose=True, verbose=True,
embedder={ embedder=OpenAIEmbeddingFunction(
"provider": "openai", api_key="YOUR_API_KEY",
"config": { api_base="YOUR_API_BASE_PATH",
"api_key": "YOUR_API_KEY", api_type="azure",
"api_base": "YOUR_API_BASE_PATH", api_version="YOUR_API_VERSION",
"api_version": "YOUR_API_VERSION", model_name="text-embedding-3-small"
"model_name": 'text-embedding-3-small' )
}
}
) )
``` ```
@@ -350,15 +264,12 @@ my_crew = Crew(
process=Process.sequential, process=Process.sequential,
memory=True, memory=True,
verbose=True, verbose=True,
embedder={ embedder=GoogleVertexEmbeddingFunction(
"provider": "vertexai", project_id="YOUR_PROJECT_ID",
"config": { region="YOUR_REGION",
"project_id"="YOUR_PROJECT_ID", api_key="YOUR_API_KEY",
"region"="YOUR_REGION", model_name="textembedding-gecko"
"api_key"="YOUR_API_KEY", )
"model_name"="textembedding-gecko"
}
}
) )
``` ```
@@ -377,7 +288,7 @@ my_crew = Crew(
"provider": "cohere", "provider": "cohere",
"config": { "config": {
"api_key": "YOUR_API_KEY", "api_key": "YOUR_API_KEY",
"model": "<model_name>" "model_name": "<model_name>"
} }
} }
) )
@@ -397,7 +308,7 @@ my_crew = Crew(
"provider": "voyageai", "provider": "voyageai",
"config": { "config": {
"api_key": "YOUR_API_KEY", "api_key": "YOUR_API_KEY",
"model": "<model_name>" "model_name": "<model_name>"
} }
} }
) )
@@ -447,65 +358,6 @@ my_crew = Crew(
) )
``` ```
### Using Amazon Bedrock embeddings
```python Code
# Note: Ensure you have installed `boto3` for Bedrock embeddings to work.
import os
import boto3
from crewai import Crew, Agent, Task, Process
boto3_session = boto3.Session(
region_name=os.environ.get("AWS_REGION_NAME"),
aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
)
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
embedder={
"provider": "bedrock",
"config":{
"session": boto3_session,
"model": "amazon.titan-embed-text-v2:0",
"vector_dimension": 1024
}
},
verbose=True
)
```
### Adding Custom Embedding Function
```python Code
from crewai import Crew, Agent, Task, Process
from chromadb import Documents, EmbeddingFunction, Embeddings
# Create a custom embedding function
class CustomEmbedder(EmbeddingFunction):
    def __call__(self, input: Documents) -> Embeddings:
        # Return one (dummy) embedding per input document.
        return [[1.0, 2.0, 3.0] for _ in input]
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "custom",
"config": {
"embedder": CustomEmbedder()
}
}
)
```
### Resetting Memory ### Resetting Memory
```shell ```shell

View File

@@ -81,8 +81,8 @@ my_crew.kickoff()
3. **Collect Data:** 3. **Collect Data:**
- Search for the latest papers, articles, and reports published in 2024 and early 2025. - Search for the latest papers, articles, and reports published in 2023 and early 2024.
- Use keywords like "Large Language Models 2025", "AI LLM advancements", "AI ethics 2025", etc. - Use keywords like "Large Language Models 2024", "AI LLM advancements", "AI ethics 2024", etc.
4. **Analyze Findings:** 4. **Analyze Findings:**

View File

@@ -38,7 +38,6 @@ crew = Crew(
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. | | **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. | | **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. | | **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Whether the task should have a human review the final answer of the agent. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. | | **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. | | **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. | | **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
@@ -69,7 +68,7 @@ research_task:
description: > description: >
Conduct a thorough research about {topic} Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given Make sure you find any interesting and relevant information given
the current year is 2025. the current year is 2024.
expected_output: > expected_output: >
A list with 10 bullet points of the most relevant information about {topic} A list with 10 bullet points of the most relevant information about {topic}
agent: researcher agent: researcher
@@ -155,7 +154,7 @@ research_task = Task(
description=""" description="""
Conduct a thorough research about AI Agents. Conduct a thorough research about AI Agents.
Make sure you find any interesting and relevant information given Make sure you find any interesting and relevant information given
the current year is 2025. the current year is 2024.
""", """,
expected_output=""" expected_output="""
A list with 10 bullet points of the most relevant information about AI Agents A list with 10 bullet points of the most relevant information about AI Agents
@@ -268,7 +267,7 @@ analysis_task = Task(
Task guardrails provide a way to validate and transform task outputs before they Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria. feedback to agents when their output doesn't meet specific criteria.
### Using Task Guardrails ### Using Task Guardrails
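A minimal sketch of a guardrail, assuming `Task` accepts a `guardrail` callable that returns a `(success, value)` tuple, where a failed check sends the string back to the agent as feedback:
```python Code
from typing import Any, Tuple

from crewai import Task, TaskOutput

def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
    """Accept the output only if it stays under the word limit."""
    if len(result.raw.split()) <= 200:
        return (True, result.raw)  # success: pass the content through
    return (False, "Output exceeds 200 words; please shorten it.")

blog_task = Task(
    description="Write a blog post about AI Agents",
    expected_output="A blog post under 200 words",
    agent=writer,  # assumes a `writer` agent defined elsewhere
    guardrail=validate_word_count,
)
```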

View File

@@ -60,12 +60,12 @@ writer = Agent(
# Create tasks for your agents # Create tasks for your agents
task1 = Task( task1 = Task(
description=( description=(
"Conduct a comprehensive analysis of the latest advancements in AI in 2025. " "Conduct a comprehensive analysis of the latest advancements in AI in 2024. "
"Identify key trends, breakthrough technologies, and potential industry impacts. " "Identify key trends, breakthrough technologies, and potential industry impacts. "
"Compile your findings in a detailed report. " "Compile your findings in a detailed report. "
"Make sure to check with a human if the draft is good before finalizing your answer." "Make sure to check with a human if the draft is good before finalizing your answer."
), ),
expected_output='A comprehensive full report on the latest AI advancements in 2025, leave nothing out', expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
agent=researcher, agent=researcher,
human_input=True human_input=True
) )
@@ -76,7 +76,7 @@ task2 = Task(
"Your post should be informative yet accessible, catering to a tech-savvy audience. " "Your post should be informative yet accessible, catering to a tech-savvy audience. "
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future." "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
), ),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2025', expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer, agent=writer,
human_input=True human_input=True
) )

View File

@@ -1,100 +0,0 @@
---
title: Agent Monitoring with Langfuse
description: Learn how to integrate Langfuse with CrewAI via OpenTelemetry using OpenLit
icon: magnifying-glass-chart
---
# Integrate Langfuse with CrewAI
This notebook demonstrates how to integrate **Langfuse** with **CrewAI** using OpenTelemetry via the **OpenLit** SDK. By the end of this notebook, you will be able to trace your CrewAI applications with Langfuse for improved observability and debugging.
> **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source LLM engineering platform. It provides tracing and monitoring capabilities for LLM applications, helping developers debug, analyze, and optimize their AI systems. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and APIs/SDKs.
[![Langfuse Overview Video](https://github.com/user-attachments/assets/3926b288-ff61-4b95-8aa1-45d041c70866)](https://langfuse.com/watch-demo)
## Get Started
We'll walk through a simple example of using CrewAI and integrating it with Langfuse via OpenTelemetry using OpenLit.
### Step 1: Install Dependencies
```python
%pip install langfuse openlit crewai crewai_tools
```
### Step 2: Set Up Environment Variables
Set your Langfuse API keys and configure OpenTelemetry export settings to send traces to Langfuse. Please refer to the [Langfuse OpenTelemetry Docs](https://langfuse.com/docs/opentelemetry/get-started) for more information on the Langfuse OpenTelemetry endpoint `/api/public/otel` and authentication.
```python
import os
import base64
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_AUTH=base64.b64encode(f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()).decode()
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel" # EU data region
# os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://us.cloud.langfuse.com/api/public/otel" # US data region
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"
# your openai key
os.environ["OPENAI_API_KEY"] = "sk-..."
```
### Step 3: Initialize OpenLit
Initialize the OpenLit OpenTelemetry instrumentation SDK to start capturing OpenTelemetry traces.
```python
import openlit
openlit.init()
```
### Step 4: Create a Simple CrewAI Application
We'll create a simple CrewAI application where multiple agents collaborate to answer a user's question.
```python
from crewai import Agent, Task, Crew
from crewai_tools import (
WebsiteSearchTool
)
web_rag_tool = WebsiteSearchTool()
writer = Agent(
role="Writer",
goal="You make math engaging and understandable for young children through poetry",
backstory="You're an expert in writing haikus but you know nothing of math.",
tools=[web_rag_tool],
)
task = Task(description=("What is {multiplication}?"),
expected_output=("Compose a haiku that includes the answer."),
agent=writer)
crew = Crew(
agents=[writer],
tasks=[task],
share_crew=False
)
```
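Note that the snippet above only assembles the crew; to actually generate traces, run it and supply the `{multiplication}` placeholder used in the task, for example:
```python
result = crew.kickoff(inputs={"multiplication": "3 * 3"})
print(result)
```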
### Step 5: See Traces in Langfuse
After running the agent, you can view the traces generated by your CrewAI application in [Langfuse](https://cloud.langfuse.com). You should see detailed steps of the LLM interactions, which can help you debug and optimize your AI agent.
![CrewAI example trace in Langfuse](https://langfuse.com/images/cookbook/integration_crewai/crewai-example-trace.png)
_[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e2cf380ffc8d47d28da98f136140642b?timestamp=2025-02-05T15%3A12%3A02.717Z&observation=3b32338ee6a5d9af)_
## References
- [Langfuse OpenTelemetry Docs](https://langfuse.com/docs/opentelemetry/get-started)

View File

@@ -1,206 +0,0 @@
---
title: Agent Monitoring with MLflow
description: Quickly start monitoring your Agents with MLflow.
icon: bars-staggered
---
# MLflow Overview
[MLflow](https://mlflow.org/) is an open-source platform to assist machine learning practitioners and teams in handling the complexities of the machine learning process.
It provides a tracing feature that enhances LLM observability in your Generative AI applications by capturing detailed information about the execution of your application's services.
Tracing provides a way to record the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.
![Overview of MLflow crewAI tracing usage](/images/mlflow-tracing.gif)
### Features
- **Tracing Dashboard**: Monitor activities of your crewAI agents with detailed dashboards that include inputs, outputs and metadata of spans.
- **Automated Tracing**: A fully automated integration with crewAI, which can be enabled by running `mlflow.crewai.autolog()`.
- **Manual Trace Instrumentation with minor efforts**: Customize trace instrumentation through MLflow's high-level fluent APIs such as decorators, function wrappers and context managers.
- **OpenTelemetry Compatibility**: MLflow Tracing supports exporting traces to an OpenTelemetry Collector, which can then be used to export traces to various backends such as Jaeger, Zipkin, and AWS X-Ray.
- **Package and Deploy Agents**: Package and deploy your crewAI agents to an inference server with a variety of deployment targets.
- **Securely Host LLMs**: Host multiple LLMs from various providers in one unified endpoint through the MLflow gateway.
- **Evaluation**: Evaluate your crewAI agents with a wide range of metrics using a convenient API `mlflow.evaluate()`.
## Setup Instructions
<Steps>
<Step title="Install MLflow package">
```shell
# The crewAI integration is available in mlflow>=2.19.0
pip install mlflow
```
</Step>
<Step title="Start MFflow tracking server">
```shell
# This process is optional, but it is recommended to use MLflow tracking server for better visualization and broader features.
mlflow server
```
</Step>
<Step title="Initialize MLflow in Your Application">
Add the following two lines to your application code:
```python
import mlflow
mlflow.crewai.autolog()
# Optional: Set a tracking URI and an experiment name if you have a tracking server
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("CrewAI")
```
Example Usage for tracing CrewAI Agents:
```python
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai_tools import SerperDevTool, WebsiteSearchTool
from textwrap import dedent
content = "Users name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
content=content, metadata={"preference": "personal"}
)
search_tool = WebsiteSearchTool()
class TripAgents:
    def city_selection_agent(self):
        return Agent(
            role="City Selection Expert",
            goal="Select the best city based on weather, season, and prices",
            backstory="An expert in analyzing travel data to pick ideal destinations",
            tools=[
                search_tool,
            ],
            verbose=True,
        )

    def local_expert(self):
        return Agent(
            role="Local Expert at this city",
            goal="Provide the BEST insights about the selected city",
            backstory="""A knowledgeable local guide with extensive information
            about the city, its attractions and customs""",
            tools=[search_tool],
            verbose=True,
        )


class TripTasks:
    def identify_task(self, agent, origin, cities, interests, range):
        return Task(
            description=dedent(
                f"""
                Analyze and select the best city for the trip based
                on specific criteria such as weather patterns, seasonal
                events, and travel costs. This task involves comparing
                multiple cities, considering factors like current weather
                conditions, upcoming cultural or seasonal events, and
                overall travel expenses.
                Your final answer must be a detailed
                report on the chosen city, and everything you found out
                about it, including the actual flight costs, weather
                forecast and attractions.

                Traveling from: {origin}
                City Options: {cities}
                Trip Date: {range}
                Traveler Interests: {interests}
                """
            ),
            agent=agent,
            expected_output="Detailed report on the chosen city including flight costs, weather forecast, and attractions",
        )

    def gather_task(self, agent, origin, interests, range):
        return Task(
            description=dedent(
                f"""
                As a local expert on this city you must compile an
                in-depth guide for someone traveling there and wanting
                to have THE BEST trip ever!
                Gather information about key attractions, local customs,
                special events, and daily activity recommendations.
                Find the best spots to go to, the kind of place only a
                local would know.
                This guide should provide a thorough overview of what
                the city has to offer, including hidden gems, cultural
                hotspots, must-visit landmarks, weather forecasts, and
                high level costs.
                The final answer must be a comprehensive city guide,
                rich in cultural insights and practical tips,
                tailored to enhance the travel experience.

                Trip Date: {range}
                Traveling from: {origin}
                Traveler Interests: {interests}
                """
            ),
            agent=agent,
            expected_output="Comprehensive city guide including hidden gems, cultural hotspots, and practical travel tips",
        )


class TripCrew:
    def __init__(self, origin, cities, date_range, interests):
        self.cities = cities
        self.origin = origin
        self.interests = interests
        self.date_range = date_range

    def run(self):
        agents = TripAgents()
        tasks = TripTasks()

        city_selector_agent = agents.city_selection_agent()
        local_expert_agent = agents.local_expert()

        identify_task = tasks.identify_task(
            city_selector_agent,
            self.origin,
            self.cities,
            self.interests,
            self.date_range,
        )
        gather_task = tasks.gather_task(
            local_expert_agent, self.origin, self.interests, self.date_range
        )

        crew = Crew(
            agents=[city_selector_agent, local_expert_agent],
            tasks=[identify_task, gather_task],
            verbose=True,
            memory=True,
            knowledge={
                "sources": [string_source],
                "metadata": {"preference": "personal"},
            },
        )

        result = crew.kickoff()
        return result


trip_crew = TripCrew("California", "Tokyo", "Dec 12 - Dec 20", "sports")
result = trip_crew.run()
print(result)
```
Refer to [MLflow Tracing Documentation](https://mlflow.org/docs/latest/llms/tracing/index.html) for more configurations and use cases.
</Step>
<Step title="Visualize Activities of Agents">
Now traces for your crewAI agents are captured by MLflow.
Let's visit MLflow tracking server to view the traces and get insights into your Agents.
Open `127.0.0.1:5000` on your browser to visit MLflow tracking server.
<Frame caption="MLflow Tracing Dashboard">
<img src="/images/mlflow1.png" alt="MLflow tracing example with crewai" />
</Frame>
</Step>
</Steps>

View File

@@ -45,7 +45,6 @@ image_analyst = Agent(
# Create a task for image analysis # Create a task for image analysis
task = Task( task = Task(
description="Analyze the product image at https://example.com/product.jpg and provide a detailed description", description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
expected_output="A detailed description of the product image",
agent=image_analyst agent=image_analyst
) )
@@ -82,7 +81,6 @@ inspection_task = Task(
3. Compliance with standards 3. Compliance with standards
Provide a detailed report highlighting any issues found. Provide a detailed report highlighting any issues found.
""", """,
expected_output="A detailed report highlighting any issues found",
agent=expert_analyst agent=expert_analyst
) )

View File

@@ -0,0 +1,211 @@
# Portkey Integration with CrewAI
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **250+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing, plus cost and performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
1. **Install Required Packages:**
```bash
pip install -qU crewai portkey-ai
```
2. **Configure the LLM Client:**
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
gpt_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy", # We are using Virtual key
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
)
)
```
3. **Create and Run Your First Agent:**
```python
from crewai import Agent, Task, Crew
# Define your agents with roles and goals
coder = Agent(
role='Software developer',
goal='Write clear, concise code on demand',
backstory='An expert coder with a keen eye for software trends.',
llm=gpt_llm
)
# Create tasks for your agents
task1 = Task(
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
expected_output="A clear and concise HTML code",
agent=coder
)
# Instantiate your crew
crew = Crew(
agents=[coder],
tasks=[task1],
)
result = crew.kickoff()
print(result)
```
## Key Features
| Feature | Description |
|---------|-------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All of the features below are enabled through Portkey's Config system, which lets you define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
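In practice, a Config is attached when you build the LLM client. A sketch reusing the imports from the getting-started snippet, where `pc-xxxx` stands in as a placeholder for a Config ID created in the dashboard:
```python
configured_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # requests go through Portkey's gateway, not a direct provider key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config="pc-xxxx",  # placeholder Config ID from the Portkey Dashboard
    )
)
```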
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
model="claude-3-5-sonnet-latest",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="anthropic_agent"
)
)
# Azure OpenAI Configuration
azure_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="azure_agent"
)
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
"cache": {
"mode": "semantic", # or "simple" for exact matching
}
}
```
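The config is then attached to the client like any other Config, for example via `createHeaders` (a sketch reusing the imports above; a saved Config ID works in place of the inline dict):
```python
cached_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        config=config,  # the caching config defined above
    )
)
```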
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
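As a sketch, a Config that retries transient failures and falls back across providers might look like the following; treat the field names as indicative and confirm them in Portkey's Config documentation:
```py
config = {
    "retry": {"attempts": 3},          # retry transient failures
    "strategy": {"mode": "fallback"},  # try targets in order until one succeeds
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"}
    ]
}
```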
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)

View File

@@ -1,5 +1,5 @@
--- ---
title: Agent Monitoring with Portkey title: Portkey Observability and Guardrails
description: How to use Portkey with CrewAI description: How to use Portkey with CrewAI
icon: key icon: key
--- ---

Binary file not shown (image removed; previously 16 MiB)

Binary file not shown (image removed; previously 382 KiB)

View File

@@ -15,48 +15,10 @@ icon: wrench
If you need to update Python, visit [python.org/downloads](https://python.org/downloads) If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note> </Note>
# Setting Up Your Environment
Before installing CrewAI, it's recommended to set up a virtual environment. This helps isolate your project dependencies and avoid conflicts.
<Steps>
<Step title="Create a Virtual Environment">
Choose your preferred method to create a virtual environment:
**Using venv (Python's built-in tool):**
```shell Terminal
python3 -m venv .venv
```
**Using conda:**
```shell Terminal
conda create -n crewai-env python=3.12
```
</Step>
<Step title="Activate the Virtual Environment">
Activate your virtual environment based on your platform:
**On macOS/Linux (venv):**
```shell Terminal
source .venv/bin/activate
```
**On Windows (venv):**
```shell Terminal
.venv\Scripts\activate
```
**Using conda (all platforms):**
```shell Terminal
conda activate crewai-env
```
</Step>
</Steps>
# Installing CrewAI # Installing CrewAI
Now let's get you set up! 🚀 CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get you set up! 🚀
<Steps> <Steps>
<Step title="Install CrewAI"> <Step title="Install CrewAI">
@@ -110,9 +72,9 @@ Now let's get you set up! 🚀
# Creating a New Project # Creating a New Project
<Tip> <Info>
We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks. We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
</Tip> </Info>
<Steps> <Steps>
<Step title="Generate Project Structure"> <Step title="Generate Project Structure">
@@ -144,17 +106,6 @@ Now let's get you set up! 🚀
</Frame> </Frame>
</Step> </Step>
<Step title="Install Additional Tools">
You can install additional tools using UV:
```shell Terminal
uv add <tool-name>
```
<Tip>
UV is our preferred package manager as it's significantly faster than pip and provides better dependency resolution.
</Tip>
</Step>
<Step title="Customize Your Project"> <Step title="Customize Your Project">
Your project will contain these essential files: Your project will contain these essential files:

View File

@@ -101,10 +101,8 @@
"how-to/conditional-tasks", "how-to/conditional-tasks",
"how-to/agentops-observability", "how-to/agentops-observability",
"how-to/langtrace-observability", "how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability", "how-to/openlit-observability",
"how-to/portkey-observability", "how-to/portkey-observability"
"how-to/langfuse-observability"
] ]
}, },
{ {

View File

@@ -58,7 +58,7 @@ Follow the steps below to get crewing! 🚣‍♂️
description: > description: >
Conduct a thorough research about {topic} Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given Make sure you find any interesting and relevant information given
the current year is 2025. the current year is 2024.
expected_output: > expected_output: >
A list with 10 bullet points of the most relevant information about {topic} A list with 10 bullet points of the most relevant information about {topic}
agent: researcher agent: researcher
@@ -195,10 +195,10 @@ Follow the steps below to get crewing! 🚣‍♂️
<CodeGroup> <CodeGroup>
```markdown output/report.md ```markdown output/report.md
# Comprehensive Report on the Rise and Impact of AI Agents in 2025 # Comprehensive Report on the Rise and Impact of AI Agents in 2024
## 1. Introduction to AI Agents ## 1. Introduction to AI Agents
In 2025, Artificial Intelligence (AI) agents are at the forefront of innovation across various industries. As intelligent systems that can perform tasks typically requiring human cognition, AI agents are paving the way for significant advancements in operational efficiency, decision-making, and overall productivity within sectors like Human Resources (HR) and Finance. This report aims to detail the rise of AI agents, their frameworks, applications, and potential implications on the workforce. In 2024, Artificial Intelligence (AI) agents are at the forefront of innovation across various industries. As intelligent systems that can perform tasks typically requiring human cognition, AI agents are paving the way for significant advancements in operational efficiency, decision-making, and overall productivity within sectors like Human Resources (HR) and Finance. This report aims to detail the rise of AI agents, their frameworks, applications, and potential implications on the workforce.
## 2. Benefits of AI Agents ## 2. Benefits of AI Agents
AI agents bring numerous advantages that are transforming traditional work environments. Key benefits include: AI agents bring numerous advantages that are transforming traditional work environments. Key benefits include:
@@ -252,7 +252,7 @@ Follow the steps below to get crewing! 🚣‍♂️
To stay competitive and harness the full potential of AI agents, organizations must remain vigilant about latest developments in AI technology and consider continuous learning and adaptation in their strategic planning. To stay competitive and harness the full potential of AI agents, organizations must remain vigilant about latest developments in AI technology and consider continuous learning and adaptation in their strategic planning.
## 8. Conclusion ## 8. Conclusion
The emergence of AI agents is undeniably reshaping the workplace landscape in 2025. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment. The emergence of AI agents is undeniably reshaping the workplace landscape in 2024. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
``` ```
</CodeGroup> </CodeGroup>
</Step> </Step>

View File

@@ -1,5 +1,5 @@
--- ---
title: Composio Tool title: Composio
description: Composio provides 250+ production-ready tools for AI agents with flexible authentication management. description: Composio provides 250+ production-ready tools for AI agents with flexible authentication management.
icon: gear-code icon: gear-code
--- ---
@@ -75,8 +75,7 @@ filtered_action_enums = toolset.find_actions_by_use_case(
) )
tools = toolset.get_tools(actions=filtered_action_enums) tools = toolset.get_tools(actions=filtered_action_enums)
``` ```<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools: - Using specific tools:

View File

@@ -8,9 +8,9 @@ icon: file-pen
## Description ## Description
The `FileWriterTool` is a component of the crewai_tools package, designed to simplify the process of writing content to files with cross-platform compatibility (Windows, Linux, macOS). The `FileWriterTool` is a component of the crewai_tools package, designed to simplify the process of writing content to files.
It is particularly useful in scenarios such as generating reports, saving logs, creating configuration files, and more. It is particularly useful in scenarios such as generating reports, saving logs, creating configuration files, and more.
This tool handles path differences across operating systems, supports UTF-8 encoding, and automatically creates directories if they don't exist, making it easier to organize your output reliably across different platforms. This tool supports creating new directories if they don't exist, making it easier to organize your output.
## Installation ## Installation
@@ -43,8 +43,6 @@ print(result)
## Conclusion ## Conclusion
By integrating the `FileWriterTool` into your crews, the agents can reliably write content to files across different operating systems. By integrating the `FileWriterTool` into your crews, the agents can execute the process of writing content to files and creating directories.
This tool is essential for tasks that require saving output data, creating structured file systems, and handling cross-platform file operations. This tool is essential for tasks that require saving output data, creating structured file systems, and more. By adhering to the setup and usage guidelines provided,
It's particularly recommended for Windows users who may encounter file writing issues with standard Python file operations. incorporating this tool into projects is straightforward and efficient.
By adhering to the setup and usage guidelines provided, incorporating this tool into projects is straightforward and ensures consistent file writing behavior across all platforms.
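For reference, a typical invocation looks like the sketch below; the parameter names are assumptions based on common `FileWriterTool` usage and may differ across crewai_tools versions:
```python Code
from crewai_tools import FileWriterTool

# Initialize the tool
file_writer_tool = FileWriterTool()

# Write content to a file in a specified directory (parameter names assumed).
result = file_writer_tool._run(
    filename="ai.txt",
    content="AI is transforming the world.",
    directory="reports",
    overwrite="True",
)
print(result)
```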

View File

@@ -152,7 +152,6 @@ nav:
- Agent Monitoring with AgentOps: 'how-to/AgentOps-Observability.md' - Agent Monitoring with AgentOps: 'how-to/AgentOps-Observability.md'
- Agent Monitoring with LangTrace: 'how-to/Langtrace-Observability.md' - Agent Monitoring with LangTrace: 'how-to/Langtrace-Observability.md'
- Agent Monitoring with OpenLIT: 'how-to/openlit-Observability.md' - Agent Monitoring with OpenLIT: 'how-to/openlit-Observability.md'
- Agent Monitoring with MLflow: 'how-to/mlflow-Observability.md'
- Tools Docs: - Tools Docs:
- Browserbase Web Loader: 'tools/BrowserbaseLoadTool.md' - Browserbase Web Loader: 'tools/BrowserbaseLoadTool.md'
- Code Docs RAG Search: 'tools/CodeDocsSearchTool.md' - Code Docs RAG Search: 'tools/CodeDocsSearchTool.md'

View File

@@ -1,6 +1,6 @@
[project] [project]
name = "crewai" name = "crewai"
version = "0.102.0" version = "0.98.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks." description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md" readme = "README.md"
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<3.13"
@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies # Core Dependencies
"pydantic>=2.4.2", "pydantic>=2.4.2",
"openai>=1.13.3", "openai>=1.13.3",
"litellm==1.60.2", "litellm==1.57.4",
"instructor>=1.3.3", "instructor>=1.3.3",
# Text Processing # Text Processing
"pdfplumber>=0.11.4", "pdfplumber>=0.11.4",
@@ -36,7 +36,6 @@ dependencies = [
"tomli-w>=1.1.0", "tomli-w>=1.1.0",
"tomli>=2.0.2", "tomli>=2.0.2",
"blinker>=1.9.0", "blinker>=1.9.0",
"json5>=0.10.0",
] ]
[project.urls] [project.urls]
@@ -45,7 +44,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI" Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies] [project.optional-dependencies]
tools = ["crewai-tools>=0.36.0"] tools = ["crewai-tools>=0.32.1"]
embeddings = [ embeddings = [
"tiktoken~=0.7.0" "tiktoken~=0.7.0"
] ]

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning, category=UserWarning,
module="pydantic.main", module="pydantic.main",
) )
__version__ = "0.102.0" __version__ = "0.98.0"
__all__ = [ __all__ = [
"Agent", "Agent",
"Crew", "Crew",

View File

@@ -1,13 +1,14 @@
import re import os
import shutil import shutil
import subprocess import subprocess
from typing import Any, Dict, List, Literal, Optional, Sequence, Union from typing import Any, Dict, List, Literal, Optional, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.cli.constants import ENV_VARS, LITELLM_PARAMS
from crewai.knowledge.knowledge import Knowledge from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
@@ -16,20 +17,29 @@ from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.task import Task from crewai.task import Task
from crewai.tools import BaseTool from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.utilities import Converter, Prompts from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description from crewai.utilities.converter import generate_model_description
from crewai.utilities.events.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.llm_utils import create_llm from crewai.utilities.llm_utils import create_llm
from crewai.utilities.token_counter_callback import TokenCalcHandler from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
try:
    import agentops  # type: ignore # Name "agentops" is already defined
    from agentops import track_agent  # type: ignore
except ImportError:

    def track_agent():
        def noop(f):
            return f

        return noop
@track_agent()
class Agent(BaseAgent): class Agent(BaseAgent):
"""Represents an agent in a system. """Represents an agent in a system.
@@ -46,13 +56,13 @@ class Agent(BaseAgent):
llm: The language model that will run the agent. llm: The language model that will run the agent.
function_calling_llm: The language model that will handle the tool calling for this agent, it overrides the crew function_calling_llm. function_calling_llm: The language model that will handle the tool calling for this agent, it overrides the crew function_calling_llm.
max_iter: Maximum number of iterations for an agent to execute a task. max_iter: Maximum number of iterations for an agent to execute a task.
memory: Whether the agent should have memory or not.
max_rpm: Maximum number of requests per minute for the agent execution to be respected. max_rpm: Maximum number of requests per minute for the agent execution to be respected.
verbose: Whether the agent execution should be in verbose mode. verbose: Whether the agent execution should be in verbose mode.
allow_delegation: Whether the agent is allowed to delegate tasks to other agents. allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at agents disposal tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution. step_callback: Callback to be executed after each step of the agent execution.
knowledge_sources: Knowledge sources for the agent. knowledge_sources: Knowledge sources for the agent.
embedder: Embedder configuration for the agent.
""" """
_times_executed: int = PrivateAttr(default=0) _times_executed: int = PrivateAttr(default=0)
@@ -62,6 +72,9 @@ class Agent(BaseAgent):
) )
agent_ops_agent_name: str = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str") agent_ops_agent_name: str = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
agent_ops_agent_id: str = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str") agent_ops_agent_id: str = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class."
)
step_callback: Optional[Any] = Field( step_callback: Optional[Any] = Field(
default=None, default=None,
description="Callback to be executed after each step of the agent execution.", description="Callback to be executed after each step of the agent execution.",
@@ -95,6 +108,10 @@ class Agent(BaseAgent):
default=True, default=True,
description="Keep messages under the context window size by summarizing content.", description="Keep messages under the context window size by summarizing content.",
) )
max_iter: int = Field(
default=20,
description="Maximum number of iterations for an agent to execute a task before giving it's best answer",
)
max_retry_limit: int = Field( max_retry_limit: int = Field(
default=2, default=2,
description="Maximum number of retries for an agent to execute a task when an error occurs.", description="Maximum number of retries for an agent to execute a task when an error occurs.",
@@ -107,10 +124,17 @@ class Agent(BaseAgent):
default="safe", default="safe",
description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).", description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
) )
embedder: Optional[Dict[str, Any]] = Field( embedder_config: Optional[Dict[str, Any]] = Field(
default=None, default=None,
description="Embedder configuration for the agent.", description="Embedder configuration for the agent.",
) )
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
default=None,
description="Knowledge sources for the agent.",
)
_knowledge: Optional[Knowledge] = PrivateAttr(
default=None,
)
@model_validator(mode="after") @model_validator(mode="after")
def post_init_setup(self): def post_init_setup(self):
@@ -137,16 +161,14 @@ class Agent(BaseAgent):
def _set_knowledge(self): def _set_knowledge(self):
try: try:
if self.knowledge_sources: if self.knowledge_sources:
full_pattern = re.compile(r"[^a-zA-Z0-9\-_\r\n]|(\.\.)") knowledge_agent_name = f"{self.role.replace(' ', '_')}"
knowledge_agent_name = f"{re.sub(full_pattern, '_', self.role)}"
if isinstance(self.knowledge_sources, list) and all( if isinstance(self.knowledge_sources, list) and all(
isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
): ):
self.knowledge = Knowledge( self._knowledge = Knowledge(
sources=self.knowledge_sources, sources=self.knowledge_sources,
embedder=self.embedder, embedder_config=self.embedder_config,
collection_name=knowledge_agent_name, collection_name=knowledge_agent_name,
storage=self.knowledge_storage or None,
) )
except (TypeError, ValueError) as e: except (TypeError, ValueError) as e:
raise ValueError(f"Invalid Knowledge Configuration: {str(e)}") raise ValueError(f"Invalid Knowledge Configuration: {str(e)}")
@@ -180,15 +202,13 @@ class Agent(BaseAgent):
if task.output_json: if task.output_json:
# schema = json.dumps(task.output_json, indent=2) # schema = json.dumps(task.output_json, indent=2)
schema = generate_model_description(task.output_json) schema = generate_model_description(task.output_json)
task_prompt += "\n" + self.i18n.slice(
"formatted_task_instructions"
).format(output_format=schema)
elif task.output_pydantic: elif task.output_pydantic:
schema = generate_model_description(task.output_pydantic) schema = generate_model_description(task.output_pydantic)
task_prompt += "\n" + self.i18n.slice(
"formatted_task_instructions" task_prompt += "\n" + self.i18n.slice("formatted_task_instructions").format(
).format(output_format=schema) output_format=schema
)
if context: if context:
task_prompt = self.i18n.slice("task_with_context").format( task_prompt = self.i18n.slice("task_with_context").format(
@@ -207,8 +227,8 @@ class Agent(BaseAgent):
if memory.strip() != "": if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory) task_prompt += self.i18n.slice("memory").format(memory=memory)
if self.knowledge: if self._knowledge:
agent_knowledge_snippets = self.knowledge.query([task.prompt()]) agent_knowledge_snippets = self._knowledge.query([task.prompt()])
if agent_knowledge_snippets: if agent_knowledge_snippets:
agent_knowledge_context = extract_knowledge_context( agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets agent_knowledge_snippets
@@ -232,15 +252,6 @@ class Agent(BaseAgent):
task_prompt = self._use_trained_data(task_prompt=task_prompt) task_prompt = self._use_trained_data(task_prompt=task_prompt)
try: try:
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
result = self.agent_executor.invoke( result = self.agent_executor.invoke(
{ {
"input": task_prompt, "input": task_prompt,
@@ -250,27 +261,8 @@ class Agent(BaseAgent):
} }
)["output"] )["output"]
except Exception as e: except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e
self._times_executed += 1 self._times_executed += 1
if self._times_executed > self.max_retry_limit: if self._times_executed > self.max_retry_limit:
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e raise e
result = self.execute_task(task, context, tools) result = self.execute_task(task, context, tools)
@@ -283,10 +275,7 @@ class Agent(BaseAgent):
for tool_result in self.tools_results: # type: ignore # Item "None" of "list[Any] | None" has no attribute "__iter__" (not iterable) for tool_result in self.tools_results: # type: ignore # Item "None" of "list[Any] | None" has no attribute "__iter__" (not iterable)
if tool_result.get("result_as_answer", False): if tool_result.get("result_as_answer", False):
result = tool_result["result"] result = tool_result["result"]
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
)
return result return result
def create_agent_executor( def create_agent_executor(
@@ -344,14 +333,14 @@ class Agent(BaseAgent):
tools = agent_tools.tools() tools = agent_tools.tools()
return tools return tools
def get_multimodal_tools(self) -> Sequence[BaseTool]: def get_multimodal_tools(self) -> List[Tool]:
from crewai.tools.agent_tools.add_image_tool import AddImageTool from crewai.tools.agent_tools.add_image_tool import AddImageTool
return [AddImageTool()] return [AddImageTool()]
def get_code_execution_tools(self): def get_code_execution_tools(self):
try: try:
from crewai_tools import CodeInterpreterTool # type: ignore from crewai_tools import CodeInterpreterTool
# Set the unsafe_mode based on the code_execution_mode attribute # Set the unsafe_mode based on the code_execution_mode attribute
unsafe_mode = self.code_execution_mode == "unsafe" unsafe_mode = self.code_execution_mode == "unsafe"
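Both sides of the hunk above derive the interpreter's sandbox flag from the agent's `code_execution_mode`: "safe" keeps Docker sandboxing, "unsafe" runs code in-process. A tiny sketch of that mapping (the flags dict is illustrative, not the CodeInterpreterTool signature):

```python
from typing import Literal


def interpreter_flags(code_execution_mode: Literal["safe", "unsafe"]) -> dict:
    # unsafe_mode=True means direct execution with no Docker sandbox.
    return {"unsafe_mode": code_execution_mode == "unsafe"}


assert interpreter_flags("safe") == {"unsafe_mode": False}
assert interpreter_flags("unsafe") == {"unsafe_mode": True}
```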

View File

@@ -18,12 +18,10 @@ from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge from crewai.tools import BaseTool
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource from crewai.tools.base_tool import Tool
from crewai.tools.base_tool import BaseTool, Tool
from crewai.utilities import I18N, Logger, RPMController from crewai.utilities import I18N, Logger, RPMController
from crewai.utilities.config import process_config from crewai.utilities.config import process_config
from crewai.utilities.converter import Converter
T = TypeVar("T", bound="BaseAgent") T = TypeVar("T", bound="BaseAgent")
@@ -42,7 +40,7 @@ class BaseAgent(ABC, BaseModel):
max_rpm (Optional[int]): Maximum number of requests per minute for the agent execution. max_rpm (Optional[int]): Maximum number of requests per minute for the agent execution.
allow_delegation (bool): Allow delegation of tasks to agents. allow_delegation (bool): Allow delegation of tasks to agents.
tools (Optional[List[Any]]): Tools at the agent's disposal. tools (Optional[List[Any]]): Tools at the agent's disposal.
max_iter (int): Maximum iterations for an agent to execute a task. max_iter (Optional[int]): Maximum iterations for an agent to execute a task.
agent_executor (InstanceOf): An instance of the CrewAgentExecutor class. agent_executor (InstanceOf): An instance of the CrewAgentExecutor class.
llm (Any): Language model that will run the agent. llm (Any): Language model that will run the agent.
crew (Any): Crew to which the agent belongs. crew (Any): Crew to which the agent belongs.
@@ -50,8 +48,6 @@ class BaseAgent(ABC, BaseModel):
cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class. cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class.
tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class. tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class.
max_tokens: Maximum number of tokens for the agent to generate in a response. max_tokens: Maximum number of tokens for the agent to generate in a response.
knowledge_sources: Knowledge sources for the agent.
knowledge_storage: Custom knowledge storage for the agent.
Methods: Methods:
@@ -111,10 +107,10 @@ class BaseAgent(ABC, BaseModel):
default=False, default=False,
description="Enable agent to delegate and ask questions among each other.", description="Enable agent to delegate and ask questions among each other.",
) )
tools: Optional[List[BaseTool]] = Field( tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents' disposal" default_factory=list, description="Tools at agents' disposal"
) )
max_iter: int = Field( max_iter: Optional[int] = Field(
default=25, description="Maximum iterations for an agent to execute a task" default=25, description="Maximum iterations for an agent to execute a task"
) )
agent_executor: InstanceOf = Field( agent_executor: InstanceOf = Field(
@@ -125,27 +121,15 @@ class BaseAgent(ABC, BaseModel):
) )
crew: Any = Field(default=None, description="Crew to which the agent belongs.") crew: Any = Field(default=None, description="Crew to which the agent belongs.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.") i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
cache_handler: Optional[InstanceOf[CacheHandler]] = Field( cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class." default=None, description="An instance of the CacheHandler class."
) )
tools_handler: InstanceOf[ToolsHandler] = Field(
default_factory=ToolsHandler,
description="An instance of the ToolsHandler class.",
)
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
max_tokens: Optional[int] = Field( max_tokens: Optional[int] = Field(
default=None, description="Maximum number of tokens for the agent's execution." default=None, description="Maximum number of tokens for the agent's execution."
) )
knowledge: Optional[Knowledge] = Field(
default=None, description="Knowledge for the agent."
)
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
default=None,
description="Knowledge sources for the agent.",
)
knowledge_storage: Optional[Any] = Field(
default=None,
description="Custom knowledge storage for the agent.",
)
@model_validator(mode="before") @model_validator(mode="before")
@classmethod @classmethod
@@ -255,7 +239,7 @@ class BaseAgent(ABC, BaseModel):
@abstractmethod @abstractmethod
def get_output_converter( def get_output_converter(
self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str
) -> Converter: ):
"""Get the converter class for the agent to create json/pydantic outputs.""" """Get the converter class for the agent to create json/pydantic outputs."""
pass pass
@@ -272,44 +256,13 @@ class BaseAgent(ABC, BaseModel):
"tools_handler", "tools_handler",
"cache_handler", "cache_handler",
"llm", "llm",
"knowledge_sources",
"knowledge_storage",
"knowledge",
} }
# Copy llm # Copy llm and clear callbacks
existing_llm = shallow_copy(self.llm) existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
existing_knowledge_sources = []
for source in self.knowledge_sources:
copied_source = (
source.model_copy()
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)
copied_data = self.model_dump(exclude=exclude) copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None} copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
copied_agent = type(self)(
**copied_data,
llm=existing_llm,
tools=self.tools,
knowledge_sources=existing_knowledge_sources,
knowledge=copied_knowledge,
knowledge_storage=copied_knowledge_storage,
)
return copied_agent return copied_agent
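The removed copy logic above duplicates each knowledge source but points every copy at one shared storage instance, so clones do not fork the vector store. A generic sketch of that pattern with stand-in `Storage`/`Source` classes:

```python
from copy import copy as shallow_copy
from dataclasses import dataclass, field


@dataclass
class Storage:
    records: list[str] = field(default_factory=list)


@dataclass
class Source:
    name: str
    storage: Storage


def copy_sources(sources: list[Source]) -> list[Source]:
    if not sources:
        return []
    shared_storage = sources[0].storage  # one backing store for all copies
    copies = []
    for source in sources:
        duplicate = shallow_copy(source)
        duplicate.storage = shared_storage
        copies.append(duplicate)
    return copies


originals = [Source("docs", Storage()), Source("faq", Storage())]
copies = copy_sources(originals)
assert copies[0].storage is copies[1].storage
```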

View File

@@ -95,34 +95,18 @@ class CrewAgentExecutorMixin:
pass pass
def _ask_human_input(self, final_answer: str) -> str: def _ask_human_input(self, final_answer: str) -> str:
"""Prompt human input with mode-appropriate messaging.""" """Prompt human input for final decision making."""
self._printer.print( self._printer.print(
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m" content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
) )
# Training mode prompt (single iteration)
if self.crew and getattr(self.crew, "_train", False):
prompt = (
"\n\n=====\n"
"## TRAINING MODE: Provide feedback to improve the agent's performance.\n"
"This will be used to train better versions of the agent.\n"
"Please provide detailed feedback about the result quality and reasoning process.\n"
"=====\n"
)
# Regular human-in-the-loop prompt (multiple iterations)
else:
prompt = (
"\n\n=====\n"
"## HUMAN FEEDBACK: Provide feedback on the Final Result and Agent's actions.\n"
"Please follow these guidelines:\n"
" - If you are happy with the result, simply hit Enter without typing anything.\n"
" - Otherwise, provide specific improvement requests.\n"
" - You can provide multiple rounds of feedback until satisfied.\n"
"=====\n"
)
self._printer.print(content=prompt, color="bold_yellow")
response = input()
if response.strip() != "":
self._printer.print(content="\nProcessing your feedback...", color="cyan")
return response
self._printer.print(
content=(
"\n\n=====\n"
"## Please provide feedback on the Final Result and the Agent's actions. "
"Respond with 'looks good' or a similar phrase when you're satisfied.\n"
"=====\n"
),
color="bold_yellow",
)
return input()
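The split above gives training mode a one-shot detailed-feedback prompt, while regular runs treat a blank reply as approval. A compact sketch of that branching:

```python
def feedback_prompt(training_mode: bool) -> str:
    if training_mode:
        return "## TRAINING MODE: provide detailed feedback to improve the agent.\n"
    return "## HUMAN FEEDBACK: hit Enter to accept, or type improvement requests.\n"


print(feedback_prompt(training_mode=False))
```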

View File

@@ -31,11 +31,11 @@ class OutputConverter(BaseModel, ABC):
) )
@abstractmethod @abstractmethod
def to_pydantic(self, current_attempt=1) -> BaseModel: def to_pydantic(self, current_attempt=1):
"""Convert text to pydantic.""" """Convert text to pydantic."""
pass pass
@abstractmethod @abstractmethod
def to_json(self, current_attempt=1) -> dict: def to_json(self, current_attempt=1):
"""Convert text to json.""" """Convert text to json."""
pass pass

View File

@@ -13,17 +13,10 @@ from crewai.agents.parser import (
OutputParserException, OutputParserException,
) )
from crewai.agents.tools_handler import ToolsHandler from crewai.agents.tools_handler import ToolsHandler
from crewai.llm import LLM
from crewai.tools.base_tool import BaseTool from crewai.tools.base_tool import BaseTool
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N, Printer from crewai.utilities import I18N, Printer
from crewai.utilities.constants import MAX_LLM_RETRY, TRAINING_DATA_FILE from crewai.utilities.constants import MAX_LLM_RETRY, TRAINING_DATA_FILE
from crewai.utilities.events import (
ToolUsageErrorEvent,
ToolUsageStartedEvent,
crewai_event_bus,
)
from crewai.utilities.events.tool_usage_events import ToolUsageStartedEvent
from crewai.utilities.exceptions.context_window_exceeding_exception import ( from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException, LLMContextLengthExceededException,
) )
@@ -61,7 +54,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
callbacks: List[Any] = [], callbacks: List[Any] = [],
): ):
self._i18n: I18N = I18N() self._i18n: I18N = I18N()
self.llm: LLM = llm self.llm = llm
self.task = task self.task = task
self.agent = agent self.agent = agent
self.crew = crew self.crew = crew
@@ -87,8 +80,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.tool_name_to_tool_map: Dict[str, BaseTool] = { self.tool_name_to_tool_map: Dict[str, BaseTool] = {
tool.name: tool for tool in self.tools tool.name: tool for tool in self.tools
} }
self.stop = stop_words
if self.llm.stop:
self.llm.stop = list(set(self.llm.stop + self.stop)) self.llm.stop = list(set(self.llm.stop + self.stop))
else:
self.llm.stop = self.stop
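Both versions above merge the executor's stop words into the LLM's stop list without duplicates; the newer side simply seeds `self.stop` first. The merge itself in isolation:

```python
def merge_stop_words(existing: list[str] | None, extra: list[str]) -> list[str]:
    # list(set(...)) mirrors the deduplication above; order is not preserved.
    return list(set((existing or []) + extra))


print(merge_stop_words(None, ["\nObservation:"]))
print(merge_stop_words(["\nObservation:"], ["\nObservation:", "STOP"]))
```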
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]: def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
if "system" in self.prompt: if "system" in self.prompt:
@@ -103,22 +98,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._show_start_logs() self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False)) self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
try:
formatted_answer = self._invoke_loop() formatted_answer = self._invoke_loop()
except AssertionError:
self._printer.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
self._handle_unknown_error(e)
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
else:
raise e
if self.ask_for_human_input: if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer) formatted_answer = self._handle_human_feedback(formatted_answer)
@@ -127,7 +107,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._create_long_term_memory(formatted_answer) self._create_long_term_memory(formatted_answer)
return {"output": formatted_answer.output} return {"output": formatted_answer.output}
def _invoke_loop(self) -> AgentFinish: def _invoke_loop(self):
""" """
Main loop to invoke the agent's thought process until it reaches a conclusion Main loop to invoke the agent's thought process until it reaches a conclusion
or the maximum number of iterations is reached. or the maximum number of iterations is reached.
@@ -144,6 +124,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._enforce_rpm_limit() self._enforce_rpm_limit()
answer = self._get_llm_response() answer = self._get_llm_response()
formatted_answer = self._process_llm_response(answer) formatted_answer = self._process_llm_response(answer)
if isinstance(formatted_answer, AgentAction): if isinstance(formatted_answer, AgentAction):
@@ -161,37 +142,13 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer = self._handle_output_parser_exception(e) formatted_answer = self._handle_output_parser_exception(e)
except Exception as e: except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if self._is_context_length_exceeded(e): if self._is_context_length_exceeded(e):
self._handle_context_length() self._handle_context_length()
continue continue
else:
self._handle_unknown_error(e)
raise e
finally:
self.iterations += 1
# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This assertion confirms we've
# reached a final answer and helps type checking understand this transition.
assert isinstance(formatted_answer, AgentFinish)
self._show_logs(formatted_answer) self._show_logs(formatted_answer)
return formatted_answer return formatted_answer
def _handle_unknown_error(self, exception: Exception) -> None:
"""Handle unknown errors by informing the user."""
self._printer.print(
content="An unknown error occurred. Please check the details below.",
color="red",
)
self._printer.print(
content=f"Error details: {exception}",
color="red",
)
def _has_reached_max_iterations(self) -> bool: def _has_reached_max_iterations(self) -> bool:
"""Check if the maximum number of iterations has been reached.""" """Check if the maximum number of iterations has been reached."""
return self.iterations >= self.max_iter return self.iterations >= self.max_iter
@@ -203,17 +160,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
def _get_llm_response(self) -> str: def _get_llm_response(self) -> str:
"""Call the LLM and return the response, handling any invalid responses.""" """Call the LLM and return the response, handling any invalid responses."""
try:
answer = self.llm.call( answer = self.llm.call(
self.messages, self.messages,
callbacks=self.callbacks, callbacks=self.callbacks,
) )
except Exception as e:
self._printer.print(
content=f"Error during LLM call: {e}",
color="red",
)
raise e
if not answer: if not answer:
self._printer.print( self._printer.print(
@@ -234,6 +184,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error: if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error:
answer = answer.split("Observation:")[0].strip() answer = answer.split("Observation:")[0].strip()
self.iterations += 1
return self._format_answer(answer) return self._format_answer(answer)
def _handle_agent_action( def _handle_agent_action(
@@ -309,11 +260,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._printer.print( self._printer.print(
content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m" content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
) )
description = (
getattr(self.task, "description") if self.task else "Not Found"
)
self._printer.print( self._printer.print(
content=f"\033[95m## Task:\033[00m \033[92m{description}\033[00m" content=f"\033[95m## Task:\033[00m \033[92m{self.task.description}\033[00m"
) )
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]): def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
@@ -355,18 +303,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
) )
def _execute_tool_and_check_finality(self, agent_action: AgentAction) -> ToolResult: def _execute_tool_and_check_finality(self, agent_action: AgentAction) -> ToolResult:
try:
if self.agent:
crewai_event_bus.emit(
self,
event=ToolUsageStartedEvent(
agent_key=self.agent.key,
agent_role=self.agent.role,
tool_name=agent_action.tool,
tool_args=agent_action.tool_input,
tool_class=agent_action.tool,
),
)
tool_usage = ToolUsage( tool_usage = ToolUsage(
tools_handler=self.tools_handler, tools_handler=self.tools_handler,
tools=self.tools, tools=self.tools,
@@ -402,22 +338,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
) )
return ToolResult(result=tool_result, result_as_answer=False) return ToolResult(result=tool_result, result_as_answer=False)
except Exception as e:
# TODO: drop
if self.agent:
crewai_event_bus.emit(
self,
event=ToolUsageErrorEvent( # validation error
agent_key=self.agent.key,
agent_role=self.agent.role,
tool_name=agent_action.tool,
tool_args=agent_action.tool_input,
tool_class=agent_action.tool,
error=str(e),
),
)
raise e
def _summarize_messages(self) -> None: def _summarize_messages(self) -> None:
messages_groups = [] messages_groups = []
for message in self.messages: for message in self.messages:
@@ -466,50 +386,58 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
) )
def _handle_crew_training_output( def _handle_crew_training_output(
self, result: AgentFinish, human_feedback: Optional[str] = None
self, result: AgentFinish, human_feedback: str | None = None
) -> None: ) -> None:
) -> None: ) -> None:
"""Handle the process of saving training data.""" """Function to handle the process of the training data."""
agent_id = str(self.agent.id) # type: ignore agent_id = str(self.agent.id) # type: ignore
train_iteration = (
getattr(self.crew, "_train_iteration", None) if self.crew else None
)
if train_iteration is None or not isinstance(train_iteration, int):
self._printer.print(
content="Invalid or missing train iteration. Cannot save training data.",
color="red",
)
return
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}
# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})
if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
self._printer.print(
content=(
f"No existing training data for agent {agent_id} and iteration "
f"{train_iteration}. Cannot save improved output."
),
color="red",
)
return
# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)
# Load training data
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load()
# Check if training data exists, human input is not requested, and self.crew is valid
if training_data and not self.ask_for_human_input:
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if agent_id in training_data and isinstance(train_iteration, int):
training_data[agent_id][train_iteration][
"improved_output"
] = result.output
training_handler.save(training_data)
else:
self._printer.print(
content="Invalid train iteration type or agent_id not in training data.",
color="red",
)
else:
self._printer.print(
content="Crew is None or does not have _train_iteration attribute.",
color="red",
)
if self.ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": result.output,
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.agent.role, # type: ignore
}
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if isinstance(train_iteration, int):
CrewTrainingHandler(TRAINING_DATA_FILE).append(
train_iteration, agent_id, training_data
)
else:
self._printer.print(
content="Invalid train iteration type. Expected int.",
color="red",
)
else:
self._printer.print(
content="Crew is None or does not have _train_iteration attribute.",
color="red",
)
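The shape of the training data both versions manipulate, keyed by agent id and then train iteration (inferred from the surrounding code, not the exact `CrewTrainingHandler` file format):

```python
import json

training_data: dict[str, dict[int, dict]] = {}
agent_id = "agent-123"
train_iteration = 0

# First pass: store the initial output plus the human feedback.
training_data.setdefault(agent_id, {})[train_iteration] = {
    "initial_output": "Draft answer...",
    "human_feedback": "Add sources.",
}

# Second pass: attach the improved output for the same iteration.
training_data[agent_id][train_iteration]["improved_output"] = "Answer with sources."

print(json.dumps(training_data, indent=2))
```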
def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str: def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
prompt = prompt.replace("{input}", inputs["input"]) prompt = prompt.replace("{input}", inputs["input"])
@@ -525,85 +453,82 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
return {"role": role, "content": prompt} return {"role": role, "content": prompt}
def _handle_human_feedback(self, formatted_answer: AgentFinish) -> AgentFinish: def _handle_human_feedback(self, formatted_answer: AgentFinish) -> AgentFinish:
"""Handle human feedback with different flows for training vs regular use. """
Handles the human feedback loop, allowing the user to provide feedback
on the agent's output and determining if additional iterations are needed.
Args: Parameters:
formatted_answer: The initial AgentFinish result to get feedback on formatted_answer (AgentFinish): The initial output from the agent.
Returns: Returns:
AgentFinish: The final answer after processing feedback AgentFinish: The final output after incorporating human feedback.
""" """
while self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output) human_feedback = self._ask_human_input(formatted_answer.output)
if self._is_training_mode(): if self.crew and self.crew._train:
return self._handle_training_feedback(formatted_answer, human_feedback) self._handle_crew_training_output(formatted_answer, human_feedback)
return self._handle_regular_feedback(formatted_answer, human_feedback) # Make an LLM call to verify if additional changes are requested based on human feedback
additional_changes_prompt = self._i18n.slice(
"human_feedback_classification"
).format(feedback=human_feedback)
def _is_training_mode(self) -> bool: retry_count = 0
"""Check if crew is in training mode.""" llm_call_successful = False
return bool(self.crew and self.crew._train) additional_changes_response = None
def _handle_training_feedback( while retry_count < MAX_LLM_RETRY and not llm_call_successful:
self, initial_answer: AgentFinish, feedback: str try:
) -> AgentFinish: additional_changes_response = (
"""Process feedback for training scenarios with single iteration.""" self.llm.call(
self._handle_crew_training_output(initial_answer, feedback) [
self.messages.append(
self._format_msg( self._format_msg(
self._i18n.slice("feedback_instructions").format(feedback=feedback) additional_changes_prompt, role="system"
) )
],
callbacks=self.callbacks,
) )
improved_answer = self._invoke_loop() .strip()
self._handle_crew_training_output(improved_answer) .lower()
self.ask_for_human_input = False )
return improved_answer llm_call_successful = True
except Exception as e:
retry_count += 1
def _handle_regular_feedback( self._printer.print(
self, current_answer: AgentFinish, initial_feedback: str content=f"Error during LLM call to classify human feedback: {e}. Retrying... ({retry_count}/{MAX_LLM_RETRY})",
) -> AgentFinish: color="red",
"""Process feedback for regular use with potential multiple iterations.""" )
feedback = initial_feedback
answer = current_answer
while self.ask_for_human_input: if not llm_call_successful:
# If the user provides a blank response, assume they are happy with the result self._printer.print(
if feedback.strip() == "": content="Error processing feedback after multiple attempts.",
color="red",
)
self.ask_for_human_input = False self.ask_for_human_input = False
break
if additional_changes_response == "false":
self.ask_for_human_input = False
elif additional_changes_response == "true":
self.ask_for_human_input = True
# Add human feedback to messages
self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
# Invoke the loop again with updated messages
formatted_answer = self._invoke_loop()
if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer)
else: else:
answer = self._process_feedback_iteration(feedback) # Unexpected response
feedback = self._ask_human_input(answer.output)
return answer
def _process_feedback_iteration(self, feedback: str) -> AgentFinish:
"""Process a single feedback iteration."""
self.messages.append(
self._format_msg(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
return self._invoke_loop()
def _log_feedback_error(self, retry_count: int, error: Exception) -> None:
"""Log feedback processing errors."""
self._printer.print( self._printer.print(
content=( content=f"Unexpected response from LLM: '{additional_changes_response}'. Assuming no additional changes requested.",
f"Error processing feedback: {error}. "
f"Retrying... ({retry_count + 1}/{MAX_LLM_RETRY})"
),
color="red", color="red",
) )
self.ask_for_human_input = False
def _log_max_retries_exceeded(self) -> None: return formatted_answer
"""Log when max retries for feedback processing are exceeded."""
self._printer.print(
content=(
f"Failed to process feedback after {MAX_LLM_RETRY} attempts. "
"Ending feedback loop."
),
color="red",
)
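The older loop above asks the LLM to classify feedback as requesting changes ("true"/"false") and retries transient failures up to MAX_LLM_RETRY times. A standalone sketch of that retry-and-classify pattern, with `call_llm` as a stub for `self.llm.call`:

```python
MAX_LLM_RETRY = 3


def call_llm(prompt: str) -> str:
    return "false"  # stub: pretend no further changes were requested


def requests_changes(feedback: str) -> bool | None:
    prompt = f"Does this feedback request changes? Answer true/false: {feedback}"
    for attempt in range(1, MAX_LLM_RETRY + 1):
        try:
            answer = call_llm(prompt).strip().lower()
        except Exception as exc:
            print(f"LLM call failed ({attempt}/{MAX_LLM_RETRY}): {exc}")
            continue
        if answer in ("true", "false"):
            return answer == "true"
        return None  # unexpected response; caller assumes no changes
    return None  # all retries failed


print(requests_changes("looks good"))
```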
def _handle_max_iterations_exceeded(self, formatted_answer): def _handle_max_iterations_exceeded(self, formatted_answer):
""" """

View File

@@ -94,13 +94,6 @@ class CrewAgentParser:
elif includes_answer: elif includes_answer:
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip() final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
# Check whether the final answer ends with triple backticks.
if final_answer.endswith("```"):
# Count occurrences of triple backticks in the final answer.
count = final_answer.count("```")
# If count is odd then it's an unmatched trailing set; remove it.
if count % 2 != 0:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought, final_answer, text) return AgentFinish(thought, final_answer, text)
if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL): if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
@@ -127,10 +120,7 @@ class CrewAgentParser:
regex = r"(.*?)(?:\n\nAction|\n\nFinal Answer)" regex = r"(.*?)(?:\n\nAction|\n\nFinal Answer)"
thought_match = re.search(regex, text, re.DOTALL) thought_match = re.search(regex, text, re.DOTALL)
if thought_match: if thought_match:
thought = thought_match.group(1).strip() return thought_match.group(1).strip()
# Remove any triple backticks from the thought string
thought = thought.replace("```", "").strip()
return thought
return "" return ""
def _clean_action(self, text: str) -> str: def _clean_action(self, text: str) -> str:

View File

@@ -350,10 +350,7 @@ def chat():
Start a conversation with the Crew, collecting user-supplied inputs, Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses. and using the Chat LLM to generate responses.
""" """
click.secho(
"\nStarting a conversation with the Crew\n" "Type 'exit' or Ctrl+C to quit.\n",
)
click.echo("Starting a conversation with the Crew")
run_chat() run_chat()

View File

@@ -1,52 +1,17 @@
import json import json
import platform
import re import re
import sys import sys
import threading
import time
from pathlib import Path from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple from typing import Any, Dict, List, Optional, Set, Tuple
import click import click
import tomli import tomli
from packaging import version
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew from crewai.crew import Crew
from crewai.llm import LLM from crewai.llm import LLM
from crewai.types.crew_chat import ChatInputField, ChatInputs from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm from crewai.utilities.llm_utils import create_llm
MIN_REQUIRED_VERSION = "0.98.0"
def check_conversational_crews_version(
crewai_version: str, pyproject_data: dict
) -> bool:
"""
Check if the installed crewAI version supports conversational crews.
Args:
crewai_version: The current version of crewAI.
pyproject_data: Dictionary containing pyproject.toml data.
Returns:
bool: True if version check passes, False otherwise.
"""
try:
if version.parse(crewai_version) < version.parse(MIN_REQUIRED_VERSION):
click.secho(
"You are using an older version of crewAI that doesn't support conversational crews. "
"Run 'uv upgrade crewai' to get the latest version.",
fg="red",
)
return False
except version.InvalidVersion:
click.secho("Invalid crewAI version format detected.", fg="red")
return False
return True
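The removed version gate above compares the installed crewAI version against MIN_REQUIRED_VERSION with `packaging.version`; the same check in isolation:

```python
from packaging import version

MIN_REQUIRED_VERSION = "0.98.0"


def supports_conversational_crews(installed: str) -> bool:
    try:
        return version.parse(installed) >= version.parse(MIN_REQUIRED_VERSION)
    except version.InvalidVersion:
        return False


assert supports_conversational_crews("0.98.0")
assert not supports_conversational_crews("0.55.2")
assert not supports_conversational_crews("not-a-version")
```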
def run_chat(): def run_chat():
""" """
@@ -54,30 +19,11 @@ def run_chat():
Incorporates crew_name, crew_description, and input fields to build a tool schema. Incorporates crew_name, crew_description, and input fields to build a tool schema.
Exits if crew_name or crew_description are missing. Exits if crew_name or crew_description are missing.
""" """
crewai_version = get_crewai_version()
pyproject_data = read_toml()
if not check_conversational_crews_version(crewai_version, pyproject_data):
return
crew, crew_name = load_crew_and_name() crew, crew_name = load_crew_and_name()
chat_llm = initialize_chat_llm(crew) chat_llm = initialize_chat_llm(crew)
if not chat_llm: if not chat_llm:
return return
# Indicate that the crew is being analyzed
click.secho(
"\nAnalyzing crew and required inputs - this may take 3 to 30 seconds "
"depending on the complexity of your crew.",
fg="white",
)
# Start loading indicator
loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
loading_thread.start()
try:
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm) crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs) crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs) system_message = build_system_message(crew_chat_inputs)
@@ -86,15 +32,7 @@ def run_chat():
introductory_message = chat_llm.call( introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}] messages=[{"role": "system", "content": system_message}]
) )
click.secho(f"\nAssistant: {introductory_message}\n", fg="green")
finally:
# Stop loading indicator
loading_complete.set()
loading_thread.join()
# Indicate that the analysis is complete
click.secho("\nFinished analyzing crew.\n", fg="white")
click.secho(f"Assistant: {introductory_message}\n", fg="green")
messages = [ messages = [
{"role": "system", "content": system_message}, {"role": "system", "content": system_message},
@@ -105,17 +43,15 @@ def run_chat():
crew_chat_inputs.crew_name: create_tool_function(crew, messages), crew_chat_inputs.crew_name: create_tool_function(crew, messages),
} }
click.secho(
"\nEntering an interactive chat loop with function-calling.\n"
"Type 'exit' or Ctrl+C to quit.\n",
fg="cyan",
)
chat_loop(chat_llm, messages, crew_tool_schema, available_functions) chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def show_loading(event: threading.Event):
"""Display animated loading dots while processing."""
while not event.is_set():
print(".", end="", flush=True)
time.sleep(1)
print()
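`show_loading` above prints dots from a background thread until the main thread sets the event; the same mechanism runnable end to end:

```python
import threading
import time


def show_loading(event: threading.Event) -> None:
    while not event.is_set():
        print(".", end="", flush=True)
        time.sleep(0.2)
    print()


loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
loading_thread.start()
try:
    time.sleep(1)  # stand-in for the slow crew analysis step
finally:
    loading_complete.set()
    loading_thread.join()
```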
def initialize_chat_llm(crew: Crew) -> Optional[LLM]: def initialize_chat_llm(crew: Crew) -> Optional[LLM]:
"""Initializes the chat LLM and handles exceptions.""" """Initializes the chat LLM and handles exceptions."""
try: try:
@@ -166,80 +102,16 @@ def create_tool_function(crew: Crew, messages: List[Dict[str, str]]) -> Any:
return run_crew_tool_with_messages return run_crew_tool_with_messages
def flush_input():
"""Flush any pending input from the user."""
if platform.system() == "Windows":
# Windows platform
import msvcrt
while msvcrt.kbhit():
msvcrt.getch()
else:
# Unix-like platforms (Linux, macOS)
import termios
termios.tcflush(sys.stdin, termios.TCIFLUSH)
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions): def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
"""Main chat loop for interacting with the user.""" """Main chat loop for interacting with the user."""
while True: while True:
try: try:
user_input = click.prompt("You", type=str)
if user_input.strip().lower() in ["exit", "quit"]:
# Flush any pending input before accepting new input
flush_input()
user_input = get_user_input()
handle_user_input(
user_input, chat_llm, messages, crew_tool_schema, available_functions
)
except KeyboardInterrupt:
click.echo("\nExiting chat. Goodbye!")
break
except Exception as e:
click.secho(f"An error occurred: {e}", fg="red")
break
def get_user_input() -> str:
"""Collect multi-line user input with exit handling."""
click.secho(
"\nYou (type your message below. Press 'Enter' twice when you're done):",
fg="blue",
)
user_input_lines = []
while True:
line = input()
if line.strip().lower() == "exit":
return "exit"
if line == "":
break
user_input_lines.append(line)
return "\n".join(user_input_lines)
def handle_user_input(
user_input: str,
chat_llm: LLM,
messages: List[Dict[str, str]],
crew_tool_schema: Dict[str, Any],
available_functions: Dict[str, Any],
) -> None:
if user_input.strip().lower() == "exit":
click.echo("Exiting chat. Goodbye!") click.echo("Exiting chat. Goodbye!")
return
break
if not user_input.strip():
click.echo("Empty message. Please provide input or type 'exit' to quit.")
return
messages.append({"role": "user", "content": user_input}) messages.append({"role": "user", "content": user_input})
# Indicate that assistant is processing
click.echo()
click.secho("Assistant is processing your input. Please wait...", fg="green")
# Process assistant's response
final_response = chat_llm.call( final_response = chat_llm.call(
messages=messages, messages=messages,
tools=[crew_tool_schema], tools=[crew_tool_schema],
@@ -249,6 +121,13 @@ def handle_user_input(
messages.append({"role": "assistant", "content": final_response}) messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green") click.secho(f"\nAssistant: {final_response}\n", fg="green")
except KeyboardInterrupt:
click.echo("\nExiting chat. Goodbye!")
break
except Exception as e:
click.secho(f"An error occurred: {e}", fg="red")
break
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict: def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
""" """
@@ -444,10 +323,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
): ):
# Replace placeholders with input names # Replace placeholders with input names
task_description = placeholder_pattern.sub( task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description or "" lambda m: m.group(1), task.description
) )
expected_output = placeholder_pattern.sub( expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output or "" lambda m: m.group(1), task.expected_output
) )
context_texts.append(f"Task Description: {task_description}") context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}") context_texts.append(f"Expected Output: {expected_output}")
@@ -458,10 +337,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
or f"{{{input_name}}}" in agent.backstory or f"{{{input_name}}}" in agent.backstory
): ):
# Replace placeholders with input names # Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "") agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "") agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_backstory = placeholder_pattern.sub( agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or "" lambda m: m.group(1), agent.backstory
) )
context_texts.append(f"Agent Role: {agent_role}") context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}") context_texts.append(f"Agent Goal: {agent_goal}")
@@ -502,20 +381,18 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
for task in crew.tasks: for task in crew.tasks:
# Replace placeholders with input names # Replace placeholders with input names
task_description = placeholder_pattern.sub( task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description or "" lambda m: m.group(1), task.description
) )
expected_output = placeholder_pattern.sub( expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output or "" lambda m: m.group(1), task.expected_output
) )
context_texts.append(f"Task Description: {task_description}") context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}") context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents: for agent in crew.agents:
# Replace placeholders with input names # Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "") agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "") agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_backstory = placeholder_pattern.sub(lambda m: m.group(1), agent.backstory)
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}") context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}") context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}") context_texts.append(f"Agent Backstory: {agent_backstory}")

View File

@@ -2,7 +2,11 @@ import subprocess
import click import click
from crewai.cli.utils import get_crew from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
def reset_memories_command( def reset_memories_command(
@@ -26,34 +30,29 @@ def reset_memories_command(
""" """
try: try:
crew = get_crew()
if not crew:
raise ValueError("No crew found.")
if all: if all:
crew.reset_memories(command_type="all") ShortTermMemory().reset()
EntityMemory().reset()
LongTermMemory().reset()
TaskOutputStorageHandler().reset()
KnowledgeStorage().reset()
click.echo("All memories have been reset.") click.echo("All memories have been reset.")
return
else:
if not any([long, short, entity, kickoff_outputs, knowledge]):
click.echo(
"No memory type specified. Please specify at least one type to reset."
)
return
if long: if long:
crew.reset_memories(command_type="long") LongTermMemory().reset()
click.echo("Long term memory has been reset.") click.echo("Long term memory has been reset.")
if short: if short:
crew.reset_memories(command_type="short") ShortTermMemory().reset()
click.echo("Short term memory has been reset.") click.echo("Short term memory has been reset.")
if entity: if entity:
crew.reset_memories(command_type="entity") EntityMemory().reset()
click.echo("Entity memory has been reset.") click.echo("Entity memory has been reset.")
if kickoff_outputs: if kickoff_outputs:
crew.reset_memories(command_type="kickoff_outputs") TaskOutputStorageHandler().reset()
click.echo("Latest Kickoff outputs stored has been reset.") click.echo("Latest Kickoff outputs stored has been reset.")
if knowledge: if knowledge:
crew.reset_memories(command_type="knowledge") KnowledgeStorage().reset()
click.echo("Knowledge has been reset.") click.echo("Knowledge has been reset.")
except subprocess.CalledProcessError as e: except subprocess.CalledProcessError as e:
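The newer CLI path above routes every flag through `crew.reset_memories(command_type=...)`, where the older path instantiated each memory store directly. A dispatch sketch with a stand-in crew object:

```python
class FakeCrew:
    def reset_memories(self, command_type: str) -> None:
        print(f"reset: {command_type}")


def reset(crew: FakeCrew, **flags: bool) -> None:
    if flags.get("all"):
        crew.reset_memories(command_type="all")
        return
    for name in ("long", "short", "entity", "kickoff_outputs", "knowledge"):
        if flags.get(name):
            crew.reset_memories(command_type=name)


reset(FakeCrew(), short=True, knowledge=True)
```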

View File

@@ -1,3 +1,2 @@
.env .env
__pycache__/ __pycache__/
.DS_Store

View File

@@ -56,8 +56,7 @@ def test():
Test the crew execution and returns the results. Test the crew execution and returns the results.
""" """
inputs = { inputs = {
"topic": "AI LLMs", "topic": "AI LLMs"
"current_year": str(datetime.now().year)
} }
try: try:
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs) {{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }] authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.102.0,<1.0.0" "crewai[tools]>=0.98.0,<1.0.0"
] ]
[project.scripts] [project.scripts]

View File

@@ -1,4 +1,3 @@
.env .env
__pycache__/ __pycache__/
lib/ lib/
.DS_Store

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }] authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.102.0,<1.0.0", "crewai[tools]>=0.98.0,<1.0.0",
] ]
[project.scripts] [project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md" readme = "README.md"
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.102.0" "crewai[tools]>=0.98.0"
] ]
[tool.crewai] [tool.crewai]

View File

@@ -9,7 +9,6 @@ import tomli
from rich.console import Console from rich.console import Console
from crewai.cli.constants import ENV_VARS from crewai.cli.constants import ENV_VARS
from crewai.crew import Crew
if sys.version_info >= (3, 11): if sys.version_info >= (3, 11):
import tomllib import tomllib
@@ -248,64 +247,3 @@ def write_env_file(folder_path, env_vars):
with open(env_file_path, "w") as file: with open(env_file_path, "w") as file:
for key, value in env_vars.items(): for key, value in env_vars.items():
file.write(f"{key}={value}\n") file.write(f"{key}={value}\n")
def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
"""Get the crew instance from the crew.py file."""
try:
import importlib.util
import os
for root, _, files in os.walk("."):
if "crew.py" in files:
crew_path = os.path.join(root, "crew.py")
try:
spec = importlib.util.spec_from_file_location(
"crew_module", crew_path
)
if not spec or not spec.loader:
continue
module = importlib.util.module_from_spec(spec)
try:
sys.modules[spec.name] = module
spec.loader.exec_module(module)
for attr_name in dir(module):
attr = getattr(module, attr_name)
try:
if callable(attr) and hasattr(attr, "crew"):
crew_instance = attr().crew()
return crew_instance
except Exception as e:
print(f"Error processing attribute {attr_name}: {e}")
continue
except Exception as exec_error:
print(f"Error executing module: {exec_error}")
import traceback
print(f"Traceback: {traceback.format_exc()}")
except (ImportError, AttributeError) as e:
if require:
console.print(
f"Error importing crew from {crew_path}: {str(e)}",
style="bold red",
)
continue
break
if require:
console.print("No valid Crew instance found in crew.py", style="bold red")
raise SystemExit
return None
except Exception as e:
if require:
console.print(
f"Unexpected error while loading crew: {str(e)}", style="bold red"
)
raise SystemExit
return None
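The removed `get_crew` helper above walks the working tree for a `crew.py`, imports it by file path, and returns the first attribute exposing a `.crew()` factory. The discovery core, simplified from those lines:

```python
import importlib.util
import os
import sys


def load_first_crew(start: str = "."):
    for root, _, files in os.walk(start):
        if "crew.py" not in files:
            continue
        path = os.path.join(root, "crew.py")
        spec = importlib.util.spec_from_file_location("crew_module", path)
        if not spec or not spec.loader:
            continue
        module = importlib.util.module_from_spec(spec)
        sys.modules[spec.name] = module
        spec.loader.exec_module(module)  # runs user code in crew.py
        for attr_name in dir(module):
            attr = getattr(module, attr_name)
            if callable(attr) and hasattr(attr, "crew"):
                return attr().crew()
    return None
```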

View File

@@ -4,7 +4,6 @@ import re
import uuid import uuid
import warnings import warnings
from concurrent.futures import Future from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5 from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
@@ -38,24 +37,11 @@ from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool from crewai.tools.base_tool import Tool
from crewai.traces.unified_trace_controller import init_crew_main_trace
from crewai.types.usage_metrics import UsageMetrics from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINING_DATA_FILE from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.events.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.formatter import ( from crewai.utilities.formatter import (
aggregate_raw_outputs_from_task_outputs, aggregate_raw_outputs_from_task_outputs,
aggregate_raw_outputs_from_tasks, aggregate_raw_outputs_from_tasks,
@@ -65,6 +51,12 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler from crewai.utilities.training_handler import CrewTrainingHandler
try:
import agentops # type: ignore
except ImportError:
agentops = None
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd") warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -190,9 +182,9 @@ class Crew(BaseModel):
default=None, default=None,
description="Path to the prompt json file to be used for the crew.", description="Path to the prompt json file to be used for the crew.",
) )
output_log_file: Optional[Union[bool, str]] = Field( output_log_file: Optional[str] = Field(
default=None, default=None,
description="Path to the log file to be saved", description="output_log_file",
) )
planning: Optional[bool] = Field( planning: Optional[bool] = Field(
default=False, default=False,
@@ -218,9 +210,8 @@ class Crew(BaseModel):
default=None, default=None,
description="LLM used to handle chatting with the crew.", description="LLM used to handle chatting with the crew.",
) )
knowledge: Optional[Knowledge] = Field( _knowledge: Optional[Knowledge] = PrivateAttr(
default=None, default=None,
description="Knowledge for the crew.",
) )
@field_validator("id", mode="before") @field_validator("id", mode="before")
@@ -282,26 +273,12 @@ class Crew(BaseModel):
if self.entity_memory if self.entity_memory
else EntityMemory(crew=self, embedder_config=self.embedder) else EntityMemory(crew=self, embedder_config=self.embedder)
) )
if (
self.memory_config and "user_memory" in self.memory_config
): # Check for user_memory in config
user_memory_config = self.memory_config["user_memory"]
if isinstance(
user_memory_config, UserMemory
): # Check if it is already an instance
self._user_memory = user_memory_config
elif isinstance(
user_memory_config, dict
): # Check if it's a configuration dict
self._user_memory = UserMemory(
crew=self, **user_memory_config
) # Initialize with config
else:
raise TypeError(
"user_memory must be a UserMemory instance or a configuration dictionary"
)
else:
self._user_memory = None # No user memory if not in config
if hasattr(self, "memory_config") and self.memory_config is not None:
self._user_memory = (
self.user_memory if self.user_memory else UserMemory(crew=self)
)
else:
self._user_memory = None
return self return self
@model_validator(mode="after") @model_validator(mode="after")
@@ -312,9 +289,9 @@ class Crew(BaseModel):
if isinstance(self.knowledge_sources, list) and all( if isinstance(self.knowledge_sources, list) and all(
isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
): ):
self.knowledge = Knowledge( self._knowledge = Knowledge(
sources=self.knowledge_sources, sources=self.knowledge_sources,
embedder=self.embedder, embedder_config=self.embedder,
collection_name="crew", collection_name="crew",
) )
@@ -401,22 +378,6 @@ class Crew(BaseModel):
return self return self
@model_validator(mode="after")
def validate_must_have_non_conditional_task(self) -> "Crew":
"""Ensure that a crew has at least one non-conditional task."""
if not self.tasks:
return self
non_conditional_count = sum(
1 for task in self.tasks if not isinstance(task, ConditionalTask)
)
if non_conditional_count == 0:
raise PydanticCustomError(
"only_conditional_tasks",
"Crew must include at least one non-conditional task",
{},
)
return self
@model_validator(mode="after") @model_validator(mode="after")
def validate_first_task(self) -> "Crew": def validate_first_task(self) -> "Crew":
"""Ensure the first task is not a ConditionalTask.""" """Ensure the first task is not a ConditionalTask."""
@@ -528,16 +489,6 @@ class Crew(BaseModel):
self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {} self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {}
) -> None: ) -> None:
"""Trains the crew for a given number of iterations.""" """Trains the crew for a given number of iterations."""
try:
crewai_event_bus.emit(
self,
CrewTrainStartedEvent(
crew_name=self.name or "crew",
n_iterations=n_iterations,
filename=filename,
inputs=inputs,
),
)
train_crew = self.copy() train_crew = self.copy()
train_crew._setup_for_training(filename) train_crew._setup_for_training(filename)
@@ -552,45 +503,22 @@ class Crew(BaseModel):
result = TaskEvaluator(agent).evaluate_training_data( result = TaskEvaluator(agent).evaluate_training_data(
training_data=training_data, agent_id=str(agent.id) training_data=training_data, agent_id=str(agent.id)
) )
CrewTrainingHandler(filename).save_trained_data( CrewTrainingHandler(filename).save_trained_data(
agent_id=str(agent.role), trained_data=result.model_dump() agent_id=str(agent.role), trained_data=result.model_dump()
) )
crewai_event_bus.emit(
self,
CrewTrainCompletedEvent(
crew_name=self.name or "crew",
n_iterations=n_iterations,
filename=filename,
),
)
except Exception as e:
crewai_event_bus.emit(
self,
CrewTrainFailedEvent(error=str(e), crew_name=self.name or "crew"),
)
self._logger.log("error", f"Training failed: {e}", color="red")
CrewTrainingHandler(TRAINING_DATA_FILE).clear()
CrewTrainingHandler(filename).clear()
raise
@init_crew_main_trace
def kickoff( def kickoff(
self, self,
inputs: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None,
) -> CrewOutput: ) -> CrewOutput:
try:
for before_callback in self.before_kickoff_callbacks: for before_callback in self.before_kickoff_callbacks:
if inputs is None: if inputs is None:
inputs = {} inputs = {}
inputs = before_callback(inputs) inputs = before_callback(inputs)
crewai_event_bus.emit(
self,
CrewKickoffStartedEvent(crew_name=self.name or "crew", inputs=inputs),
)
# Starts the crew to work on its assigned tasks.
"""Starts the crew to work on its assigned tasks."""
self._execution_span = self._telemetry.crew_execution_span(self, inputs)
self._task_output_handler.reset() self._task_output_handler.reset()
self._logging_color = "bold_purple" self._logging_color = "bold_purple"
@@ -636,13 +564,8 @@ class Crew(BaseModel):
self.usage_metrics = UsageMetrics() self.usage_metrics = UsageMetrics()
for metric in metrics: for metric in metrics:
self.usage_metrics.add_usage_metrics(metric) self.usage_metrics.add_usage_metrics(metric)
return result return result
except Exception as e:
crewai_event_bus.emit(
self,
CrewKickoffFailedEvent(error=str(e), crew_name=self.name or "crew"),
)
raise
def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List[CrewOutput]: def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List[CrewOutput]:
"""Executes the Crew's workflow for each input in the list and aggregates results.""" """Executes the Crew's workflow for each input in the list and aggregates results."""
@@ -751,7 +674,12 @@ class Crew(BaseModel):
manager.tools = [] manager.tools = []
raise Exception("Manager agent should not have tools") raise Exception("Manager agent should not have tools")
else: else:
self.manager_llm = create_llm(self.manager_llm)
self.manager_llm = (
getattr(self.manager_llm, "model_name", None)
or getattr(self.manager_llm, "model", None)
or getattr(self.manager_llm, "deployment_name", None)
or self.manager_llm
)
manager = Agent( manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"), role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"), goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
@@ -811,7 +739,6 @@ class Crew(BaseModel):
task, task_outputs, futures, task_index, was_replayed task, task_outputs, futures, task_index, was_replayed
) )
if skipped_task_output: if skipped_task_output:
task_outputs.append(skipped_task_output)
continue continue
if task.async_execution: if task.async_execution:
@@ -835,7 +762,7 @@ class Crew(BaseModel):
context=context, context=context,
tools=tools_for_task, tools=tools_for_task,
) )
task_outputs.append(task_output) task_outputs = [task_output]
self._process_task_result(task, task_output) self._process_task_result(task, task_output)
self._store_execution_log(task, task_output, task_index, was_replayed) self._store_execution_log(task, task_output, task_index, was_replayed)
@@ -856,7 +783,7 @@ class Crew(BaseModel):
task_outputs = self._process_async_tasks(futures, was_replayed) task_outputs = self._process_async_tasks(futures, was_replayed)
futures.clear() futures.clear()
previous_output = task_outputs[-1] if task_outputs else None previous_output = task_outputs[task_index - 1] if task_outputs else None
if previous_output is not None and not task.should_execute(previous_output): if previous_output is not None and not task.should_execute(previous_output):
self._logger.log( self._logger.log(
"debug", "debug",
@@ -978,29 +905,20 @@ class Crew(BaseModel):
) )
def _create_crew_output(self, task_outputs: List[TaskOutput]) -> CrewOutput: def _create_crew_output(self, task_outputs: List[TaskOutput]) -> CrewOutput:
if not task_outputs:
raise ValueError("No task outputs available to create crew output.")
# Filter out empty outputs and get the last valid one as the main output
valid_outputs = [t for t in task_outputs if t.raw]
if not valid_outputs:
raise ValueError("No valid task outputs available to create crew output.")
final_task_output = valid_outputs[-1]
if len(task_outputs) != 1:
raise ValueError(
"Something went wrong. Kickoff should return only one task output."
)
final_task_output = task_outputs[0]
final_string_output = final_task_output.raw final_string_output = final_task_output.raw
self._finish_execution(final_string_output) self._finish_execution(final_string_output)
token_usage = self.calculate_usage_metrics() token_usage = self.calculate_usage_metrics()
crewai_event_bus.emit(
self,
CrewKickoffCompletedEvent(
crew_name=self.name or "crew", output=final_task_output
),
)
return CrewOutput( return CrewOutput(
raw=final_task_output.raw, raw=final_task_output.raw,
pydantic=final_task_output.pydantic, pydantic=final_task_output.pydantic,
json_dict=final_task_output.json_dict, json_dict=final_task_output.json_dict,
tasks_output=task_outputs, tasks_output=[task.output for task in self.tasks if task.output],
token_usage=token_usage, token_usage=token_usage,
) )
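For reference, a minimal sketch of how a caller consumes the object constructed above; `my_crew` and the input dict are hypothetical placeholders, while the attribute names come directly from the CrewOutput constructor in this hunk.

# Sketch: reading the CrewOutput produced by kickoff(); my_crew stands in
# for any configured Crew instance.
result = my_crew.kickoff(inputs={"topic": "AI agents"})
print(result.raw)              # final raw string output
print(result.json_dict)        # populated when the final task emits JSON
for task_output in result.tasks_output:
    print(task_output.raw)     # one entry per executed task
print(result.token_usage)      # aggregated UsageMetrics for the run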
@@ -1073,8 +991,8 @@ class Crew(BaseModel):
        return result

    def query_knowledge(self, query: List[str]) -> Union[List[Dict[str, Any]], None]:
-        if self.knowledge:
-            return self.knowledge.query(query)
+        if self._knowledge:
+            return self._knowledge.query(query)
        return None
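A short usage sketch for query_knowledge; the query string is illustrative, and the result keys mirror the "context"/"score" fields built by KnowledgeStorage.search later in this diff.

# Sketch: querying crew-level knowledge; returns None when the crew has
# no knowledge configured.
hits = my_crew.query_knowledge(["What is the refund window?"])
if hits is not None:
    for hit in hits:
        print(hit["context"], hit["score"])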
    def fetch_inputs(self) -> Set[str]:
@@ -1118,8 +1036,6 @@ class Crew(BaseModel):
"_telemetry", "_telemetry",
"agents", "agents",
"tasks", "tasks",
"knowledge_sources",
"knowledge",
} }
cloned_agents = [agent.copy() for agent in self.agents] cloned_agents = [agent.copy() for agent in self.agents]
@@ -1127,9 +1043,6 @@ class Crew(BaseModel):
task_mapping = {} task_mapping = {}
cloned_tasks = [] cloned_tasks = []
existing_knowledge_sources = shallow_copy(self.knowledge_sources)
existing_knowledge = shallow_copy(self.knowledge)
for task in self.tasks: for task in self.tasks:
cloned_task = task.copy(cloned_agents, task_mapping) cloned_task = task.copy(cloned_agents, task_mapping)
cloned_tasks.append(cloned_task) cloned_tasks.append(cloned_task)
@@ -1149,13 +1062,7 @@ class Crew(BaseModel):
copied_data.pop("agents", None) copied_data.pop("agents", None)
copied_data.pop("tasks", None) copied_data.pop("tasks", None)
copied_crew = Crew( copied_crew = Crew(**copied_data, agents=cloned_agents, tasks=cloned_tasks)
**copied_data,
agents=cloned_agents,
tasks=cloned_tasks,
knowledge_sources=existing_knowledge_sources,
knowledge=existing_knowledge,
)
return copied_crew return copied_crew
@@ -1181,6 +1088,13 @@ class Crew(BaseModel):
    def _finish_execution(self, final_string_output: str) -> None:
        if self.max_rpm:
            self._rpm_controller.stop_rpm_counter()
+        if agentops:
+            agentops.end_session(
+                end_state="Success",
+                end_state_reason="Finished Execution",
+                is_auto_end=True,
+            )
+        self._telemetry.end_crew(self, final_string_output)

    def calculate_usage_metrics(self) -> UsageMetrics:
        """Calculates and returns the usage metrics."""
@@ -1198,26 +1112,19 @@ class Crew(BaseModel):
    def test(
        self,
        n_iterations: int,
-        eval_llm: Union[str, InstanceOf[LLM]],
+        openai_model_name: Optional[str] = None,
        inputs: Optional[Dict[str, Any]] = None,
    ) -> None:
        """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
-        try:
-            eval_llm = create_llm(eval_llm)
-            if not eval_llm:
-                raise ValueError("Failed to create LLM instance.")
-            crewai_event_bus.emit(
-                self,
-                CrewTestStartedEvent(
-                    crew_name=self.name or "crew",
-                    n_iterations=n_iterations,
-                    eval_llm=eval_llm,
-                    inputs=inputs,
-                ),
-            )
        test_crew = self.copy()
-            evaluator = CrewEvaluator(test_crew, eval_llm)  # type: ignore[arg-type]
+        self._test_execution_span = test_crew._telemetry.test_execution_span(
+            test_crew,
+            n_iterations,
+            inputs,
+            openai_model_name,  # type: ignore[arg-type]
+        )  # type: ignore[arg-type]
+        evaluator = CrewEvaluator(test_crew, openai_model_name)  # type: ignore[arg-type]

        for i in range(1, n_iterations + 1):
            evaluator.set_iteration(i)
@@ -1225,95 +1132,5 @@ class Crew(BaseModel):
evaluator.print_crew_evaluation_result() evaluator.print_crew_evaluation_result()
crewai_event_bus.emit(
self,
CrewTestCompletedEvent(
crew_name=self.name or "crew",
),
)
except Exception as e:
crewai_event_bus.emit(
self,
CrewTestFailedEvent(error=str(e), crew_name=self.name or "crew"),
)
raise
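A minimal sketch of driving this method; pass eval_llm on the side of this diff that takes an LLM, or openai_model_name on the side that takes a model string.

# Sketch: evaluate a crew over two iterations with a dedicated eval model.
my_crew.test(
    n_iterations=2,
    eval_llm="gpt-4o",   # other side of this diff: openai_model_name="gpt-4o"
    inputs={"topic": "AI agents"},
)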
def __repr__(self): def __repr__(self):
return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})" return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})"
def reset_memories(self, command_type: str) -> None:
"""Reset specific or all memories for the crew.
Args:
command_type: Type of memory to reset.
Valid options: 'long', 'short', 'entity', 'knowledge',
'kickoff_outputs', or 'all'
Raises:
ValueError: If an invalid command type is provided.
RuntimeError: If memory reset operation fails.
"""
VALID_TYPES = frozenset(
["long", "short", "entity", "knowledge", "kickoff_outputs", "all"]
)
if command_type not in VALID_TYPES:
raise ValueError(
f"Invalid command type. Must be one of: {', '.join(sorted(VALID_TYPES))}"
)
try:
if command_type == "all":
self._reset_all_memories()
else:
self._reset_specific_memory(command_type)
self._logger.log("info", f"{command_type} memory has been reset")
except Exception as e:
error_msg = f"Failed to reset {command_type} memory: {str(e)}"
self._logger.log("error", error_msg)
raise RuntimeError(error_msg) from e
def _reset_all_memories(self) -> None:
"""Reset all available memory systems."""
memory_systems = [
("short term", self._short_term_memory),
("entity", self._entity_memory),
("long term", self._long_term_memory),
("task output", self._task_output_handler),
("knowledge", self.knowledge),
]
for name, system in memory_systems:
if system is not None:
try:
system.reset()
except Exception as e:
raise RuntimeError(f"Failed to reset {name} memory") from e
def _reset_specific_memory(self, memory_type: str) -> None:
"""Reset a specific memory system.
Args:
memory_type: Type of memory to reset
Raises:
RuntimeError: If the specified memory system fails to reset
"""
reset_functions = {
"long": (self._long_term_memory, "long term"),
"short": (self._short_term_memory, "short term"),
"entity": (self._entity_memory, "entity"),
"knowledge": (self.knowledge, "knowledge"),
"kickoff_outputs": (self._task_output_handler, "task output"),
}
memory_system, name = reset_functions[memory_type]
if memory_system is None:
raise RuntimeError(f"{name} memory system is not initialized")
try:
memory_system.reset()
except Exception as e:
raise RuntimeError(f"Failed to reset {name} memory") from e
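A usage sketch for the reset API above, catching exactly the exceptions it documents; the memory type string is one of the valid options listed in the docstring.

# Sketch: reset one memory system; invalid names raise ValueError and
# backend failures surface as RuntimeError.
try:
    my_crew.reset_memories("short")
except ValueError as e:
    print(f"Invalid memory type: {e}")
except RuntimeError as e:
    print(f"Reset failed: {e}")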


@@ -1,5 +1,4 @@
import asyncio
-import copy
import inspect
import logging
from typing import (
@@ -17,25 +16,19 @@ from typing import (
)
from uuid import uuid4

+from blinker import Signal
from pydantic import BaseModel, Field, ValidationError

-from crewai.flow.flow_visualizer import plot_flow
-from crewai.flow.persistence.base import FlowPersistence
-from crewai.flow.utils import get_possible_return_constants
-from crewai.traces.unified_trace_controller import (
-    init_flow_main_trace,
-    trace_flow_step,
-)
-from crewai.utilities.events.crewai_event_bus import crewai_event_bus
-from crewai.utilities.events.flow_events import (
-    FlowCreatedEvent,
+from crewai.flow.flow_events import (
    FlowFinishedEvent,
-    FlowPlotEvent,
    FlowStartedEvent,
-    MethodExecutionFailedEvent,
    MethodExecutionFinishedEvent,
    MethodExecutionStartedEvent,
)
+from crewai.flow.flow_visualizer import plot_flow
+from crewai.flow.persistence.base import FlowPersistence
+from crewai.flow.utils import get_possible_return_constants
+from crewai.telemetry import Telemetry
from crewai.utilities.printer import Printer

logger = logging.getLogger(__name__)
@@ -401,6 +394,7 @@ class FlowMeta(type):
or hasattr(attr_value, "__trigger_methods__") or hasattr(attr_value, "__trigger_methods__")
or hasattr(attr_value, "__is_router__") or hasattr(attr_value, "__is_router__")
): ):
# Register start methods # Register start methods
if hasattr(attr_value, "__is_start_method__"): if hasattr(attr_value, "__is_start_method__"):
start_methods.append(attr_name) start_methods.append(attr_name)
@@ -433,6 +427,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
Type parameter T must be either Dict[str, Any] or a subclass of BaseModel.""" Type parameter T must be either Dict[str, Any] or a subclass of BaseModel."""
_telemetry = Telemetry()
_printer = Printer() _printer = Printer()
_start_methods: List[str] = [] _start_methods: List[str] = []
@@ -440,6 +435,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
_routers: Set[str] = set() _routers: Set[str] = set()
_router_paths: Dict[str, List[str]] = {} _router_paths: Dict[str, List[str]] = {}
initial_state: Union[Type[T], T, None] = None initial_state: Union[Type[T], T, None] = None
event_emitter = Signal("event_emitter")
def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]: def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]:
class _FlowGeneric(cls): # type: ignore class _FlowGeneric(cls): # type: ignore
@@ -473,13 +469,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
        if kwargs:
            self._initialize_state(kwargs)

-        crewai_event_bus.emit(
-            self,
-            FlowCreatedEvent(
-                type="flow_created",
-                flow_name=self.__class__.__name__,
-            ),
-        )
+        self._telemetry.flow_creation_span(self.__class__.__name__)
# Register all flow-related methods # Register all flow-related methods
for method_name in dir(self): for method_name in dir(self):
@@ -579,9 +569,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
f"Initial state must be dict or BaseModel, got {type(self.initial_state)}" f"Initial state must be dict or BaseModel, got {type(self.initial_state)}"
) )
-    def _copy_state(self) -> T:
-        return copy.deepcopy(self._state)
-
    @property
    def state(self) -> T:
        return self._state
@@ -613,7 +600,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
``` ```
""" """
try: try:
if not hasattr(self, "_state"): if not hasattr(self, '_state'):
return "" return ""
if isinstance(self._state, dict): if isinstance(self._state, dict):
@@ -713,90 +700,69 @@ class Flow(Generic[T], metaclass=FlowMeta):
raise TypeError(f"State must be dict or BaseModel, got {type(self._state)}") raise TypeError(f"State must be dict or BaseModel, got {type(self._state)}")
    def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
-        """
-        Start the flow execution in a synchronous context.
-
-        This method wraps kickoff_async so that all state initialization and event
-        emission is handled in the asynchronous method.
-        """
-
-        async def run_flow():
-            return await self.kickoff_async(inputs)
-
-        return asyncio.run(run_flow())
-
-    @init_flow_main_trace
-    async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
-        """
-        Start the flow execution asynchronously.
-
-        This method performs state restoration (if an 'id' is provided and persistence is available)
-        and updates the flow state with any additional inputs. It then emits the FlowStartedEvent,
-        logs the flow startup, and executes all start methods. Once completed, it emits the
-        FlowFinishedEvent and returns the final output.
+        """Start the flow execution.

        Args:
-            inputs: Optional dictionary containing input values and/or a state ID for restoration.
-
-        Returns:
-            The final output from the flow, which is the result of the last executed method.
+            inputs: Optional dictionary containing input values and potentially a state ID to restore
        """
-        if inputs:
-            # Override the id in the state if it exists in inputs
-            if "id" in inputs:
-                if isinstance(self._state, dict):
-                    self._state["id"] = inputs["id"]
-                elif isinstance(self._state, BaseModel):
-                    setattr(self._state, "id", inputs["id"])
-
-            # If persistence is enabled, attempt to restore the stored state using the provided id.
-            if "id" in inputs and self._persistence is not None:
-                restore_uuid = inputs["id"]
-                stored_state = self._persistence.load_state(restore_uuid)
-                if stored_state:
-                    self._log_flow_event(
-                        f"Loading flow state from memory for UUID: {restore_uuid}",
-                        color="yellow",
-                    )
-                    self._restore_state(stored_state)
-                else:
-                    self._log_flow_event(
-                        f"No flow state found for UUID: {restore_uuid}", color="red"
-                    )
-
-            # Update state with any additional inputs (ignoring the 'id' key)
-            filtered_inputs = {k: v for k, v in inputs.items() if k != "id"}
-            if filtered_inputs:
-                self._initialize_state(filtered_inputs)
-
-        # Emit FlowStartedEvent and log the start of the flow.
-        crewai_event_bus.emit(
-            self,
-            FlowStartedEvent(
-                type="flow_started",
-                flow_name=self.__class__.__name__,
-                inputs=inputs,
-            ),
-        )
-        self._log_flow_event(
-            f"Flow started with ID: {self.flow_id}", color="bold_magenta"
-        )
+        # Handle state restoration if ID is provided in inputs
+        if inputs and 'id' in inputs and self._persistence is not None:
+            restore_uuid = inputs['id']
+            stored_state = self._persistence.load_state(restore_uuid)
+
+            # Override the id in the state if it exists in inputs
+            if 'id' in inputs:
+                if isinstance(self._state, dict):
+                    self._state['id'] = inputs['id']
+                elif isinstance(self._state, BaseModel):
+                    setattr(self._state, 'id', inputs['id'])
+
+            if stored_state:
+                self._log_flow_event(f"Loading flow state from memory for UUID: {restore_uuid}", color="yellow")
+                # Restore the state
+                self._restore_state(stored_state)
+            else:
+                self._log_flow_event(f"No flow state found for UUID: {restore_uuid}", color="red")
+
+            # Apply any additional inputs after restoration
+            filtered_inputs = {k: v for k, v in inputs.items() if k != 'id'}
+            if filtered_inputs:
+                self._initialize_state(filtered_inputs)
+
+        # Start flow execution
+        self.event_emitter.send(
+            self,
+            event=FlowStartedEvent(
+                type="flow_started",
+                flow_name=self.__class__.__name__,
+            ),
+        )
+        self._log_flow_event(f"Flow started with ID: {self.flow_id}", color="bold_magenta")
+
+        if inputs is not None and 'id' not in inputs:
+            self._initialize_state(inputs)
+        return asyncio.run(self.kickoff_async())
+
+    async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
        if not self._start_methods:
            raise ValueError("No start method defined")

-        # Execute all start methods concurrently.
+        self._telemetry.flow_execution_span(
+            self.__class__.__name__, list(self._methods.keys())
+        )
+
        tasks = [
            self._execute_start_method(start_method)
            for start_method in self._start_methods
        ]
        await asyncio.gather(*tasks)

        final_output = self._method_outputs[-1] if self._method_outputs else None

-        # Emit FlowFinishedEvent after all processing is complete.
-        crewai_event_bus.emit(
+        self.event_emitter.send(
            self,
-            FlowFinishedEvent(
+            event=FlowFinishedEvent(
                type="flow_finished",
                flow_name=self.__class__.__name__,
                result=final_output,
@@ -827,59 +793,19 @@ class Flow(Generic[T], metaclass=FlowMeta):
) )
await self._execute_listeners(start_method_name, result) await self._execute_listeners(start_method_name, result)
@trace_flow_step
async def _execute_method( async def _execute_method(
self, method_name: str, method: Callable, *args: Any, **kwargs: Any self, method_name: str, method: Callable, *args: Any, **kwargs: Any
) -> Any: ) -> Any:
try:
dumped_params = {f"_{i}": arg for i, arg in enumerate(args)} | (
kwargs or {}
)
crewai_event_bus.emit(
self,
MethodExecutionStartedEvent(
type="method_execution_started",
method_name=method_name,
flow_name=self.__class__.__name__,
params=dumped_params,
state=self._copy_state(),
),
)
result = ( result = (
await method(*args, **kwargs) await method(*args, **kwargs)
if asyncio.iscoroutinefunction(method) if asyncio.iscoroutinefunction(method)
else method(*args, **kwargs) else method(*args, **kwargs)
) )
self._method_outputs.append(result) self._method_outputs.append(result)
self._method_execution_counts[method_name] = ( self._method_execution_counts[method_name] = (
self._method_execution_counts.get(method_name, 0) + 1 self._method_execution_counts.get(method_name, 0) + 1
) )
crewai_event_bus.emit(
self,
MethodExecutionFinishedEvent(
type="method_execution_finished",
method_name=method_name,
flow_name=self.__class__.__name__,
state=self._copy_state(),
result=result,
),
)
return result return result
except Exception as e:
crewai_event_bus.emit(
self,
MethodExecutionFailedEvent(
type="method_execution_failed",
method_name=method_name,
flow_name=self.__class__.__name__,
error=e,
),
)
raise e
async def _execute_listeners(self, trigger_method: str, result: Any) -> None: async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
""" """
@@ -1018,6 +944,15 @@ class Flow(Generic[T], metaclass=FlowMeta):
try: try:
method = self._methods[listener_name] method = self._methods[listener_name]
self.event_emitter.send(
self,
event=MethodExecutionStartedEvent(
type="method_execution_started",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
sig = inspect.signature(method) sig = inspect.signature(method)
params = list(sig.parameters.values()) params = list(sig.parameters.values())
method_params = [p for p in params if p.name != "self"] method_params = [p for p in params if p.name != "self"]
@@ -1029,6 +964,15 @@ class Flow(Generic[T], metaclass=FlowMeta):
else: else:
listener_result = await self._execute_method(listener_name, method) listener_result = await self._execute_method(listener_name, method)
self.event_emitter.send(
self,
event=MethodExecutionFinishedEvent(
type="method_execution_finished",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
# Execute listeners (and possibly routers) of this listener # Execute listeners (and possibly routers) of this listener
await self._execute_listeners(listener_name, listener_result) await self._execute_listeners(listener_name, listener_result)
@@ -1040,9 +984,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
traceback.print_exc() traceback.print_exc()
-    def _log_flow_event(
-        self, message: str, color: str = "yellow", level: str = "info"
-    ) -> None:
+    def _log_flow_event(self, message: str, color: str = "yellow", level: str = "info") -> None:
"""Centralized logging method for flow events. """Centralized logging method for flow events.
This method provides a consistent interface for logging flow-related events, This method provides a consistent interface for logging flow-related events,
@@ -1067,11 +1009,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
logger.warning(message) logger.warning(message)
def plot(self, filename: str = "crewai_flow") -> None: def plot(self, filename: str = "crewai_flow") -> None:
crewai_event_bus.emit( self._telemetry.flow_plotting_span(
self, self.__class__.__name__, list(self._methods.keys())
FlowPlotEvent(
type="flow_plot",
flow_name=self.__class__.__name__,
),
) )
plot_flow(self, filename) plot_flow(self, filename)
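For completeness, a plotting sketch; MyFlow stands in for any Flow subclass, the filename stem is arbitrary, and plot_flow typically writes an HTML file named after it.

# Sketch: render the flow's method graph to a file.
flow = MyFlow()          # MyFlow: any Flow subclass, hypothetical here
flow.plot("my_flow")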


@@ -0,0 +1,33 @@
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@dataclass
class Event:
type: str
flow_name: str
timestamp: datetime = field(init=False)
def __post_init__(self):
self.timestamp = datetime.now()
@dataclass
class FlowStartedEvent(Event):
pass
@dataclass
class MethodExecutionStartedEvent(Event):
method_name: str
@dataclass
class MethodExecutionFinishedEvent(Event):
method_name: str
@dataclass
class FlowFinishedEvent(Event):
result: Optional[Any] = None
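These dataclasses travel over the class-level blinker Signal (event_emitter) added to Flow in this diff, which sends the flow instance as the sender and the event as a keyword argument. A minimal subscription sketch; the receiver name is arbitrary.

from crewai.flow.flow import Flow
from crewai.flow.flow_events import FlowFinishedEvent, FlowStartedEvent

def on_flow_event(sender, event=None, **kwargs):
    # sender is the Flow instance; event is one of the dataclasses above
    if isinstance(event, FlowStartedEvent):
        print(f"{event.flow_name} started at {event.timestamp}")
    elif isinstance(event, FlowFinishedEvent):
        print(f"{event.flow_name} finished with: {event.result}")

# event_emitter is shared by Flow subclasses, so one receiver observes
# every flow run in the process.
Flow.event_emitter.connect(on_flow_event)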


@@ -58,7 +58,7 @@ class PersistenceDecorator:
_printer = Printer() # Class-level printer instance _printer = Printer() # Class-level printer instance
@classmethod @classmethod
-    def persist_state(cls, flow_instance: Any, method_name: str, persistence_instance: FlowPersistence, verbose: bool = False) -> None:
+    def persist_state(cls, flow_instance: Any, method_name: str, persistence_instance: FlowPersistence) -> None:
"""Persist flow state with proper error handling and logging. """Persist flow state with proper error handling and logging.
This method handles the persistence of flow state data, including proper This method handles the persistence of flow state data, including proper
@@ -68,7 +68,6 @@ class PersistenceDecorator:
flow_instance: The flow instance whose state to persist flow_instance: The flow instance whose state to persist
method_name: Name of the method that triggered persistence method_name: Name of the method that triggered persistence
persistence_instance: The persistence backend to use persistence_instance: The persistence backend to use
verbose: Whether to log persistence operations
Raises: Raises:
ValueError: If flow has no state or state lacks an ID ValueError: If flow has no state or state lacks an ID
@@ -89,8 +88,7 @@ class PersistenceDecorator:
if not flow_uuid: if not flow_uuid:
raise ValueError("Flow state must have an 'id' field for persistence") raise ValueError("Flow state must have an 'id' field for persistence")
-        # Log state saving only if verbose is True
-        if verbose:
-            cls._printer.print(LOG_MESSAGES["save_state"].format(flow_uuid), color="cyan")
-            logger.info(LOG_MESSAGES["save_state"].format(flow_uuid))
+        # Log state saving with consistent message
+        cls._printer.print(LOG_MESSAGES["save_state"].format(flow_uuid), color="cyan")
+        logger.info(LOG_MESSAGES["save_state"].format(flow_uuid))
@@ -117,7 +115,7 @@ class PersistenceDecorator:
raise ValueError(error_msg) from e raise ValueError(error_msg) from e
-def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False):
+def persist(persistence: Optional[FlowPersistence] = None):
"""Decorator to persist flow state. """Decorator to persist flow state.
This decorator can be applied at either the class level or method level. This decorator can be applied at either the class level or method level.
@@ -128,7 +126,6 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
Args: Args:
persistence: Optional FlowPersistence implementation to use. persistence: Optional FlowPersistence implementation to use.
If not provided, uses SQLiteFlowPersistence. If not provided, uses SQLiteFlowPersistence.
verbose: Whether to log persistence operations. Defaults to False.
Returns: Returns:
A decorator that can be applied to either a class or method A decorator that can be applied to either a class or method
@@ -138,12 +135,13 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
RuntimeError: If state persistence fails RuntimeError: If state persistence fails
Example: Example:
-        @persist(verbose=True)  # Class-level persistence with logging
+        @persist  # Class-level persistence with default SQLite
        class MyFlow(Flow[MyState]):
class MyFlow(Flow[MyState]): class MyFlow(Flow[MyState]):
@start() @start()
def begin(self): def begin(self):
pass pass
""" """
def decorator(target: Union[Type, Callable[..., T]]) -> Union[Type, Callable[..., T]]: def decorator(target: Union[Type, Callable[..., T]]) -> Union[Type, Callable[..., T]]:
"""Decorator that handles both class and method decoration.""" """Decorator that handles both class and method decoration."""
actual_persistence = persistence or SQLiteFlowPersistence() actual_persistence = persistence or SQLiteFlowPersistence()
@@ -181,7 +179,7 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
@functools.wraps(original_method) @functools.wraps(original_method)
async def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any: async def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = await original_method(self, *args, **kwargs) result = await original_method(self, *args, **kwargs)
-                    PersistenceDecorator.persist_state(self, method_name, actual_persistence, verbose)
+                    PersistenceDecorator.persist_state(self, method_name, actual_persistence)
return result return result
return method_wrapper return method_wrapper
@@ -201,7 +199,7 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
@functools.wraps(original_method) @functools.wraps(original_method)
def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any: def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = original_method(self, *args, **kwargs) result = original_method(self, *args, **kwargs)
-                    PersistenceDecorator.persist_state(self, method_name, actual_persistence, verbose)
+                    PersistenceDecorator.persist_state(self, method_name, actual_persistence)
return result return result
return method_wrapper return method_wrapper
@@ -230,7 +228,7 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
result = await method_coro result = await method_coro
else: else:
result = method_coro result = method_coro
-                PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence, verbose)
+                PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence)
return result return result
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]: for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
@@ -242,7 +240,7 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
@functools.wraps(method) @functools.wraps(method)
def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T: def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
result = method(flow_instance, *args, **kwargs) result = method(flow_instance, *args, **kwargs)
-                PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence, verbose)
+                PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence)
return result return result
            for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
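To ground the decorator mechanics above, a usage sketch assuming the SQLite default and a dict-state flow; the import path for persist is assumed from this diff's module layout, and it also assumes the flow assigns a UUID 'id' to its state, which the persistence layer requires.

from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist  # import path assumed

@persist()  # class-level: every decorated method run saves state
class CounterFlow(Flow[dict]):
    @start()
    def bump(self):
        self.state["count"] = self.state.get("count", 0) + 1

flow = CounterFlow()
flow.kickoff()
# A fresh instance restores the saved state when given the same id
CounterFlow().kickoff(inputs={"id": flow.state["id"]})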


@@ -1,91 +0,0 @@
import json
from datetime import date, datetime
from typing import Any, Dict, List, Union
from pydantic import BaseModel
from crewai.flow import Flow
SerializablePrimitive = Union[str, int, float, bool, None]
Serializable = Union[
SerializablePrimitive, List["Serializable"], Dict[str, "Serializable"]
]
def export_state(flow: Flow) -> dict[str, Serializable]:
"""Exports the Flow's internal state as JSON-compatible data structures.
Performs a one-way transformation of a Flow's state into basic Python types
that can be safely serialized to JSON. To prevent infinite recursion with
circular references, the conversion is limited to a depth of 5 levels.
Args:
flow: The Flow object whose state needs to be exported
Returns:
dict[str, Any]: The transformed state using JSON-compatible Python
types.
"""
result = to_serializable(flow._state)
assert isinstance(result, dict)
return result
def to_serializable(
obj: Any, max_depth: int = 5, _current_depth: int = 0
) -> Serializable:
"""Converts a Python object into a JSON-compatible representation.
Supports primitives, datetime objects, collections, dictionaries, and
Pydantic models. Recursion depth is limited to prevent infinite nesting.
Non-convertible objects default to their string representations.
Args:
obj (Any): Object to transform.
max_depth (int, optional): Maximum recursion depth. Defaults to 5.
Returns:
Serializable: A JSON-compatible structure.
"""
if _current_depth >= max_depth:
return repr(obj)
if isinstance(obj, (str, int, float, bool, type(None))):
return obj
elif isinstance(obj, (date, datetime)):
return obj.isoformat()
elif isinstance(obj, (list, tuple, set)):
return [to_serializable(item, max_depth, _current_depth + 1) for item in obj]
elif isinstance(obj, dict):
return {
_to_serializable_key(key): to_serializable(
value, max_depth, _current_depth + 1
)
for key, value in obj.items()
}
elif isinstance(obj, BaseModel):
return to_serializable(obj.model_dump(), max_depth, _current_depth + 1)
else:
return repr(obj)
def _to_serializable_key(key: Any) -> str:
if isinstance(key, (str, int)):
return str(key)
return f"key_{id(key)}_{repr(key)}"
def to_string(obj: Any) -> str | None:
"""Serializes an object into a JSON string.
Args:
obj (Any): Object to serialize.
Returns:
str | None: A JSON-formatted string or `None` if empty.
"""
serializable = to_serializable(obj)
if serializable is None:
return None
else:
return json.dumps(serializable)
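A quick behavior sketch for the helpers above; the values are illustrative.

from datetime import datetime
from pydantic import BaseModel

class Profile(BaseModel):
    name: str

state = {"who": Profile(name="Ada"), "when": datetime(2025, 1, 21, 10, 0)}
print(to_serializable(state))
# -> {'who': {'name': 'Ada'}, 'when': '2025-01-21T10:00:00'}
print(to_string(state))  # same structure, JSON-encoded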


@@ -15,20 +15,20 @@ class Knowledge(BaseModel):
    Args:
        sources: List[BaseKnowledgeSource] = Field(default_factory=list)
        storage: Optional[KnowledgeStorage] = Field(default=None)
-        embedder: Optional[Dict[str, Any]] = None
+        embedder_config: Optional[Dict[str, Any]] = None
    """

    sources: List[BaseKnowledgeSource] = Field(default_factory=list)
    model_config = ConfigDict(arbitrary_types_allowed=True)
    storage: Optional[KnowledgeStorage] = Field(default=None)
-    embedder: Optional[Dict[str, Any]] = None
+    embedder_config: Optional[Dict[str, Any]] = None
    collection_name: Optional[str] = None

    def __init__(
        self,
        collection_name: str,
        sources: List[BaseKnowledgeSource],
-        embedder: Optional[Dict[str, Any]] = None,
+        embedder_config: Optional[Dict[str, Any]] = None,
        storage: Optional[KnowledgeStorage] = None,
        **data,
    ):
@@ -37,11 +37,13 @@ class Knowledge(BaseModel):
            self.storage = storage
        else:
            self.storage = KnowledgeStorage(
-                embedder=embedder, collection_name=collection_name
+                embedder_config=embedder_config, collection_name=collection_name
            )
        self.sources = sources
        self.storage.initialize_knowledge_storage()
-        self._add_sources()
+        for source in sources:
+            source.storage = self.storage
+            source.add()
def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]: def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]:
""" """
@@ -61,15 +63,6 @@ class Knowledge(BaseModel):
return results return results
    def _add_sources(self):
-        try:
            for source in self.sources:
                source.storage = self.storage
                source.add()
-        except Exception as e:
-            raise e
-
-    def reset(self) -> None:
-        if self.storage:
-            self.storage.reset()
-        else:
-            raise ValueError("Storage is not initialized.")


@@ -29,13 +29,7 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
    def validate_file_path(cls, v, info):
        """Validate that at least one of file_path or file_paths is provided."""
        # Single check if both are None, O(1) instead of nested conditions
-        if (
-            v is None
-            and info.data.get(
-                "file_path" if info.field_name == "file_paths" else "file_paths"
-            )
-            is None
-        ):
+        if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
            raise ValueError("Either file_path or file_paths must be provided")
        return v


@@ -1,138 +1,28 @@
from pathlib import Path
-from typing import Dict, Iterator, List, Optional, Union
-from urllib.parse import urlparse
-
-from pydantic import Field, field_validator
-
-from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
-from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
-from crewai.utilities.logger import Logger
+from typing import Dict, List
+
+from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource


-class ExcelKnowledgeSource(BaseKnowledgeSource):
+class ExcelKnowledgeSource(BaseFileKnowledgeSource):
    """A knowledge source that stores and queries Excel file content using embeddings."""

-    # override content to be a dict of file paths to sheet names to csv content
+    def load_content(self) -> Dict[Path, str]:
+        """Load and preprocess Excel file content."""
_logger: Logger = Logger(verbose=True)
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default=None,
description="[Deprecated] The path to the file. Use file_paths instead.",
)
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default_factory=list, description="The path to the file"
)
chunks: List[str] = Field(default_factory=list)
content: Dict[Path, Dict[str, str]] = Field(default_factory=dict)
safe_file_paths: List[Path] = Field(default_factory=list)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, info):
"""Validate that at least one of file_path or file_paths is provided."""
# Single check if both are None, O(1) instead of nested conditions
if (
v is None
and info.data.get(
"file_path" if info.field_name == "file_paths" else "file_paths"
)
is None
):
raise ValueError("Either file_path or file_paths must be provided")
return v
def _process_file_paths(self) -> List[Path]:
"""Convert file_path to a list of Path objects."""
if hasattr(self, "file_path") and self.file_path is not None:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
if self.file_paths is None:
raise ValueError("Your source must be provided with a file_paths: []")
# Convert single path to list
path_list: List[Union[Path, str]] = (
[self.file_paths]
if isinstance(self.file_paths, (str, Path))
else list(self.file_paths)
if isinstance(self.file_paths, list)
else []
)
if not path_list:
raise ValueError(
"file_path/file_paths must be a Path, str, or a list of these types"
)
return [self.convert_to_path(path) for path in path_list]
def validate_content(self):
"""Validate the paths."""
for path in self.safe_file_paths:
if not path.exists():
self._logger.log(
"error",
f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.",
color="red",
)
raise FileNotFoundError(f"File not found: {path}")
if not path.is_file():
self._logger.log(
"error",
f"Path is not a file: {path}",
color="red",
)
def model_post_init(self, _) -> None:
if self.file_path:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
self.safe_file_paths = self._process_file_paths()
self.validate_content()
self.content = self._load_content()
def _load_content(self) -> Dict[Path, Dict[str, str]]:
"""Load and preprocess Excel file content from multiple sheets.
Each sheet's content is converted to CSV format and stored.
Returns:
Dict[Path, Dict[str, str]]: A mapping of file paths to their respective sheet contents.
Raises:
ImportError: If required dependencies are missing.
FileNotFoundError: If the specified Excel file cannot be opened.
"""
        pd = self._import_dependencies()
        content_dict = {}
        for file_path in self.safe_file_paths:
            file_path = self.convert_to_path(file_path)
-            with pd.ExcelFile(file_path) as xl:
-                sheet_dict = {
-                    str(sheet_name): str(
-                        pd.read_excel(xl, sheet_name).to_csv(index=False)
-                    )
-                    for sheet_name in xl.sheet_names
-                }
-                content_dict[file_path] = sheet_dict
+            df = pd.read_excel(file_path)
+            content = df.to_csv(index=False)
+            content_dict[file_path] = content
        return content_dict
def convert_to_path(self, path: Union[Path, str]) -> Path:
"""Convert a path to a Path object."""
return Path(KNOWLEDGE_DIRECTORY + "/" + path) if isinstance(path, str) else path
def _import_dependencies(self): def _import_dependencies(self):
"""Dynamically import dependencies.""" """Dynamically import dependencies."""
try: try:
import openpyxl # noqa
import pandas as pd import pandas as pd
return pd return pd
@@ -148,14 +38,10 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
and save the embeddings. and save the embeddings.
""" """
        # Convert dictionary values to a single string if content is a dictionary
-        # Updated to account for .xlsx workbooks with multiple tabs/sheets
-        content_str = ""
-        for value in self.content.values():
-            if isinstance(value, dict):
-                for sheet_value in value.values():
-                    content_str += str(sheet_value) + "\n"
-            else:
-                content_str += str(value) + "\n"
+        if isinstance(self.content, dict):
+            content_str = "\n".join(str(value) for value in self.content.values())
+        else:
+            content_str = str(self.content)

        new_chunks = self._chunk_text(content_str)
        self.chunks.extend(new_chunks)
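A usage sketch for the Excel source; the filename is illustrative, per convert_to_path a bare string resolves inside the knowledge directory, and add() assumes storage was already attached (normally done by Knowledge).

excel_source = ExcelKnowledgeSource(file_paths=["q4_report.xlsx"])
# On the multi-sheet side of this diff, content maps each file to a
# {sheet_name: csv_text} dict.
for path, sheets in excel_source.content.items():
    print(path, list(sheets))
excel_source.add()  # chunks every sheet and saves embeddings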


@@ -48,11 +48,11 @@ class KnowledgeStorage(BaseKnowledgeStorage):
    def __init__(
        self,
-        embedder: Optional[Dict[str, Any]] = None,
+        embedder_config: Optional[Dict[str, Any]] = None,
        collection_name: Optional[str] = None,
    ):
        self.collection_name = collection_name
-        self._set_embedder_config(embedder)
+        self._set_embedder_config(embedder_config)
def search( def search(
self, self,
@@ -76,7 +76,7 @@ class KnowledgeStorage(BaseKnowledgeStorage):
"context": fetched["documents"][0][i], # type: ignore "context": fetched["documents"][0][i], # type: ignore
"score": fetched["distances"][0][i], # type: ignore "score": fetched["distances"][0][i], # type: ignore
} }
if result["score"] >= score_threshold: if result["score"] >= score_threshold: # type: ignore
results.append(result) results.append(result)
return results return results
else: else:
@@ -99,7 +99,7 @@ class KnowledgeStorage(BaseKnowledgeStorage):
) )
if self.app: if self.app:
self.collection = self.app.get_or_create_collection( self.collection = self.app.get_or_create_collection(
-                name=collection_name, embedding_function=self.embedder
+                name=collection_name, embedding_function=self.embedder_config
) )
else: else:
raise Exception("Vector Database Client not initialized") raise Exception("Vector Database Client not initialized")
@@ -187,15 +187,17 @@ class KnowledgeStorage(BaseKnowledgeStorage):
api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small" api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
) )
-    def _set_embedder_config(self, embedder: Optional[Dict[str, Any]] = None) -> None:
+    def _set_embedder_config(
+        self, embedder_config: Optional[Dict[str, Any]] = None
+    ) -> None:
        """Set the embedding configuration for the knowledge storage.

        Args:
            embedder_config (Optional[Dict[str, Any]]): Configuration dictionary for the embedder.
                If None or empty, defaults to the default embedding function.
        """
-        self.embedder = (
-            EmbeddingConfigurator().configure_embedder(embedder)
-            if embedder
+        self.embedder_config = (
+            EmbeddingConfigurator().configure_embedder(embedder_config)
+            if embedder_config
            else self._create_default_embedding_function()
        )
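For reference, a sketch of the configuration dict this method accepts; the provider/config shape follows EmbeddingConfigurator conventions and is an assumption here, with the model name taken from the default fallback above.

storage = KnowledgeStorage(
    embedder={  # embedder_config= on the other side of this diff
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    collection_name="support",
)
storage.initialize_knowledge_storage()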


@@ -1,4 +1,3 @@
import inspect
import json import json
import logging import logging
import os import os
@@ -6,37 +5,20 @@ import sys
import threading import threading
import warnings import warnings
from contextlib import contextmanager from contextlib import contextmanager
-from typing import (
-    Any,
-    Dict,
-    List,
-    Literal,
-    Optional,
-    Tuple,
-    Type,
-    Union,
-    cast,
-)
+from typing import Any, Dict, List, Optional, Union, cast

from dotenv import load_dotenv
-from pydantic import BaseModel
-
-from crewai.utilities.events.tool_usage_events import ToolExecutionErrorEvent

with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    import litellm
-    from litellm import Choices
+    from litellm import Choices, get_supported_openai_params
    from litellm.types.utils import ModelResponse
-    from litellm.utils import get_supported_openai_params, supports_response_schema
from crewai.traces.unified_trace_controller import trace_llm_call
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.exceptions.context_window_exceeding_exception import ( from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException, LLMContextLengthExceededException,
) )
from crewai.utilities.protocols import AgentExecutorProtocol
load_dotenv() load_dotenv()
@@ -146,23 +128,21 @@ class LLM:
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        logit_bias: Optional[Dict[int, float]] = None,
-        response_format: Optional[Type[BaseModel]] = None,
+        response_format: Optional[Dict[str, Any]] = None,
        seed: Optional[int] = None,
        logprobs: Optional[int] = None,
        top_logprobs: Optional[int] = None,
        base_url: Optional[str] = None,
-        api_base: Optional[str] = None,
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        callbacks: List[Any] = [],
-        reasoning_effort: Optional[Literal["none", "low", "medium", "high"]] = None,
-        **kwargs,
    ):
        self.model = model
        self.timeout = timeout
        self.temperature = temperature
        self.top_p = top_p
        self.n = n
+        self.stop = stop
        self.max_completion_tokens = max_completion_tokens
        self.max_tokens = max_tokens
        self.presence_penalty = presence_penalty
@@ -173,117 +153,47 @@ class LLM:
        self.logprobs = logprobs
        self.top_logprobs = top_logprobs
        self.base_url = base_url
-        self.api_base = api_base
        self.api_version = api_version
        self.api_key = api_key
        self.callbacks = callbacks
        self.context_window_size = 0
-        self.reasoning_effort = reasoning_effort
-        self.additional_params = kwargs
-        self._message_history: List[Dict[str, str]] = []
-        self.is_anthropic = self._is_anthropic_model(model)

        litellm.drop_params = True

-        # Normalize self.stop to always be a List[str]
-        if stop is None:
-            self.stop: List[str] = []
-        elif isinstance(stop, str):
-            self.stop = [stop]
-        else:
-            self.stop = stop
-
        self.set_callbacks(callbacks)
        self.set_env_callbacks()
@trace_llm_call
def _call_llm(self, params: Dict[str, Any]) -> Any:
with suppress_warnings():
response = litellm.completion(**params)
return response
def _is_anthropic_model(self, model: str) -> bool:
"""Determine if the model is from Anthropic provider.
Args:
model: The model identifier string.
Returns:
bool: True if the model is from Anthropic, False otherwise.
"""
ANTHROPIC_PREFIXES = ("anthropic/", "claude-", "claude/")
return any(prefix in model.lower() for prefix in ANTHROPIC_PREFIXES)
    def call(
        self,
-        messages: Union[str, List[Dict[str, str]]],
+        messages: List[Dict[str, str]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
-    ) -> Union[str, Any]:
+    ) -> str:
"""High-level LLM call method.
Args:
messages: Input messages for the LLM.
Can be a string or list of message dictionaries.
If string, it will be converted to a single user message.
If list, each dict must have 'role' and 'content' keys.
tools: Optional list of tool schemas for function calling.
Each tool should define its name, description, and parameters.
callbacks: Optional list of callback functions to be executed
during and after the LLM call.
available_functions: Optional dict mapping function names to callables
that can be invoked by the LLM.
Returns:
Union[str, Any]: Either a text response from the LLM (str) or
the result of a tool function call (Any).
Raises:
TypeError: If messages format is invalid
ValueError: If response format is not supported
LLMContextLengthExceededException: If input exceeds model's context limit
Examples:
# Example 1: Simple string input
>>> response = llm.call("Return the name of a random city.")
>>> print(response)
"Paris"
# Example 2: Message list with system and user messages
>>> messages = [
... {"role": "system", "content": "You are a geography expert"},
... {"role": "user", "content": "What is France's capital?"}
... ]
>>> response = llm.call(messages)
>>> print(response)
"The capital of France is Paris."
""" """
# Validate parameters before proceeding with the call. High-level call method that:
self._validate_call_params() 1) Calls litellm.completion
2) Checks for function/tool calls
if isinstance(messages, str): 3) If a tool call is found:
messages = [{"role": "user", "content": messages}] a) executes the function
b) returns the result
# For O1 models, system messages are not supported. 4) If no tool call, returns the text response
# Convert any system messages into assistant messages.
if "o1" in self.model.lower():
for message in messages:
if message.get("role") == "system":
message["role"] = "assistant"
:param messages: The conversation messages
:param tools: Optional list of function schemas for function calling
:param callbacks: Optional list of callbacks
:param available_functions: A dictionary mapping function_name -> actual Python function
:return: Final text response from the LLM or the tool result
"""
with suppress_warnings(): with suppress_warnings():
if callbacks and len(callbacks) > 0: if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks) self.set_callbacks(callbacks)
try: try:
# --- 1) Format messages according to provider requirements # --- 1) Make the completion call
formatted_messages = self._format_messages_for_provider(messages)
# --- 2) Prepare the parameters for the completion call
params = { params = {
"model": self.model, "model": self.model,
"messages": formatted_messages, "messages": messages,
"timeout": self.timeout, "timeout": self.timeout,
"temperature": self.temperature, "temperature": self.temperature,
"top_p": self.top_p, "top_p": self.top_p,
@@ -297,28 +207,23 @@ class LLM:
"seed": self.seed, "seed": self.seed,
"logprobs": self.logprobs, "logprobs": self.logprobs,
"top_logprobs": self.top_logprobs, "top_logprobs": self.top_logprobs,
"api_base": self.api_base, "api_base": self.base_url,
"base_url": self.base_url,
"api_version": self.api_version, "api_version": self.api_version,
"api_key": self.api_key, "api_key": self.api_key,
"stream": False, "stream": False,
"tools": tools, "tools": tools, # pass the tool schema
"reasoning_effort": self.reasoning_effort,
**self.additional_params,
} }
# Remove None values from params
params = {k: v for k, v in params.items() if v is not None} params = {k: v for k, v in params.items() if v is not None}
# --- 2) Make the completion call response = litellm.completion(**params)
response = self._call_llm(params)
response_message = cast(Choices, cast(ModelResponse, response).choices)[ response_message = cast(Choices, cast(ModelResponse, response).choices)[
0 0
].message ].message
text_response = response_message.content or "" text_response = response_message.content or ""
tool_calls = getattr(response_message, "tool_calls", []) tool_calls = getattr(response_message, "tool_calls", [])
# --- 3) Handle callbacks with usage info # Ensure callbacks get the full response object with usage info
if callbacks and len(callbacks) > 0: if callbacks and len(callbacks) > 0:
for callback in callbacks: for callback in callbacks:
if hasattr(callback, "log_success_event"): if hasattr(callback, "log_success_event"):
@@ -331,14 +236,14 @@ class LLM:
end_time=0, end_time=0,
) )
# --- 4) If no tool calls, return the text response # --- 2) If no tool calls, return the text response
if not tool_calls or not available_functions: if not tool_calls or not available_functions:
return text_response return text_response
# --- 5) Handle the tool call # --- 3) Handle the tool call
tool_call = tool_calls[0] tool_call = tool_calls[0]
function_name = tool_call.function.name function_name = tool_call.function.name
print("function_name", function_name)
if function_name in available_functions: if function_name in available_functions:
try: try:
function_args = json.loads(tool_call.function.arguments) function_args = json.loads(tool_call.function.arguments)
@@ -350,21 +255,13 @@ class LLM:
try: try:
# Call the actual tool function # Call the actual tool function
result = fn(**function_args) result = fn(**function_args)
return result return result
except Exception as e: except Exception as e:
logging.error( logging.error(
f"Error executing function '{function_name}': {e}" f"Error executing function '{function_name}': {e}"
) )
crewai_event_bus.emit(
self,
event=ToolExecutionErrorEvent(
tool_name=function_name,
tool_args=function_args,
tool_class=fn,
error=str(e),
),
)
return text_response return text_response
else: else:
@@ -380,76 +277,10 @@ class LLM:
logging.error(f"LiteLLM call failed: {str(e)}") logging.error(f"LiteLLM call failed: {str(e)}")
raise raise
def _format_messages_for_provider(
self, messages: List[Dict[str, str]]
) -> List[Dict[str, str]]:
"""Format messages according to provider requirements.
Args:
messages: List of message dictionaries with 'role' and 'content' keys.
Can be empty or None.
Returns:
List of formatted messages according to provider requirements.
For Anthropic models, ensures first message has 'user' role.
Raises:
TypeError: If messages is None or contains invalid message format.
"""
if messages is None:
raise TypeError("Messages cannot be None")
# Validate message format first
for msg in messages:
if not isinstance(msg, dict) or "role" not in msg or "content" not in msg:
raise TypeError(
"Invalid message format. Each message must be a dict with 'role' and 'content' keys"
)
if not self.is_anthropic:
return messages
# Anthropic requires messages to start with 'user' role
if not messages or messages[0]["role"] == "system":
# If first message is system or empty, add a placeholder user message
return [{"role": "user", "content": "."}, *messages]
return messages
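A worked example of the Anthropic normalization above (only present on the side of this diff that defines _format_messages_for_provider); the model id is illustrative and no network call is made.

llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
msgs = [{"role": "system", "content": "Be terse."}]
print(llm._format_messages_for_provider(msgs))
# -> [{'role': 'user', 'content': '.'},
#     {'role': 'system', 'content': 'Be terse.'}]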
def _get_custom_llm_provider(self) -> str:
"""
Derives the custom_llm_provider from the model string.
- For example, if the model is "openrouter/deepseek/deepseek-chat", returns "openrouter".
- If the model is "gemini/gemini-1.5-pro", returns "gemini".
- If there is no '/', defaults to "openai".
"""
if "/" in self.model:
return self.model.split("/")[0]
return "openai"
def _validate_call_params(self) -> None:
"""
Validate parameters before making a call. Currently this only checks if
a response_format is provided and whether the model supports it.
The custom_llm_provider is dynamically determined from the model:
- E.g., "openrouter/deepseek/deepseek-chat" yields "openrouter"
- "gemini/gemini-1.5-pro" yields "gemini"
- If no slash is present, "openai" is assumed.
"""
provider = self._get_custom_llm_provider()
if self.response_format is not None and not supports_response_schema(
model=self.model,
custom_llm_provider=provider,
):
raise ValueError(
f"The model {self.model} does not support response_format for provider '{provider}'. "
"Please remove response_format or use a supported model."
)
def supports_function_calling(self) -> bool: def supports_function_calling(self) -> bool:
try: try:
params = get_supported_openai_params(model=self.model) params = get_supported_openai_params(model=self.model)
return params is not None and "tools" in params return "response_format" in params
except Exception as e: except Exception as e:
logging.error(f"Failed to get supported params: {str(e)}") logging.error(f"Failed to get supported params: {str(e)}")
return False return False
@@ -457,7 +288,7 @@ class LLM:
def supports_stop_words(self) -> bool: def supports_stop_words(self) -> bool:
try: try:
params = get_supported_openai_params(model=self.model) params = get_supported_openai_params(model=self.model)
return params is not None and "stop" in params return "stop" in params
except Exception as e: except Exception as e:
logging.error(f"Failed to get supported params: {str(e)}") logging.error(f"Failed to get supported params: {str(e)}")
return False return False
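Tying the capability probes back to call(), a minimal end-to-end sketch; the model name is illustrative and the matching API key must be set in the environment.

llm = LLM(model="gpt-4o-mini", temperature=0)
if llm.supports_function_calling():
    print("tool calling is available for this model")
print(llm.call([{"role": "user", "content": "Name one EU capital."}]))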
@@ -531,95 +362,3 @@ class LLM:
         litellm.success_callback = success_callbacks
         litellm.failure_callback = failure_callbacks
-
-    def _get_execution_context(self) -> Tuple[Optional[Any], Optional[Any]]:
-        """Get the agent and task from the execution context.
-
-        Returns:
-            tuple: (agent, task) from any AgentExecutor context, or (None, None) if not found
-        """
-        frame = inspect.currentframe()
-        caller_frame = frame.f_back if frame else None
-        agent = None
-        task = None
-
-        # Add a maximum depth to prevent infinite loops
-        max_depth = 100  # Reasonable limit for call stack depth
-        current_depth = 0
-
-        while caller_frame and current_depth < max_depth:
-            if "self" in caller_frame.f_locals:
-                caller_self = caller_frame.f_locals["self"]
-                if isinstance(caller_self, AgentExecutorProtocol):
-                    agent = caller_self.agent
-                    task = caller_self.task
-                    break
-            caller_frame = caller_frame.f_back
-            current_depth += 1
-
-        return agent, task
-
-    def _get_new_messages(self, messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
-        """Get only the new messages that haven't been processed before."""
-        if not hasattr(self, "_message_history"):
-            self._message_history = []
-
-        new_messages = []
-        for message in messages:
-            message_key = (message["role"], message["content"])
-            if message_key not in [
-                (m["role"], m["content"]) for m in self._message_history
-            ]:
-                new_messages.append(message)
-                self._message_history.append(message)
-        return new_messages
-
-    def _get_new_tool_results(self, agent) -> List[Dict]:
-        """Get only the new tool results that haven't been processed before."""
-        if not agent or not agent.tools_results:
-            return []
-
-        if not hasattr(self, "_tool_results_history"):
-            self._tool_results_history: List[Dict] = []
-
-        new_tool_results = []
-        for result in agent.tools_results:
-            # Process tool arguments to extract actual values
-            processed_args = {}
-            if isinstance(result["tool_args"], dict):
-                for key, value in result["tool_args"].items():
-                    if isinstance(value, dict) and "type" in value:
-                        # Skip metadata and just store the actual value
-                        continue
-                    processed_args[key] = value
-
-            # Create a clean result with processed arguments
-            clean_result = {
-                "tool_name": result["tool_name"],
-                "tool_args": processed_args,
-                "result": result["result"],
-                "content": result.get("content", ""),
-                "start_time": result.get("start_time", ""),
-            }
-
-            # Check if this exact tool execution exists in history
-            is_duplicate = False
-            for history_result in self._tool_results_history:
-                if (
-                    clean_result["tool_name"] == history_result["tool_name"]
-                    and str(clean_result["tool_args"])
-                    == str(history_result["tool_args"])
-                    and str(clean_result["result"]) == str(history_result["result"])
-                    and clean_result["content"] == history_result.get("content", "")
-                    and clean_result["start_time"]
-                    == history_result.get("start_time", "")
-                ):
-                    is_duplicate = True
-                    break
-
-            if not is_duplicate:
-                new_tool_results.append(clean_result)
-                self._tool_results_history.append(clean_result)
-
-        return new_tool_results
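Note: the `_get_execution_context` helper in this hunk discovers its caller by walking the interpreter stack with `inspect`. A minimal, self-contained sketch of that technique; the `Runner` class and the duck-typed `agent`/`task` check are illustrative stand-ins for crewai's `AgentExecutorProtocol`:

```python
import inspect
from typing import Any, Optional, Tuple

def find_executor_context(max_depth: int = 100) -> Tuple[Optional[Any], Optional[Any]]:
    """Walk up the call stack looking for a frame whose `self` exposes agent/task."""
    frame = inspect.currentframe()
    caller = frame.f_back if frame else None
    depth = 0
    while caller and depth < max_depth:  # the depth cap prevents unbounded walks
        candidate = caller.f_locals.get("self")
        if hasattr(candidate, "agent") and hasattr(candidate, "task"):
            return candidate.agent, candidate.task
        caller = caller.f_back
        depth += 1
    return None, None

class Runner:  # hypothetical executor-like object, not a crewai class
    agent, task = "researcher", "summarize"

    def run(self):
        return find_executor_context()

print(Runner().run())  # ('researcher', 'summarize')
```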

View File

@@ -1,7 +1,3 @@
-from typing import Optional
-
-from pydantic import PrivateAttr
-
 from crewai.memory.entity.entity_memory_item import EntityMemoryItem
 from crewai.memory.memory import Memory
 from crewai.memory.storage.rag_storage import RAGStorage
@@ -14,15 +10,13 @@ class EntityMemory(Memory):
     Inherits from the Memory class.
     """

-    _memory_provider: Optional[str] = PrivateAttr()
-
     def __init__(self, crew=None, embedder_config=None, storage=None, path=None):
-        if crew and hasattr(crew, "memory_config") and crew.memory_config is not None:
-            memory_provider = crew.memory_config.get("provider")
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
         else:
-            memory_provider = None
+            self.memory_provider = None

-        if memory_provider == "mem0":
+        if self.memory_provider == "mem0":
             try:
                 from crewai.memory.storage.mem0_storage import Mem0Storage
             except ImportError:
@@ -42,13 +36,11 @@ class EntityMemory(Memory):
                     path=path,
                 )
             )
-        super().__init__(storage=storage)
-        self._memory_provider = memory_provider
+        super().__init__(storage)

     def save(self, item: EntityMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
         """Saves an entity item into the SQLite storage."""
-        if self._memory_provider == "mem0":
+        if self.memory_provider == "mem0":
             data = f"""
             Remember details about the following entity:
             Name: {item.name}
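Note: the provider plumbing above boils down to reading an optional provider key off the crew's `memory_config`. A hedged, standalone sketch of that resolution logic (`FakeCrew` is a stand-in, not a crewai class):

```python
from typing import Optional

def resolve_memory_provider(crew) -> Optional[str]:
    # Mirrors the guarded lookup in this hunk: tolerate a missing crew,
    # a crew without memory_config, and an explicitly-None memory_config.
    if crew and hasattr(crew, "memory_config") and crew.memory_config is not None:
        return crew.memory_config.get("provider")
    return None

class FakeCrew:
    memory_config = {"provider": "mem0"}

print(resolve_memory_provider(FakeCrew()))  # mem0
print(resolve_memory_provider(None))        # None
```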

View File

@@ -17,7 +17,7 @@ class LongTermMemory(Memory):
     def __init__(self, storage=None, path=None):
         if not storage:
             storage = LTMSQLiteStorage(db_path=path) if path else LTMSQLiteStorage()
-        super().__init__(storage=storage)
+        super().__init__(storage)

     def save(self, item: LongTermMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
         metadata = item.metadata

View File

@@ -1,19 +1,15 @@
 from typing import Any, Dict, List, Optional

-from pydantic import BaseModel
+from crewai.memory.storage.rag_storage import RAGStorage


-class Memory(BaseModel):
+class Memory:
     """
     Base class for memory, now supporting agent tags and generic metadata.
     """

-    embedder_config: Optional[Dict[str, Any]] = None
-
-    storage: Any
-
-    def __init__(self, storage: Any, **data: Any):
-        super().__init__(storage=storage, **data)
+    def __init__(self, storage: RAGStorage):
+        self.storage = storage

     def save(
         self,
View File
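Note: one side of this hunk turns `Memory` into a pydantic model while keeping a positional `storage` argument. A small sketch of that pattern in isolation, with placeholder field values:

```python
from typing import Any, Dict, Optional
from pydantic import BaseModel

class Memory(BaseModel):
    embedder_config: Optional[Dict[str, Any]] = None
    storage: Any

    def __init__(self, storage: Any, **data: Any):
        # Forward the positional argument into pydantic's validating __init__.
        super().__init__(storage=storage, **data)

m = Memory("sqlite-backend", embedder_config={"provider": "openai"})
print(m.storage, m.embedder_config)
```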

@@ -1,7 +1,5 @@
 from typing import Any, Dict, Optional

-from pydantic import PrivateAttr
-
 from crewai.memory.memory import Memory
 from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
 from crewai.memory.storage.rag_storage import RAGStorage
@@ -16,15 +14,13 @@ class ShortTermMemory(Memory):
     MemoryItem instances.
     """

-    _memory_provider: Optional[str] = PrivateAttr()
-
     def __init__(self, crew=None, embedder_config=None, storage=None, path=None):
-        if crew and hasattr(crew, "memory_config") and crew.memory_config is not None:
-            memory_provider = crew.memory_config.get("provider")
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
         else:
-            memory_provider = None
+            self.memory_provider = None

-        if memory_provider == "mem0":
+        if self.memory_provider == "mem0":
             try:
                 from crewai.memory.storage.mem0_storage import Mem0Storage
             except ImportError:
@@ -43,8 +39,7 @@ class ShortTermMemory(Memory):
                     path=path,
                 )
             )
-        super().__init__(storage=storage)
-        self._memory_provider = memory_provider
+        super().__init__(storage)

     def save(
         self,
@@ -53,7 +48,7 @@ class ShortTermMemory(Memory):
         agent: Optional[str] = None,
     ) -> None:
         item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
-        if self._memory_provider == "mem0":
+        if self.memory_provider == "mem0":
             item.data = f"Remember the following insights from Agent run: {item.data}"

         super().save(value=item.data, metadata=item.metadata, agent=item.agent)

View File

@@ -13,7 +13,7 @@ class BaseRAGStorage(ABC):
         self,
         type: str,
         allow_reset: bool = True,
-        embedder_config: Optional[Dict[str, Any]] = None,
+        embedder_config: Optional[Any] = None,
         crew: Any = None,
     ):
         self.type = type

View File

@@ -21,6 +21,7 @@ from typing import (
     Union,
 )

+from opentelemetry.trace import Span
 from pydantic import (
     UUID4,
     BaseModel,
@@ -35,15 +36,10 @@ from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.tasks.guardrail_result import GuardrailResult
 from crewai.tasks.output_format import OutputFormat
 from crewai.tasks.task_output import TaskOutput
+from crewai.telemetry.telemetry import Telemetry
 from crewai.tools.base_tool import BaseTool
 from crewai.utilities.config import process_config
 from crewai.utilities.converter import Converter, convert_to_model
-from crewai.utilities.events import (
-    TaskCompletedEvent,
-    TaskFailedEvent,
-    TaskStartedEvent,
-)
-from crewai.utilities.events.crewai_event_bus import crewai_event_bus
 from crewai.utilities.i18n import I18N
 from crewai.utilities.printer import Printer
@@ -187,6 +183,8 @@ class Task(BaseModel):
         )
         return v

+    _telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
+    _execution_span: Optional[Span] = PrivateAttr(default=None)
     _original_description: Optional[str] = PrivateAttr(default=None)
     _original_expected_output: Optional[str] = PrivateAttr(default=None)
     _original_output_file: Optional[str] = PrivateAttr(default=None)
@@ -350,7 +348,6 @@ class Task(BaseModel):
         tools: Optional[List[Any]],
     ) -> TaskOutput:
         """Run the core execution logic of the task."""
-        try:
         agent = agent or self.agent
         self.agent = agent
         if not agent:
@@ -359,12 +356,13 @@ class Task(BaseModel):
             )

         self.start_time = datetime.datetime.now()
+        self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self)

         self.prompt_context = context
         tools = tools or self.tools or []
         self.processed_by_agents.add(agent.role)

-        crewai_event_bus.emit(self, TaskStartedEvent(context=context))
         result = agent.execute_task(
             task=self,
             context=context,
@@ -384,9 +382,7 @@ class Task(BaseModel):
         )

         if self.guardrail:
-            guardrail_result = GuardrailResult.from_tuple(
-                self.guardrail(task_output)
-            )
+            guardrail_result = GuardrailResult.from_tuple(self.guardrail(task_output))
             if not guardrail_result.success:
                 if self.retry_count >= self.max_retries:
                     raise Exception(
@@ -427,25 +423,19 @@ class Task(BaseModel):
         if self.callback:
             self.callback(self.output)

-        crew = self.agent.crew  # type: ignore[union-attr]
-        if crew and crew.task_callback and crew.task_callback != self.callback:
-            crew.task_callback(self.output)
+        if self._execution_span:
+            self._telemetry.task_ended(self._execution_span, self, agent.crew)
+            self._execution_span = None

         if self.output_file:
             content = (
                 json_output
                 if json_output
-                else pydantic_output.model_dump_json()
-                if pydantic_output
-                else result
+                else pydantic_output.model_dump_json() if pydantic_output else result
             )
             self._save_file(content)

-        crewai_event_bus.emit(self, TaskCompletedEvent(output=task_output))
         return task_output
-
-        except Exception as e:
-            self.end_time = datetime.datetime.now()
-            crewai_event_bus.emit(self, TaskFailedEvent(error=str(e)))
-            raise e  # Re-raise the exception after emitting the event

     def prompt(self) -> str:
         """Prompt the task.
@@ -462,7 +452,7 @@ class Task(BaseModel):
         return "\n".join(tasks_slices)

     def interpolate_inputs_and_add_conversation_history(
-        self, inputs: Dict[str, Union[str, int, float, Dict[str, Any], List[Any]]]
+        self, inputs: Dict[str, Union[str, int, float]]
     ) -> None:
         """Interpolate inputs into the task description, expected output, and output file path.
         Add conversation history if present.
@@ -534,9 +524,7 @@ class Task(BaseModel):
         )

     def interpolate_only(
-        self,
-        input_string: Optional[str],
-        inputs: Dict[str, Union[str, int, float, Dict[str, Any], List[Any]]],
+        self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]
     ) -> str:
         """Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched.

@@ -544,39 +532,17 @@ class Task(BaseModel):
             input_string: The string containing template variables to interpolate.
                 Can be None or empty, in which case an empty string is returned.
             inputs: Dictionary mapping template variables to their values.
-                Supported value types are strings, integers, floats, and dicts/lists
-                containing only these types and other nested dicts/lists.
+                Supported value types are strings, integers, and floats.
+                If input_string is empty or has no placeholders, inputs can be empty.

         Returns:
             The interpolated string with all template variables replaced with their values.
             Empty string if input_string is None or empty.

         Raises:
-            ValueError: If a value contains unsupported types
-            KeyError: If a template variable is not found in the inputs dictionary.
+            ValueError: If a required template variable is missing from inputs.
         """

-        # Validation function for recursive type checking
-        def validate_type(value: Any) -> None:
-            if value is None:
-                return
-            if isinstance(value, (str, int, float, bool)):
-                return
-            if isinstance(value, (dict, list)):
-                for item in value.values() if isinstance(value, dict) else value:
-                    validate_type(item)
-                return
-            raise ValueError(
-                f"Unsupported type {type(value).__name__} in inputs. "
-                "Only str, int, float, bool, dict, and list are allowed."
-            )
-
-        # Validate all input values
-        for key, value in inputs.items():
-            try:
-                validate_type(value)
-            except ValueError as e:
-                raise ValueError(f"Invalid value for key '{key}': {str(e)}") from e
-
         if input_string is None or not input_string:
             return ""
         if "{" not in input_string and "}" not in input_string:
@@ -585,7 +551,15 @@ class Task(BaseModel):
             raise ValueError(
                 "Inputs dictionary cannot be empty when interpolating variables"
             )

         try:
+            # Validate input types
+            for key, value in inputs.items():
+                if not isinstance(value, (str, int, float)):
+                    raise ValueError(
+                        f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}"
+                    )
+
             escaped_string = input_string.replace("{", "{{").replace("}", "}}")

             for key in inputs.keys():
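Note: the `validate_type` helper in this hunk generalizes input checking from flat scalars to arbitrarily nested containers. Extracted as a standalone function it behaves like this (the example values are made up):

```python
from typing import Any

def validate_type(value: Any) -> None:
    if value is None:
        return
    if isinstance(value, (str, int, float, bool)):
        return
    if isinstance(value, (dict, list)):
        # Recurse into dict values and list items.
        for item in value.values() if isinstance(value, dict) else value:
            validate_type(item)
        return
    raise ValueError(
        f"Unsupported type {type(value).__name__} in inputs. "
        "Only str, int, float, bool, dict, and list are allowed."
    )

validate_type({"topic": "AI", "meta": {"tags": ["ml", 3], "score": 0.9}})  # passes
try:
    validate_type({"when": object()})
except ValueError as e:
    print(e)  # Unsupported type object in inputs. ...
```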
@@ -678,32 +652,19 @@ class Task(BaseModel):
             return OutputFormat.PYDANTIC
         return OutputFormat.RAW

-    def _save_file(self, result: Union[Dict, str, Any]) -> None:
+    def _save_file(self, result: Any) -> None:
         """Save task output to a file.

-        Note:
-            For cross-platform file writing, especially on Windows, consider using FileWriterTool
-            from the crewai_tools package:
-                pip install 'crewai[tools]'
-                from crewai_tools import FileWriterTool
-
         Args:
             result: The result to save to the file. Can be a dict or any stringifiable object.

         Raises:
             ValueError: If output_file is not set
-            RuntimeError: If there is an error writing to the file. For cross-platform
-                compatibility, especially on Windows, use FileWriterTool from crewai_tools
-                package.
+            RuntimeError: If there is an error writing to the file
         """
         if self.output_file is None:
             raise ValueError("output_file is not set.")

-        FILEWRITER_RECOMMENDATION = (
-            "For cross-platform file writing, especially on Windows, "
-            "use FileWriterTool from crewai_tools package."
-        )
-
         try:
             resolved_path = Path(self.output_file).expanduser().resolve()
             directory = resolved_path.parent
@@ -719,11 +680,7 @@ class Task(BaseModel):
             else:
                 file.write(str(result))
         except (OSError, IOError) as e:
-            raise RuntimeError(
-                "\n".join(
-                    [f"Failed to save output file: {e}", FILEWRITER_RECOMMENDATION]
-                )
-            )
+            raise RuntimeError(f"Failed to save output file: {e}")
         return None

    def __repr__(self):
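Note: both sides of the `_save_file` hunk share the same path handling: expand the user directory, resolve to an absolute path, then write. A dependency-free sketch of that flow (the `mkdir` call is an assumption about how the elided middle of the method uses `directory`):

```python
from pathlib import Path

def save_output(output_file: str, result: str) -> None:
    resolved_path = Path(output_file).expanduser().resolve()
    directory = resolved_path.parent
    directory.mkdir(parents=True, exist_ok=True)  # assumed: ensure parent exists
    try:
        resolved_path.write_text(result, encoding="utf-8")
    except (OSError, IOError) as e:
        raise RuntimeError(f"Failed to save output file: {e}")

save_output("~/tmp/crew_demo/report.md", "# Findings\n")
```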

View File

@@ -7,11 +7,11 @@ from crewai.utilities import I18N

 i18n = I18N()


 class AddImageToolSchema(BaseModel):
     image_url: str = Field(..., description="The URL or path of the image to add")
     action: Optional[str] = Field(
-        default=None, description="Optional context or question about the image"
+        default=None,
+        description="Optional context or question about the image"
     )
@@ -36,7 +36,10 @@ class AddImageTool(BaseTool):
                 "image_url": {
                     "url": image_url,
                 },
-            },
+            }
         ]

-        return {"role": "user", "content": content}
+        return {
+            "role": "user",
+            "content": content
+        }
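Note: the method being reformatted here assembles an OpenAI-style multimodal user message. A standalone sketch of the payload shape (the default prompt text is an assumption; the real one comes from `i18n`):

```python
from typing import Optional

def add_image_message(image_url: str, action: Optional[str] = None) -> dict:
    content = [
        {"type": "text", "text": action or "Describe this image"},  # assumed default
        {"type": "image_url", "image_url": {"url": image_url}},
    ]
    return {"role": "user", "content": content}

print(add_image_message("https://example.com/chart.png", "What trend is shown?"))
```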

View File

@@ -1,31 +1,28 @@
 import ast
 import datetime
 import json
+import re
 import time
-from datetime import UTC
 from difflib import SequenceMatcher
-from json import JSONDecodeError
 from textwrap import dedent
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Union

-import json5
 from json_repair import repair_json

+import crewai.utilities.events as events
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.task import Task
 from crewai.telemetry import Telemetry
 from crewai.tools import BaseTool
 from crewai.tools.structured_tool import CrewStructuredTool
 from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
+from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
 from crewai.utilities import I18N, Converter, ConverterError, Printer
-from crewai.utilities.events.crewai_event_bus import crewai_event_bus
-from crewai.utilities.events.tool_usage_events import (
-    ToolSelectionErrorEvent,
-    ToolUsageErrorEvent,
-    ToolUsageFinishedEvent,
-    ToolValidateInputErrorEvent,
-)
-
-try:
-    import agentops  # type: ignore
-except ImportError:
-    agentops = None
+
+try:
+    import agentops  # type: ignore
+except ImportError:
+    agentops = None

 OPENAI_BIGGER_MODELS = [
     "gpt-4",
     "gpt-4o",
@@ -118,10 +115,7 @@ class ToolUsage:
             self._printer.print(content=f"\n\n{error}\n", color="red")
             return error

-        if (
-            isinstance(tool, CrewStructuredTool)
-            and tool.name == self._i18n.tools("add_image")["name"]  # type: ignore
-        ):
+        if isinstance(tool, CrewStructuredTool) and tool.name == self._i18n.tools("add_image")["name"]:  # type: ignore
             try:
                 result = self._use(tool_string=tool_string, tool=tool, calling=calling)
                 return result
@@ -141,6 +135,7 @@ class ToolUsage:
         tool: Any,
         calling: Union[ToolCalling, InstructorToolCalling],
     ) -> str:  # TODO: Fix this return type
+        tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None  # type: ignore
         if self._check_tool_repeated_usage(calling=calling):  # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
             try:
                 result = self._i18n.errors("task_repeated_usage").format(
@@ -158,7 +153,6 @@ class ToolUsage:
         self.task.increment_tools_errors()

         started_at = time.time()
-        started_at_trace = datetime.datetime.now(UTC)
         from_cache = False

         result = None  # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
@@ -186,9 +180,7 @@ class ToolUsage:
                 if calling.arguments:
                     try:
-                        acceptable_args = tool.args_schema.model_json_schema()[
-                            "properties"
-                        ].keys()  # type: ignore
+                        acceptable_args = tool.args_schema.model_json_schema()["properties"].keys()  # type: ignore
                         arguments = {
                             k: v
                             for k, v in calling.arguments.items()
@@ -209,7 +201,7 @@ class ToolUsage:
                             error=e, tool=tool.name, tool_inputs=tool.description
                         )
                         error = ToolUsageErrorException(
-                            f"\n{error_message}.\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
+                            f'\n{error_message}.\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
                         ).message
                         self.task.increment_tools_errors()
                         if self.agent.verbose:
@@ -219,6 +211,10 @@ class ToolUsage:
                 return error  # type: ignore # No return value expected

             self.task.increment_tools_errors()
+            if agentops:
+                agentops.record(
+                    agentops.ErrorEvent(exception=e, trigger_event=tool_event)
+                )
             return self.use(calling=calling, tool_string=tool_string)  # type: ignore # No return value expected

         if self.tools_handler:
@@ -234,6 +230,9 @@ class ToolUsage:
             self.tools_handler.on_tool_use(
                 calling=calling, output=result, should_cache=should_cache
             )
+        if agentops:
+            agentops.record(tool_event)
+
         self._telemetry.tool_usage(
             llm=self.function_calling_llm,
             tool_name=tool.name,
@@ -244,7 +243,6 @@ class ToolUsage:
             "result": result,
             "tool_name": tool.name,
             "tool_args": calling.arguments,
-            "start_time": started_at_trace,
         }

         self.on_tool_use_finished(
@@ -309,33 +307,14 @@ class ToolUsage:
         ):
             return tool
         self.task.increment_tools_errors()
-        tool_selection_data = {
-            "agent_key": self.agent.key,
-            "agent_role": self.agent.role,
-            "tool_name": tool_name,
-            "tool_args": {},
-            "tool_class": self.tools_description,
-        }
         if tool_name and tool_name != "":
-            error = f"Action '{tool_name}' don't exist, these are the only available Actions:\n{self.tools_description}"
-            crewai_event_bus.emit(
-                self,
-                ToolSelectionErrorEvent(
-                    **tool_selection_data,
-                    error=error,
-                ),
-            )
-            raise Exception(error)
+            raise Exception(
+                f"Action '{tool_name}' don't exist, these are the only available Actions:\n{self.tools_description}"
+            )
         else:
-            error = f"I forgot the Action name, these are the only available Actions: {self.tools_description}"
-            crewai_event_bus.emit(
-                self,
-                ToolSelectionErrorEvent(
-                    **tool_selection_data,
-                    error=error,
-                ),
-            )
-            raise Exception(error)
+            raise Exception(
+                f"I forgot the Action name, these are the only available Actions: {self.tools_description}"
+            )

     def _render(self) -> str:
         """Render the tool name and description in plain text."""
@@ -388,7 +367,7 @@ class ToolUsage:
                 raise
             else:
                 return ToolUsageErrorException(
-                    f"{self._i18n.errors('tool_arguments_error')}"
+                    f'{self._i18n.errors("tool_arguments_error")}'
                 )

         if not isinstance(arguments, dict):
@@ -396,7 +375,7 @@ class ToolUsage:
                 raise
             else:
                 return ToolUsageErrorException(
-                    f"{self._i18n.errors('tool_arguments_error')}"
+                    f'{self._i18n.errors("tool_arguments_error")}'
                )

         return ToolCalling(
@@ -424,80 +403,38 @@ class ToolUsage:
             if self.agent.verbose:
                 self._printer.print(content=f"\n\n{e}\n", color="red")
             return ToolUsageErrorException(  # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
-                f"{self._i18n.errors('tool_usage_error').format(error=e)}\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
+                f'{self._i18n.errors("tool_usage_error").format(error=e)}\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
             )

         return self._tool_calling(tool_string)

-    def _validate_tool_input(self, tool_input: Optional[str]) -> Dict[str, Any]:
-        if tool_input is None:
-            return {}
-
-        if not isinstance(tool_input, str) or not tool_input.strip():
-            raise Exception(
-                "Tool input must be a valid dictionary in JSON or Python literal format"
-            )
-
-        # Attempt 1: Parse as JSON
-        try:
-            arguments = json.loads(tool_input)
-            if isinstance(arguments, dict):
-                return arguments
-        except (JSONDecodeError, TypeError):
-            pass  # Continue to the next parsing attempt
-
-        # Attempt 2: Parse as Python literal
-        try:
-            arguments = ast.literal_eval(tool_input)
-            if isinstance(arguments, dict):
-                return arguments
-        except (ValueError, SyntaxError):
-            pass  # Continue to the next parsing attempt
-
-        # Attempt 3: Parse as JSON5
-        try:
-            arguments = json5.loads(tool_input)
-            if isinstance(arguments, dict):
-                return arguments
-        except (JSONDecodeError, ValueError, TypeError):
-            pass  # Continue to the next parsing attempt
-
-        # Attempt 4: Repair JSON
-        try:
-            repaired_input = repair_json(tool_input)
-            self._printer.print(
-                content=f"Repaired JSON: {repaired_input}", color="blue"
-            )
-            arguments = json.loads(repaired_input)
-            if isinstance(arguments, dict):
-                return arguments
-        except Exception as e:
-            error = f"Failed to repair JSON: {e}"
-            self._printer.print(content=error, color="red")
-
-        error_message = (
-            "Tool input must be a valid dictionary in JSON or Python literal format"
-        )
-        self._emit_validate_input_error(error_message)
-        # If all parsing attempts fail, raise an error
-        raise Exception(error_message)
-
-    def _emit_validate_input_error(self, final_error: str):
-        tool_selection_data = {
-            "agent_key": self.agent.key,
-            "agent_role": self.agent.role,
-            "tool_name": self.action.tool,
-            "tool_args": str(self.action.tool_input),
-            "tool_class": self.__class__.__name__,
-        }
-        crewai_event_bus.emit(
-            self,
-            ToolValidateInputErrorEvent(**tool_selection_data, error=final_error),
-        )
+    def _validate_tool_input(self, tool_input: str) -> Dict[str, Any]:
+        try:
+            # Replace Python literals with JSON equivalents
+            replacements = {
+                r"'": '"',
+                r"None": "null",
+                r"True": "true",
+                r"False": "false",
+            }
+            for pattern, replacement in replacements.items():
+                tool_input = re.sub(pattern, replacement, tool_input)
+            arguments = json.loads(tool_input)
+        except json.JSONDecodeError:
+            # Attempt to repair JSON string
+            repaired_input = repair_json(tool_input)
+            try:
+                arguments = json.loads(repaired_input)
+            except json.JSONDecodeError as e:
+                raise Exception(f"Invalid tool input JSON: {e}")
+        return arguments
     def on_tool_error(self, tool: Any, tool_calling: ToolCalling, e: Exception) -> None:
         event_data = self._prepare_event_data(tool, tool_calling)
-        crewai_event_bus.emit(self, ToolUsageErrorEvent(**{**event_data, "error": e}))
+        events.emit(
+            source=self, event=ToolUsageError(**{**event_data, "error": str(e)})
+        )

     def on_tool_use_finished(
         self, tool: Any, tool_calling: ToolCalling, from_cache: bool, started_at: float
@@ -511,7 +448,7 @@ class ToolUsage:
                 "from_cache": from_cache,
             }
         )
-        crewai_event_bus.emit(self, ToolUsageFinishedEvent(**event_data))
+        events.emit(source=self, event=ToolUsageFinished(**event_data))

     def _prepare_event_data(self, tool: Any, tool_calling: ToolCalling) -> dict:
         return {
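Note: the two `_validate_tool_input` variants above take very different risks. The single-pass version rewrites `'`, `None`, `True`, and `False` with blanket `re.sub` calls, which can corrupt argument values that legitimately contain those substrings, while the multi-attempt version tries progressively more permissive parsers. A dependency-free sketch of the layered approach (only the JSON and Python-literal stages; the json5 and json_repair stages are omitted here):

```python
import ast
import json
from typing import Any, Dict

def parse_tool_input(tool_input: str) -> Dict[str, Any]:
    # Attempt 1: strict JSON
    try:
        parsed = json.loads(tool_input)
        if isinstance(parsed, dict):
            return parsed
    except (json.JSONDecodeError, TypeError):
        pass
    # Attempt 2: Python literal syntax, e.g. {'query': 'None of the above'}
    try:
        parsed = ast.literal_eval(tool_input)
        if isinstance(parsed, dict):
            return parsed
    except (ValueError, SyntaxError):
        pass
    raise ValueError("Tool input must be a valid dictionary in JSON or Python literal format")

print(parse_tool_input("{'query': 'None of the above', 'limit': 3}"))
```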

View File

@@ -0,0 +1,24 @@
+from datetime import datetime
+from typing import Any, Dict
+
+from pydantic import BaseModel
+
+
+class ToolUsageEvent(BaseModel):
+    agent_key: str
+    agent_role: str
+    tool_name: str
+    tool_args: Dict[str, Any]
+    tool_class: str
+    run_attempts: int | None = None
+    delegations: int | None = None
+
+
+class ToolUsageFinished(ToolUsageEvent):
+    started_at: datetime
+    finished_at: datetime
+    from_cache: bool = False
+
+
+class ToolUsageError(ToolUsageEvent):
+    error: str
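Usage sketch for the event models added above; the field values are illustrative, and the import path is the one this diff's `tool_usage.py` uses:

```python
from datetime import datetime
from crewai.tools.tool_usage_events import ToolUsageFinished  # path per this diff

evt = ToolUsageFinished(
    agent_key="agent-1",
    agent_role="Researcher",
    tool_name="web_search",
    tool_args={"query": "crewai"},
    tool_class="SearchTool",
    started_at=datetime.now(),
    finished_at=datetime.now(),
    from_cache=False,
)
print(evt.model_dump())  # pydantic validates and serializes the payload
```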

View File

@@ -1,39 +0,0 @@
-from contextlib import contextmanager
-from contextvars import ContextVar
-from typing import Generator
-
-
-class TraceContext:
-    """Maintains the current trace context throughout the execution stack.
-
-    This class provides a context manager for tracking trace execution across
-    async and sync code paths using ContextVars.
-    """
-
-    _context: ContextVar = ContextVar("trace_context", default=None)
-
-    @classmethod
-    def get_current(cls):
-        """Get the current trace context.
-
-        Returns:
-            Optional[UnifiedTraceController]: The current trace controller or None if not set.
-        """
-        return cls._context.get()
-
-    @classmethod
-    @contextmanager
-    def set_current(cls, trace):
-        """Set the current trace context within a context manager.
-
-        Args:
-            trace: The trace controller to set as current.
-
-        Yields:
-            UnifiedTraceController: The current trace controller.
-        """
-        token = cls._context.set(trace)
-        try:
-            yield trace
-        finally:
-            cls._context.reset(token)
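Note: `TraceContext` is a thin wrapper over `contextvars`; the token returned by `ContextVar.set()` is what makes nesting (and async concurrency) safe. A freestanding sketch of the same mechanism:

```python
from contextlib import contextmanager
from contextvars import ContextVar

_trace: ContextVar = ContextVar("trace_context", default=None)

@contextmanager
def set_current(trace):
    token = _trace.set(trace)
    try:
        yield trace
    finally:
        _trace.reset(token)  # restores whatever was current before

with set_current({"trace_id": "outer"}):
    with set_current({"trace_id": "inner"}):
        print(_trace.get())  # {'trace_id': 'inner'}
    print(_trace.get())      # {'trace_id': 'outer'}
```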

View File

@@ -1,19 +0,0 @@
-from enum import Enum
-
-
-class TraceType(Enum):
-    LLM_CALL = "llm_call"
-    TOOL_CALL = "tool_call"
-    FLOW_STEP = "flow_step"
-    START_CALL = "start_call"
-
-
-class RunType(Enum):
-    KICKOFF = "kickoff"
-    TRAIN = "train"
-    TEST = "test"
-
-
-class CrewType(Enum):
-    CREW = "crew"
-    FLOW = "flow"

View File

@@ -1,89 +0,0 @@
-from datetime import datetime
-from typing import Any, Dict, List, Optional
-
-from pydantic import BaseModel, Field
-
-
-class ToolCall(BaseModel):
-    """Model representing a tool call during execution"""
-
-    name: str
-    arguments: Dict[str, Any]
-    output: str
-    start_time: datetime
-    end_time: Optional[datetime] = None
-    latency_ms: Optional[int] = None
-    error: Optional[str] = None
-
-
-class LLMRequest(BaseModel):
-    """Model representing the LLM request details"""
-
-    model: str
-    messages: List[Dict[str, str]]
-    temperature: Optional[float] = None
-    max_tokens: Optional[int] = None
-    stop_sequences: Optional[List[str]] = None
-    additional_params: Dict[str, Any] = Field(default_factory=dict)
-
-
-class LLMResponse(BaseModel):
-    """Model representing the LLM response details"""
-
-    content: str
-    finish_reason: Optional[str] = None
-
-
-class FlowStepIO(BaseModel):
-    """Model representing flow step input/output details"""
-
-    function_name: str
-    inputs: Dict[str, Any] = Field(default_factory=dict)
-    outputs: Any
-    metadata: Dict[str, Any] = Field(default_factory=dict)
-
-
-class CrewTrace(BaseModel):
-    """Model for tracking detailed information about LLM interactions and Flow steps"""
-
-    deployment_instance_id: Optional[str] = Field(
-        description="ID of the deployment instance"
-    )
-    trace_id: str = Field(description="Unique identifier for this trace")
-    run_id: str = Field(description="Identifier for the execution run")
-    agent_role: Optional[str] = Field(description="Role of the agent")
-    task_id: Optional[str] = Field(description="ID of the current task being executed")
-    task_name: Optional[str] = Field(description="Name of the current task")
-    task_description: Optional[str] = Field(
-        description="Description of the current task"
-    )
-    trace_type: str = Field(description="Type of the trace")
-    crew_type: str = Field(description="Type of the crew")
-    run_type: str = Field(description="Type of the run")
-
-    # Timing information
-    start_time: Optional[datetime] = None
-    end_time: Optional[datetime] = None
-    latency_ms: Optional[int] = None
-
-    # Request/Response for LLM calls
-    request: Optional[LLMRequest] = None
-    response: Optional[LLMResponse] = None
-
-    # Input/Output for Flow steps
-    flow_step: Optional[FlowStepIO] = None
-
-    # Tool usage
-    tool_calls: List[ToolCall] = Field(default_factory=list)
-
-    # Metrics
-    tokens_used: Optional[int] = None
-    prompt_tokens: Optional[int] = None
-    completion_tokens: Optional[int] = None
-    cost: Optional[float] = None
-
-    # Additional metadata
-    status: str = "running"  # running, completed, error
-    error: Optional[str] = None
-    metadata: Dict[str, Any] = Field(default_factory=dict)
-    tags: List[str] = Field(default_factory=list)

View File

@@ -1,543 +0,0 @@
-import inspect
-import os
-from datetime import UTC, datetime
-from functools import wraps
-from typing import Any, Awaitable, Callable, Dict, List, Optional
-from uuid import uuid4
-
-from crewai.traces.context import TraceContext
-from crewai.traces.enums import CrewType, RunType, TraceType
-from crewai.traces.models import (
-    CrewTrace,
-    FlowStepIO,
-    LLMRequest,
-    LLMResponse,
-    ToolCall,
-)
-
-
-class UnifiedTraceController:
-    """Controls and manages trace execution and recording.
-
-    This class handles the lifecycle of traces including creation, execution tracking,
-    and recording of results for various types of operations (LLM calls, tool calls, flow steps).
-    """
-
-    _task_traces: Dict[str, List["UnifiedTraceController"]] = {}
-
-    def __init__(
-        self,
-        trace_type: TraceType,
-        run_type: RunType,
-        crew_type: CrewType,
-        run_id: str,
-        deployment_instance_id: str = os.environ.get(
-            "CREWAI_DEPLOYMENT_INSTANCE_ID", ""
-        ),
-        parent_trace_id: Optional[str] = None,
-        agent_role: Optional[str] = "unknown",
-        task_name: Optional[str] = None,
-        task_description: Optional[str] = None,
-        task_id: Optional[str] = None,
-        flow_step: Dict[str, Any] = {},
-        tool_calls: List[ToolCall] = [],
-        **context: Any,
-    ) -> None:
-        """Initialize a new trace controller.
-
-        Args:
-            trace_type: Type of trace being recorded.
-            run_type: Type of run being executed.
-            crew_type: Type of crew executing the trace.
-            run_id: Unique identifier for the run.
-            deployment_instance_id: Optional deployment instance identifier.
-            parent_trace_id: Optional parent trace identifier for nested traces.
-            agent_role: Role of the agent executing the trace.
-            task_name: Optional name of the task being executed.
-            task_description: Optional description of the task.
-            task_id: Optional unique identifier for the task.
-            flow_step: Optional flow step information.
-            tool_calls: Optional list of tool calls made during execution.
-            **context: Additional context parameters.
-        """
-        self.trace_id = str(uuid4())
-        self.run_id = run_id
-        self.parent_trace_id = parent_trace_id
-        self.trace_type = trace_type
-        self.run_type = run_type
-        self.crew_type = crew_type
-        self.context = context
-        self.agent_role = agent_role
-        self.task_name = task_name
-        self.task_description = task_description
-        self.task_id = task_id
-        self.deployment_instance_id = deployment_instance_id
-        self.children: List[Dict[str, Any]] = []
-        self.start_time: Optional[datetime] = None
-        self.end_time: Optional[datetime] = None
-        self.error: Optional[str] = None
-        self.tool_calls = tool_calls
-        self.flow_step = flow_step
-        self.status: str = "running"
-
-        # Add trace to task's trace collection if task_id is present
-        if task_id:
-            self._add_to_task_traces()
-
-    def _add_to_task_traces(self) -> None:
-        """Add this trace to the task's trace collection."""
-        if not hasattr(UnifiedTraceController, "_task_traces"):
-            UnifiedTraceController._task_traces = {}
-        if self.task_id is None:
-            return
-        if self.task_id not in UnifiedTraceController._task_traces:
-            UnifiedTraceController._task_traces[self.task_id] = []
-        UnifiedTraceController._task_traces[self.task_id].append(self)
-
-    @classmethod
-    def get_task_traces(cls, task_id: str) -> List["UnifiedTraceController"]:
-        """Get all traces for a specific task.
-
-        Args:
-            task_id: The ID of the task to get traces for
-
-        Returns:
-            List of traces associated with the task
-        """
-        return cls._task_traces.get(task_id, [])
-
-    @classmethod
-    def clear_task_traces(cls, task_id: str) -> None:
-        """Clear traces for a specific task.
-
-        Args:
-            task_id: The ID of the task to clear traces for
-        """
-        if hasattr(cls, "_task_traces") and task_id in cls._task_traces:
-            del cls._task_traces[task_id]
-
-    def _get_current_trace(self) -> "UnifiedTraceController":
-        return TraceContext.get_current()
-
-    def start_trace(self) -> "UnifiedTraceController":
-        """Start the trace execution.
-
-        Returns:
-            UnifiedTraceController: Self for method chaining.
-        """
-        self.start_time = datetime.now(UTC)
-        return self
-
-    def end_trace(self, result: Any = None, error: Optional[str] = None) -> None:
-        """End the trace execution and record results.
-
-        Args:
-            result: Optional result from the trace execution.
-            error: Optional error message if the trace failed.
-        """
-        self.end_time = datetime.now(UTC)
-        self.status = "error" if error else "completed"
-        self.error = error
-        self._record_trace(result)
-
-    def add_child_trace(self, child_trace: Dict[str, Any]) -> None:
-        """Add a child trace to this trace's execution history.
-
-        Args:
-            child_trace: The child trace information to add.
-        """
-        self.children.append(child_trace)
-
-    def to_crew_trace(self) -> CrewTrace:
-        """Convert to CrewTrace format for storage.
-
-        Returns:
-            CrewTrace: The trace data in CrewTrace format.
-        """
-        latency_ms = None
-
-        if self.tool_calls and hasattr(self.tool_calls[0], "start_time"):
-            self.start_time = self.tool_calls[0].start_time
-
-        if self.start_time and self.end_time:
-            latency_ms = int((self.end_time - self.start_time).total_seconds() * 1000)
-
-        request = None
-        response = None
-        flow_step_obj = None
-
-        if self.trace_type in [TraceType.LLM_CALL, TraceType.TOOL_CALL]:
-            request = LLMRequest(
-                model=self.context.get("model", "unknown"),
-                messages=self.context.get("messages", []),
-                temperature=self.context.get("temperature"),
-                max_tokens=self.context.get("max_tokens"),
-                stop_sequences=self.context.get("stop_sequences"),
-            )
-            if "response" in self.context:
-                response = LLMResponse(
-                    content=self.context["response"].get("content", ""),
-                    finish_reason=self.context["response"].get("finish_reason"),
-                )
-        elif self.trace_type == TraceType.FLOW_STEP:
-            flow_step_obj = FlowStepIO(
-                function_name=self.flow_step.get("function_name", "unknown"),
-                inputs=self.flow_step.get("inputs", {}),
-                outputs={"result": self.context.get("response")},
-                metadata=self.flow_step.get("metadata", {}),
-            )
-
-        return CrewTrace(
-            deployment_instance_id=self.deployment_instance_id,
-            trace_id=self.trace_id,
-            task_id=self.task_id,
-            run_id=self.run_id,
-            agent_role=self.agent_role,
-            task_name=self.task_name,
-            task_description=self.task_description,
-            trace_type=self.trace_type.value,
-            crew_type=self.crew_type.value,
-            run_type=self.run_type.value,
-            start_time=self.start_time,
-            end_time=self.end_time,
-            latency_ms=latency_ms,
-            request=request,
-            response=response,
-            flow_step=flow_step_obj,
-            tool_calls=self.tool_calls,
-            tokens_used=self.context.get("tokens_used"),
-            prompt_tokens=self.context.get("prompt_tokens"),
-            completion_tokens=self.context.get("completion_tokens"),
-            status=self.status,
-            error=self.error,
-        )
-
-    def _record_trace(self, result: Any = None) -> None:
-        """Record the trace.
-
-        This method is called when a trace is completed. It ensures the trace
-        is properly recorded and associated with its task if applicable.
-
-        Args:
-            result: Optional result to include in the trace
-        """
-        if result:
-            self.context["response"] = result
-
-        # Add to task traces if this trace belongs to a task
-        if self.task_id:
-            self._add_to_task_traces()
-
-
-def should_trace() -> bool:
-    """Check if tracing is enabled via environment variable."""
-    return os.getenv("CREWAI_ENABLE_TRACING", "false").lower() == "true"
-
-
-# Crew main trace
-def init_crew_main_trace(func: Callable[..., Any]) -> Callable[..., Any]:
-    """Decorator to initialize and track the main crew execution trace.
-
-    This decorator sets up the trace context for the main crew execution,
-    handling both synchronous and asynchronous crew operations.
-
-    Args:
-        func: The crew function to be traced.
-
-    Returns:
-        Wrapped function that creates and manages the main crew trace context.
-    """
-
-    @wraps(func)
-    def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
-        if not should_trace():
-            return func(self, *args, **kwargs)
-
-        trace = build_crew_main_trace(self)
-        with TraceContext.set_current(trace):
-            try:
-                return func(self, *args, **kwargs)
-            except Exception as e:
-                trace.end_trace(error=str(e))
-                raise
-
-    return wrapper
-
-
-def build_crew_main_trace(self: Any) -> "UnifiedTraceController":
-    """Build the main trace controller for a crew execution.
-
-    This function creates a trace controller configured for the main crew execution,
-    handling different run types (kickoff, test, train) and maintaining context.
-
-    Args:
-        self: The crew instance.
-
-    Returns:
-        UnifiedTraceController: The configured trace controller for the crew.
-    """
-    run_type = RunType.KICKOFF
-    if hasattr(self, "_test") and self._test:
-        run_type = RunType.TEST
-    elif hasattr(self, "_train") and self._train:
-        run_type = RunType.TRAIN
-
-    current_trace = TraceContext.get_current()
-
-    trace = UnifiedTraceController(
-        trace_type=TraceType.LLM_CALL,
-        run_type=run_type,
-        crew_type=current_trace.crew_type if current_trace else CrewType.CREW,
-        run_id=current_trace.run_id if current_trace else str(self.id),
-        parent_trace_id=current_trace.trace_id if current_trace else None,
-    )
-    return trace
-
-
-# Flow main trace
-def init_flow_main_trace(
-    func: Callable[..., Awaitable[Any]],
-) -> Callable[..., Awaitable[Any]]:
-    """Decorator to initialize and track the main flow execution trace.
-
-    Args:
-        func: The async flow function to be traced.
-
-    Returns:
-        Wrapped async function that creates and manages the main flow trace context.
-    """
-
-    @wraps(func)
-    async def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
-        if not should_trace():
-            return await func(self, *args, **kwargs)
-
-        trace = build_flow_main_trace(self, *args, **kwargs)
-        with TraceContext.set_current(trace):
-            try:
-                return await func(self, *args, **kwargs)
-            except Exception:
-                raise
-
-    return wrapper
-
-
-def build_flow_main_trace(
-    self: Any, *args: Any, **kwargs: Any
-) -> "UnifiedTraceController":
-    """Build the main trace controller for a flow execution.
-
-    Args:
-        self: The flow instance.
-        *args: Variable positional arguments.
-        **kwargs: Variable keyword arguments.
-
-    Returns:
-        UnifiedTraceController: The configured trace controller for the flow.
-    """
-    current_trace = TraceContext.get_current()
-    trace = UnifiedTraceController(
-        trace_type=TraceType.FLOW_STEP,
-        run_id=current_trace.run_id if current_trace else str(self.flow_id),
-        parent_trace_id=current_trace.trace_id if current_trace else None,
-        crew_type=CrewType.FLOW,
-        run_type=RunType.KICKOFF,
-        context={
-            "crew_name": self.__class__.__name__,
-            "inputs": kwargs.get("inputs", {}),
-            "agents": [],
-            "tasks": [],
-        },
-    )
-    return trace
-
-
-# Flow step trace
-def trace_flow_step(
-    func: Callable[..., Awaitable[Any]],
-) -> Callable[..., Awaitable[Any]]:
-    """Decorator to trace individual flow step executions.
-
-    Args:
-        func: The async flow step function to be traced.
-
-    Returns:
-        Wrapped async function that creates and manages the flow step trace context.
-    """
-
-    @wraps(func)
-    async def wrapper(
-        self: Any,
-        method_name: str,
-        method: Callable[..., Any],
-        *args: Any,
-        **kwargs: Any,
-    ) -> Any:
-        if not should_trace():
-            return await func(self, method_name, method, *args, **kwargs)
-
-        trace = build_flow_step_trace(self, method_name, method, *args, **kwargs)
-        with TraceContext.set_current(trace):
-            trace.start_trace()
-            try:
-                result = await func(self, method_name, method, *args, **kwargs)
-                trace.end_trace(result=result)
-                return result
-            except Exception as e:
-                trace.end_trace(error=str(e))
-                raise
-
-    return wrapper
-
-
-def build_flow_step_trace(
-    self: Any, method_name: str, method: Callable[..., Any], *args: Any, **kwargs: Any
-) -> "UnifiedTraceController":
-    """Build a trace controller for an individual flow step.
-
-    Args:
-        self: The flow instance.
-        method_name: Name of the method being executed.
-        method: The actual method being executed.
-        *args: Variable positional arguments.
-        **kwargs: Variable keyword arguments.
-
-    Returns:
-        UnifiedTraceController: The configured trace controller for the flow step.
-    """
-    current_trace = TraceContext.get_current()
-
-    # Get method signature
-    sig = inspect.signature(method)
-    params = list(sig.parameters.values())
-
-    # Create inputs dictionary mapping parameter names to values
-    method_params = [p for p in params if p.name != "self"]
-    inputs: Dict[str, Any] = {}
-
-    # Map positional args to their parameter names
-    for i, param in enumerate(method_params):
-        if i < len(args):
-            inputs[param.name] = args[i]
-
-    # Add keyword arguments
-    inputs.update(kwargs)
-
-    trace = UnifiedTraceController(
-        trace_type=TraceType.FLOW_STEP,
-        run_type=current_trace.run_type if current_trace else RunType.KICKOFF,
-        crew_type=current_trace.crew_type if current_trace else CrewType.FLOW,
-        run_id=current_trace.run_id if current_trace else str(self.flow_id),
-        parent_trace_id=current_trace.trace_id if current_trace else None,
-        flow_step={
-            "function_name": method_name,
-            "inputs": inputs,
-            "metadata": {
-                "crew_name": self.__class__.__name__,
-            },
-        },
-    )
-    return trace
-
-
-# LLM trace
-def trace_llm_call(func: Callable[..., Any]) -> Callable[..., Any]:
-    """Decorator to trace LLM calls.
-
-    Args:
-        func: The function to trace.
-
-    Returns:
-        Wrapped function that creates and manages the LLM call trace context.
-    """
-
-    @wraps(func)
-    def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
-        if not should_trace():
-            return func(self, *args, **kwargs)
-
-        trace = build_llm_trace(self, *args, **kwargs)
-        with TraceContext.set_current(trace):
-            trace.start_trace()
-            try:
-                response = func(self, *args, **kwargs)
-
-                # Extract relevant data from response
-                trace_response = {
-                    "content": response["choices"][0]["message"]["content"],
-                    "finish_reason": response["choices"][0].get("finish_reason"),
-                }
-
-                # Add usage metrics to context
-                if "usage" in response:
-                    trace.context["tokens_used"] = response["usage"].get(
-                        "total_tokens", 0
-                    )
-                    trace.context["prompt_tokens"] = response["usage"].get(
-                        "prompt_tokens", 0
-                    )
-                    trace.context["completion_tokens"] = response["usage"].get(
-                        "completion_tokens", 0
-                    )
-
-                trace.end_trace(trace_response)
-                return response
-            except Exception as e:
-                trace.end_trace(error=str(e))
-                raise
-
-    return wrapper
-
-
-def build_llm_trace(
-    self: Any, params: Dict[str, Any], *args: Any, **kwargs: Any
-) -> Any:
-    """Build a trace controller for an LLM call.
-
-    Args:
-        self: The LLM instance.
-        params: The parameters for the LLM call.
-        *args: Variable positional arguments.
-        **kwargs: Variable keyword arguments.
-
-    Returns:
-        UnifiedTraceController: The configured trace controller for the LLM call.
-    """
-    current_trace = TraceContext.get_current()
-    agent, task = self._get_execution_context()
-
-    # Get new messages and tool results
-    new_messages = self._get_new_messages(params.get("messages", []))
-    new_tool_results = self._get_new_tool_results(agent)
-
-    # Create trace context
-    trace = UnifiedTraceController(
-        trace_type=TraceType.TOOL_CALL if new_tool_results else TraceType.LLM_CALL,
-        crew_type=current_trace.crew_type if current_trace else CrewType.CREW,
-        run_type=current_trace.run_type if current_trace else RunType.KICKOFF,
-        run_id=current_trace.run_id if current_trace else str(uuid4()),
-        parent_trace_id=current_trace.trace_id if current_trace else None,
-        agent_role=agent.role if agent else "unknown",
-        task_id=str(task.id) if task else None,
-        task_name=task.name if task else None,
-        task_description=task.description if task else None,
-        model=self.model,
-        messages=new_messages,
-        temperature=self.temperature,
-        max_tokens=self.max_tokens,
-        stop_sequences=self.stop,
-        tool_calls=[
-            ToolCall(
-                name=result["tool_name"],
-                arguments=result["tool_args"],
-                output=str(result["result"]),
-                start_time=result.get("start_time", ""),
-                end_time=datetime.now(UTC),
-            )
-            for result in new_tool_results
-        ],
-    )
-    return trace
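Note: all four decorators in this file share one gating idiom: tracing is a no-op unless `CREWAI_ENABLE_TRACING` is set, and errors end the trace before re-raising. A compact sketch of that skeleton (the print calls stand in for real trace recording):

```python
import os
from functools import wraps

def should_trace() -> bool:
    # Verbatim check from the module above.
    return os.getenv("CREWAI_ENABLE_TRACING", "false").lower() == "true"

def traced(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        if not should_trace():
            return func(*args, **kwargs)
        print(f"trace start: {func.__name__}")
        try:
            result = func(*args, **kwargs)
            print("trace end: completed")
            return result
        except Exception as e:
            print(f"trace end: error ({e})")
            raise
    return wrapper

@traced
def step() -> int:
    return 42

print(step())
```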

View File

@@ -15,7 +15,7 @@
     "final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\n```\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n```",
     "format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nHere is the expected format I must follow:\n\n```\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n This Thought/Action/Action Input/Result process can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
     "task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
-    "expected_output": "\nThis is the expected criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
+    "expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
     "human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
     "getting_input": "This is the agent's final answer: {final_answer}\n\n",
     "summarizer_system_message": "You are a helpful assistant that summarizes text.",
@@ -23,8 +23,8 @@
     "summary": "This is a summary of our conversation so far:\n{merged_summary}",
     "manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.",
     "formatted_task_instructions": "Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.",
-    "conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals.",
-    "feedback_instructions": "User feedback: {feedback}\nInstructions: Use this feedback to enhance the next output iteration.\nNote: Do not respond or add commentary."
+    "human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\"",
+    "conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals."
   },
   "errors": {
     "force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}",

View File

@@ -4,4 +4,3 @@ DEFAULT_SCORE_THRESHOLD = 0.35
 KNOWLEDGE_DIRECTORY = "knowledge"
 MAX_LLM_RETRY = 3
 MAX_FILE_NAME_LENGTH = 255
-EMITTER_COLOR = "bold_blue"

View File

@@ -20,11 +20,11 @@ class ConverterError(Exception):
 class Converter(OutputConverter):
     """Class that converts text into either pydantic or json."""

-    def to_pydantic(self, current_attempt=1) -> BaseModel:
+    def to_pydantic(self, current_attempt=1):
         """Convert text to pydantic."""
         try:
             if self.llm.supports_function_calling():
-                result = self._create_instructor().to_pydantic()
+                return self._create_instructor().to_pydantic()
             else:
                 response = self.llm.call(
                     [
@@ -32,40 +32,18 @@ class Converter(OutputConverter):
                         {"role": "user", "content": self.text},
                     ]
                 )
-                try:
-                    # Try to directly validate the response JSON
-                    result = self.model.model_validate_json(response)
-                except ValidationError:
-                    # If direct validation fails, attempt to extract valid JSON
-                    result = handle_partial_json(response, self.model, False, None)
-
-            # Ensure result is a BaseModel instance
-            if not isinstance(result, BaseModel):
-                if isinstance(result, dict):
-                    result = self.model.parse_obj(result)
-                elif isinstance(result, str):
-                    try:
-                        parsed = json.loads(result)
-                        result = self.model.parse_obj(parsed)
-                    except Exception as parse_err:
-                        raise ConverterError(
-                            f"Failed to convert partial JSON result into Pydantic: {parse_err}"
-                        )
-                else:
-                    raise ConverterError(
-                        "handle_partial_json returned an unexpected type."
-                    )
-            return result
+                return self.model.model_validate_json(response)
         except ValidationError as e:
             if current_attempt < self.max_attempts:
                 return self.to_pydantic(current_attempt + 1)
             raise ConverterError(
-                f"Failed to convert text into a Pydantic model due to validation error: {e}"
+                f"Failed to convert text into a Pydantic model due to the following validation error: {e}"
             )
         except Exception as e:
             if current_attempt < self.max_attempts:
                 return self.to_pydantic(current_attempt + 1)
             raise ConverterError(
-                f"Failed to convert text into a Pydantic model due to error: {e}"
+                f"Failed to convert text into a Pydantic model due to the following error: {e}"
             )

     def to_json(self, current_attempt=1):
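Note: one side of `to_pydantic` layers several recovery strategies: direct `model_validate_json`, partial-JSON extraction, then coercion of dict/str results back into the model. A simplified sketch of the first two layers (the `{...}`-slicing fallback is a stand-in for crewai's `handle_partial_json` helper):

```python
import json
from pydantic import BaseModel, ValidationError

class Review(BaseModel):
    score: int
    summary: str

def to_pydantic(response: str) -> Review:
    try:
        # Layer 1: the reply is already clean JSON.
        return Review.model_validate_json(response)
    except ValidationError:
        # Layer 2: salvage the first {...} block from a chatty reply.
        start, end = response.find("{"), response.rfind("}") + 1
        return Review.model_validate(json.loads(response[start:end]))

print(to_pydantic('Sure! Here you go: {"score": 4, "summary": "solid"}'))
```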
@@ -219,15 +197,11 @@ def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
if llm.supports_function_calling(): if llm.supports_function_calling():
model_schema = PydanticSchemaParser(model=model).get_schema() model_schema = PydanticSchemaParser(model=model).get_schema()
instructions += ( instructions += (
f"\n\nOutput ONLY the valid JSON and nothing else.\n\n" f"\n\nThe JSON should follow this schema:\n```json\n{model_schema}\n```"
f"The JSON must follow this schema exactly:\n```json\n{model_schema}\n```"
) )
else: else:
model_description = generate_model_description(model) model_description = generate_model_description(model)
instructions += ( instructions += f"\n\nThe JSON should follow this format:\n{model_description}"
f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"
f"The JSON must follow this format exactly:\n{model_description}"
)
return instructions return instructions
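
The hunk above is the crux of the converter change: the newer side first attempts strict validation, then salvages partial JSON before giving up. A minimal standalone sketch of that validate-then-recover pattern, using a toy `Person` model and a regex in place of crewai's `handle_partial_json` helper:

```python
import json
import re

from pydantic import BaseModel, ValidationError


class Person(BaseModel):
    name: str
    age: int


def to_model(text: str) -> Person:
    try:
        # Fast path: the LLM returned clean JSON.
        return Person.model_validate_json(text)
    except ValidationError:
        # Fallback: pull the first {...} block out of surrounding prose.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match is None:
            raise
        return Person.model_validate(json.loads(match.group(0)))


print(to_model('Sure! Here it is: {"name": "Ada", "age": 36}'))
# -> name='Ada' age=36
```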

View File

@@ -1,5 +1,5 @@
import os import os
from typing import Any, Dict, Optional, cast from typing import Any, Dict, cast
from chromadb import Documents, EmbeddingFunction, Embeddings from chromadb import Documents, EmbeddingFunction, Embeddings
from chromadb.api.types import validate_embedding_function from chromadb.api.types import validate_embedding_function
@@ -18,12 +18,11 @@ class EmbeddingConfigurator:
"bedrock": self._configure_bedrock, "bedrock": self._configure_bedrock,
"huggingface": self._configure_huggingface, "huggingface": self._configure_huggingface,
"watson": self._configure_watson, "watson": self._configure_watson,
"custom": self._configure_custom,
} }
def configure_embedder( def configure_embedder(
self, self,
embedder_config: Optional[Dict[str, Any]] = None, embedder_config: Dict[str, Any] | None = None,
) -> EmbeddingFunction: ) -> EmbeddingFunction:
"""Configures and returns an embedding function based on the provided config.""" """Configures and returns an embedding function based on the provided config."""
if embedder_config is None: if embedder_config is None:
@@ -31,19 +30,21 @@ class EmbeddingConfigurator:
provider = embedder_config.get("provider") provider = embedder_config.get("provider")
config = embedder_config.get("config", {}) config = embedder_config.get("config", {})
model_name = config.get("model") if provider != "custom" else None model_name = config.get("model")
if isinstance(provider, EmbeddingFunction):
try:
validate_embedding_function(provider)
return provider
except Exception as e:
raise ValueError(f"Invalid custom embedding function: {str(e)}")
if provider not in self.embedding_functions: if provider not in self.embedding_functions:
raise Exception( raise Exception(
f"Unsupported embedding provider: {provider}, supported providers: {list(self.embedding_functions.keys())}" f"Unsupported embedding provider: {provider}, supported providers: {list(self.embedding_functions.keys())}"
) )
embedding_function = self.embedding_functions[provider] return self.embedding_functions[provider](config, model_name)
return (
embedding_function(config)
if provider == "custom"
else embedding_function(config, model_name)
)
@staticmethod @staticmethod
def _create_default_embedding_function(): def _create_default_embedding_function():
@@ -64,13 +65,6 @@ class EmbeddingConfigurator:
return OpenAIEmbeddingFunction( return OpenAIEmbeddingFunction(
api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"), api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
model_name=model_name, model_name=model_name,
api_base=config.get("api_base", None),
api_type=config.get("api_type", None),
api_version=config.get("api_version", None),
default_headers=config.get("default_headers", None),
dimensions=config.get("dimensions", None),
deployment_id=config.get("deployment_id", None),
organization_id=config.get("organization_id", None),
) )
@staticmethod @staticmethod
@@ -85,10 +79,6 @@ class EmbeddingConfigurator:
api_type=config.get("api_type", "azure"), api_type=config.get("api_type", "azure"),
api_version=config.get("api_version"), api_version=config.get("api_version"),
model_name=model_name, model_name=model_name,
default_headers=config.get("default_headers"),
dimensions=config.get("dimensions"),
deployment_id=config.get("deployment_id"),
organization_id=config.get("organization_id"),
) )
@staticmethod @staticmethod
@@ -111,8 +101,6 @@ class EmbeddingConfigurator:
return GoogleVertexEmbeddingFunction( return GoogleVertexEmbeddingFunction(
model_name=model_name, model_name=model_name,
api_key=config.get("api_key"), api_key=config.get("api_key"),
project_id=config.get("project_id"),
region=config.get("region"),
) )
@staticmethod @staticmethod
@@ -124,7 +112,6 @@ class EmbeddingConfigurator:
return GoogleGenerativeAiEmbeddingFunction( return GoogleGenerativeAiEmbeddingFunction(
model_name=model_name, model_name=model_name,
api_key=config.get("api_key"), api_key=config.get("api_key"),
task_type=config.get("task_type"),
) )
@staticmethod @staticmethod
@@ -155,11 +142,9 @@ class EmbeddingConfigurator:
AmazonBedrockEmbeddingFunction, AmazonBedrockEmbeddingFunction,
) )
# Allow custom model_name override with backwards compatibility return AmazonBedrockEmbeddingFunction(
kwargs = {"session": config.get("session")} session=config.get("session"),
if model_name is not None: )
kwargs["model_name"] = model_name
return AmazonBedrockEmbeddingFunction(**kwargs)
@staticmethod @staticmethod
def _configure_huggingface(config, model_name): def _configure_huggingface(config, model_name):
@@ -209,28 +194,3 @@ class EmbeddingConfigurator:
raise e raise e
return WatsonEmbeddingFunction() return WatsonEmbeddingFunction()
@staticmethod
def _configure_custom(config):
custom_embedder = config.get("embedder")
if isinstance(custom_embedder, EmbeddingFunction):
try:
validate_embedding_function(custom_embedder)
return custom_embedder
except Exception as e:
raise ValueError(f"Invalid custom embedding function: {str(e)}")
elif callable(custom_embedder):
try:
instance = custom_embedder()
if isinstance(instance, EmbeddingFunction):
validate_embedding_function(instance)
return instance
raise ValueError(
"Custom embedder does not create an EmbeddingFunction instance"
)
except Exception as e:
raise ValueError(f"Error instantiating custom embedder: {str(e)}")
else:
raise ValueError(
"Custom embedder must be an instance of `EmbeddingFunction` or a callable that creates one"
)
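
For the `custom` provider removed on the right-hand side, the config carries the embedder itself rather than a model name. A toy sketch of a conforming `EmbeddingFunction` (assuming a recent chromadb, where `__call__` takes an `input` batch); the hash-based vectors are illustrative only:

```python
from chromadb import Documents, EmbeddingFunction, Embeddings


class TinyHashEmbedder(EmbeddingFunction):
    """Toy embedder: one 8-dim vector per document, from character codes."""

    def __call__(self, input: Documents) -> Embeddings:
        return [
            [float(ord(c) % 7) for c in (doc + " " * 8)[:8]]
            for doc in input
        ]


# Shape expected by _configure_custom above: the instance sits under
# config["embedder"].
embedder_config = {
    "provider": "custom",
    "config": {"embedder": TinyHashEmbedder()},
}
```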

View File

@@ -1,12 +1,11 @@
from collections import defaultdict from collections import defaultdict
from pydantic import BaseModel, Field, InstanceOf from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE from rich.box import HEAVY_EDGE
from rich.console import Console from rich.console import Console
from rich.table import Table from rich.table import Table
from crewai.agent import Agent from crewai.agent import Agent
from crewai.llm import LLM
from crewai.task import Task from crewai.task import Task
from crewai.tasks.task_output import TaskOutput from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
@@ -24,7 +23,7 @@ class CrewEvaluator:
Attributes: Attributes:
crew (Crew): The crew of agents to evaluate. crew (Crew): The crew of agents to evaluate.
eval_llm (LLM): Language model instance to use for evaluations openai_model_name (str): The model to use for evaluating the performance of the agents (for now ONLY OpenAI accepted).
tasks_scores (defaultdict): A dictionary to store the scores of the agents for each task. tasks_scores (defaultdict): A dictionary to store the scores of the agents for each task.
iteration (int): The current iteration of the evaluation. iteration (int): The current iteration of the evaluation.
""" """
@@ -33,9 +32,9 @@ class CrewEvaluator:
run_execution_times: defaultdict = defaultdict(list) run_execution_times: defaultdict = defaultdict(list)
iteration: int = 0 iteration: int = 0
def __init__(self, crew, eval_llm: InstanceOf[LLM]): def __init__(self, crew, openai_model_name: str):
self.crew = crew self.crew = crew
self.llm = eval_llm self.openai_model_name = openai_model_name
self._telemetry = Telemetry() self._telemetry = Telemetry()
self._setup_for_evaluating() self._setup_for_evaluating()
@@ -52,7 +51,7 @@ class CrewEvaluator:
), ),
backstory="Evaluator agent for crew evaluation with precise capabilities to evaluate the performance of the agents in the crew based on the tasks they have performed", backstory="Evaluator agent for crew evaluation with precise capabilities to evaluate the performance of the agents in the crew based on the tasks they have performed",
verbose=False, verbose=False,
llm=self.llm, llm=self.openai_model_name,
) )
def _evaluation_task( def _evaluation_task(
@@ -182,7 +181,7 @@ class CrewEvaluator:
self.crew, self.crew,
evaluation_result.pydantic.quality, evaluation_result.pydantic.quality,
current_task.execution_duration, current_task.execution_duration,
self.llm.model, self.openai_model_name,
) )
self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality) self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality)
self.run_execution_times[self.iteration].append( self.run_execution_times[self.iteration].append(
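
The signature change above swaps a bare model-name string for a configured `LLM` instance. A hedged sketch of both call styles; the evaluator's module path is assumed here, since it is not shown in the diff:

```python
from crewai.llm import LLM
# Assumed path for the evaluator shown above:
from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator


def build_evaluator(crew):
    # Newer side: pass any configured LLM object.
    return CrewEvaluator(crew, eval_llm=LLM(model="gpt-4o"))
    # Older side: only an OpenAI model-name string was accepted.
    # return CrewEvaluator(crew, openai_model_name="gpt-4o")
```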

View File

@@ -3,9 +3,19 @@ from typing import List
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
from crewai.utilities import Converter from crewai.utilities import Converter
from crewai.utilities.events import TaskEvaluationEvent, crewai_event_bus
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
agentops = None
try:
from agentops import track_agent # type: ignore
except ImportError:
def track_agent(name):
def noop(f):
return f
return noop
class Entity(BaseModel): class Entity(BaseModel):
name: str = Field(description="The name of the entity.") name: str = Field(description="The name of the entity.")
@@ -38,15 +48,12 @@ class TrainingTaskEvaluation(BaseModel):
) )
@track_agent(name="Task Evaluator")
class TaskEvaluator: class TaskEvaluator:
def __init__(self, original_agent): def __init__(self, original_agent):
self.llm = original_agent.llm self.llm = original_agent.llm
self.original_agent = original_agent
def evaluate(self, task, output) -> TaskEvaluation: def evaluate(self, task, output) -> TaskEvaluation:
crewai_event_bus.emit(
self, TaskEvaluationEvent(evaluation_type="task_evaluation")
)
evaluation_query = ( evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n" f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
f"Task Description:\n{task.description}\n\n" f"Task Description:\n{task.description}\n\n"
@@ -83,39 +90,15 @@ class TaskEvaluator:
- training_data (dict): The training data to be evaluated. - training_data (dict): The training data to be evaluated.
- agent_id (str): The ID of the agent. - agent_id (str): The ID of the agent.
""" """
crewai_event_bus.emit(
self, TaskEvaluationEvent(evaluation_type="training_data_evaluation")
)
output_training_data = training_data[agent_id] output_training_data = training_data[agent_id]
final_aggregated_data = "" final_aggregated_data = ""
for _, data in output_training_data.items():
for iteration, data in output_training_data.items():
improved_output = data.get("improved_output")
initial_output = data.get("initial_output")
human_feedback = data.get("human_feedback")
if not all([improved_output, initial_output, human_feedback]):
missing_fields = [
field
for field in ["improved_output", "initial_output", "human_feedback"]
if not data.get(field)
]
error_msg = (
f"Critical training data error: Missing fields ({', '.join(missing_fields)}) "
f"for agent {agent_id} in iteration {iteration}.\n"
"This indicates a broken training process. "
"Cannot proceed with evaluation.\n"
"Please check your training implementation."
)
raise ValueError(error_msg)
final_aggregated_data += ( final_aggregated_data += (
f"Iteration: {iteration}\n" f"Initial Output:\n{data['initial_output']}\n\n"
f"Initial Output:\n{initial_output}\n\n" f"Human Feedback:\n{data['human_feedback']}\n\n"
f"Human Feedback:\n{human_feedback}\n\n" f"Improved Output:\n{data['improved_output']}\n\n"
f"Improved Output:\n{improved_output}\n\n"
"------------------------------------------------\n\n"
) )
evaluation_query = ( evaluation_query = (
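
The left-hand side above refuses to aggregate training data when any iteration is missing a field; the right-hand side silently formatted whatever was present. A self-contained sketch of the stricter guard (the incomplete entry makes it raise):

```python
training_data = {
    0: {
        "initial_output": "Hi",
        "human_feedback": "Say Hello instead",
        "improved_output": "",  # deliberately missing
    }
}

for iteration, data in training_data.items():
    missing = [
        field
        for field in ("improved_output", "initial_output", "human_feedback")
        if not data.get(field)
    ]
    if missing:
        # Fails fast instead of training on incomplete feedback.
        raise ValueError(
            f"Missing fields ({', '.join(missing)}) in iteration {iteration}"
        )
```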

View File

@@ -0,0 +1,44 @@
from functools import wraps
from typing import Any, Callable, Dict, Generic, List, Type, TypeVar
from pydantic import BaseModel
T = TypeVar("T")
EVT = TypeVar("EVT", bound=BaseModel)
class Emitter(Generic[T, EVT]):
_listeners: Dict[Type[EVT], List[Callable]] = {}
def on(self, event_type: Type[EVT]):
def decorator(func: Callable):
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
self._listeners.setdefault(event_type, []).append(wrapper)
return wrapper
return decorator
def emit(self, source: T, event: EVT) -> None:
event_type = type(event)
for func in self._listeners.get(event_type, []):
func(source, event)
default_emitter = Emitter[Any, BaseModel]()
def emit(source: Any, event: BaseModel, raise_on_error: bool = False) -> None:
try:
default_emitter.emit(source, event)
except Exception as e:
if raise_on_error:
raise e
else:
print(f"Error emitting event: {e}")
def on(event_type: Type[BaseModel]) -> Callable:
return default_emitter.on(event_type)
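
A usage sketch for the new emitter above, relying only on the module-level `on` and `emit` helpers it defines (the module's import path is not shown in this diff):

```python
from pydantic import BaseModel


class TaskDone(BaseModel):
    name: str


@on(TaskDone)  # registers the handler on the shared default_emitter
def handle_task_done(source, event: TaskDone):
    print(f"{source} finished {event.name}")


emit("worker-1", TaskDone(name="research"))  # -> worker-1 finished research
```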

View File

@@ -1,40 +0,0 @@
from .crew_events import (
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewTrainStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTestStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
)
from .agent_events import (
AgentExecutionStartedEvent,
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
)
from .task_events import TaskStartedEvent, TaskCompletedEvent, TaskFailedEvent, TaskEvaluationEvent
from .flow_events import (
FlowCreatedEvent,
FlowStartedEvent,
FlowFinishedEvent,
FlowPlotEvent,
MethodExecutionStartedEvent,
MethodExecutionFinishedEvent,
MethodExecutionFailedEvent,
)
from .crewai_event_bus import CrewAIEventsBus, crewai_event_bus
from .tool_usage_events import (
ToolUsageFinishedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
ToolExecutionErrorEvent,
ToolSelectionErrorEvent,
ToolUsageEvent,
ToolValidateInputErrorEvent,
)
# events
from .event_listener import EventListener
from .third_party.agentops_listener import agentops_listener

View File

@@ -1,40 +0,0 @@
from typing import TYPE_CHECKING, Any, Dict, Optional, Sequence, Union
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from .base_events import CrewEvent
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
class AgentExecutionStartedEvent(CrewEvent):
"""Event emitted when an agent starts executing a task"""
agent: BaseAgent
task: Any
tools: Optional[Sequence[Union[BaseTool, CrewStructuredTool]]]
task_prompt: str
type: str = "agent_execution_started"
model_config = {"arbitrary_types_allowed": True}
class AgentExecutionCompletedEvent(CrewEvent):
"""Event emitted when an agent completes executing a task"""
agent: BaseAgent
task: Any
output: str
type: str = "agent_execution_completed"
class AgentExecutionErrorEvent(CrewEvent):
"""Event emitted when an agent encounters an error during execution"""
agent: BaseAgent
task: Any
error: str
type: str = "agent_execution_error"

View File

@@ -1,14 +0,0 @@
from abc import ABC, abstractmethod
from logging import Logger
from crewai.utilities.events.crewai_event_bus import CrewAIEventsBus, crewai_event_bus
class BaseEventListener(ABC):
def __init__(self):
super().__init__()
self.setup_listeners(crewai_event_bus)
@abstractmethod
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus):
pass
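
A minimal concrete listener over the abstract base above; because `__init__` calls `setup_listeners` against the global bus, constructing the object is the entire registration step:

```python
from crewai.utilities.events.crew_events import CrewKickoffCompletedEvent


class PrintListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_crew_done(source, event):
            print(f"crew '{event.crew_name}' completed")


listener = PrintListener()  # handlers are now live on the global bus
```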

View File

@@ -1,10 +0,0 @@
from datetime import datetime
from pydantic import BaseModel, Field
class CrewEvent(BaseModel):
"""Base class for all crew events"""
timestamp: datetime = Field(default_factory=datetime.now)
type: str

View File

@@ -1,81 +0,0 @@
from typing import Any, Dict, Optional, Union
from pydantic import InstanceOf
from crewai.utilities.events.base_events import CrewEvent
class CrewKickoffStartedEvent(CrewEvent):
"""Event emitted when a crew starts execution"""
crew_name: Optional[str]
inputs: Optional[Dict[str, Any]]
type: str = "crew_kickoff_started"
class CrewKickoffCompletedEvent(CrewEvent):
"""Event emitted when a crew completes execution"""
crew_name: Optional[str]
output: Any
type: str = "crew_kickoff_completed"
class CrewKickoffFailedEvent(CrewEvent):
"""Event emitted when a crew fails to complete execution"""
error: str
crew_name: Optional[str]
type: str = "crew_kickoff_failed"
class CrewTrainStartedEvent(CrewEvent):
"""Event emitted when a crew starts training"""
crew_name: Optional[str]
n_iterations: int
filename: str
inputs: Optional[Dict[str, Any]]
type: str = "crew_train_started"
class CrewTrainCompletedEvent(CrewEvent):
"""Event emitted when a crew completes training"""
crew_name: Optional[str]
n_iterations: int
filename: str
type: str = "crew_train_completed"
class CrewTrainFailedEvent(CrewEvent):
"""Event emitted when a crew fails to complete training"""
error: str
crew_name: Optional[str]
type: str = "crew_train_failed"
class CrewTestStartedEvent(CrewEvent):
"""Event emitted when a crew starts testing"""
crew_name: Optional[str]
n_iterations: int
eval_llm: Optional[Union[str, Any]]
inputs: Optional[Dict[str, Any]]
type: str = "crew_test_started"
class CrewTestCompletedEvent(CrewEvent):
"""Event emitted when a crew completes testing"""
crew_name: Optional[str]
type: str = "crew_test_completed"
class CrewTestFailedEvent(CrewEvent):
"""Event emitted when a crew fails to complete testing"""
error: str
crew_name: Optional[str]
type: str = "crew_test_failed"

View File

@@ -1,113 +0,0 @@
import threading
from contextlib import contextmanager
from typing import Any, Callable, Dict, List, Type, TypeVar, cast
from blinker import Signal
from crewai.utilities.events.base_events import CrewEvent
from crewai.utilities.events.event_types import EventTypes
EventT = TypeVar("EventT", bound=CrewEvent)
class CrewAIEventsBus:
"""
A singleton event bus that uses blinker signals for event handling.
Allows both internal (Flow/Crew) and external event handling.
"""
_instance = None
_lock = threading.Lock()
def __new__(cls):
if cls._instance is None:
with cls._lock:
if cls._instance is None: # prevent race condition
cls._instance = super(CrewAIEventsBus, cls).__new__(cls)
cls._instance._initialize()
return cls._instance
def _initialize(self) -> None:
"""Initialize the event bus internal state"""
self._signal = Signal("crewai_event_bus")
self._handlers: Dict[Type[CrewEvent], List[Callable]] = {}
def on(
self, event_type: Type[EventT]
) -> Callable[[Callable[[Any, EventT], None]], Callable[[Any, EventT], None]]:
"""
Decorator to register an event handler for a specific event type.
Usage:
@crewai_event_bus.on(AgentExecutionCompletedEvent)
def on_agent_execution_completed(
source: Any, event: AgentExecutionCompletedEvent
):
print(f"👍 Agent '{event.agent}' completed task")
print(f" Output: {event.output}")
"""
def decorator(
handler: Callable[[Any, EventT], None],
) -> Callable[[Any, EventT], None]:
if event_type not in self._handlers:
self._handlers[event_type] = []
self._handlers[event_type].append(
cast(Callable[[Any, EventT], None], handler)
)
return handler
return decorator
def emit(self, source: Any, event: CrewEvent) -> None:
"""
Emit an event to all registered handlers
Args:
source: The object emitting the event
event: The event instance to emit
"""
event_type = type(event)
if event_type in self._handlers:
for handler in self._handlers[event_type]:
handler(source, event)
self._signal.send(source, event=event)
def clear_handlers(self) -> None:
"""Clear all registered event handlers - useful for testing"""
self._handlers.clear()
def register_handler(
self, event_type: Type[EventTypes], handler: Callable[[Any, EventTypes], None]
) -> None:
"""Register an event handler for a specific event type"""
if event_type not in self._handlers:
self._handlers[event_type] = []
self._handlers[event_type].append(
cast(Callable[[Any, EventTypes], None], handler)
)
@contextmanager
def scoped_handlers(self):
"""
Context manager for temporary event handling scope.
Useful for testing or temporary event handling.
Usage:
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStarted)
def temp_handler(source, event):
print("Temporary handler")
# Do stuff...
# Handlers are cleared after the context
"""
previous_handlers = self._handlers.copy()
self._handlers.clear()
try:
yield
finally:
self._handlers = previous_handlers
# Global instance
crewai_event_bus = CrewAIEventsBus()
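
Besides the decorator form shown in the docstring, `register_handler` wires a plain function onto the bus. A short sketch, constructing the event with the required `crew_name`/`inputs` fields from `crew_events` above:

```python
from crewai.utilities.events.crew_events import CrewKickoffStartedEvent


def log_kickoff(source, event):
    print(f"kickoff started for crew: {event.crew_name}")


crewai_event_bus.register_handler(CrewKickoffStartedEvent, log_kickoff)
crewai_event_bus.emit(
    source=None,
    event=CrewKickoffStartedEvent(crew_name="demo", inputs=None),
)
```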

View File

@@ -1,257 +0,0 @@
from pydantic import PrivateAttr
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities import Logger
from crewai.utilities.constants import EMITTER_COLOR
from crewai.utilities.events.base_event_listener import BaseEventListener
from .agent_events import AgentExecutionCompletedEvent, AgentExecutionStartedEvent
from .crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from .flow_events import (
FlowCreatedEvent,
FlowFinishedEvent,
FlowStartedEvent,
MethodExecutionFailedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from .task_events import TaskCompletedEvent, TaskFailedEvent, TaskStartedEvent
from .tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
class EventListener(BaseEventListener):
_instance = None
_telemetry: Telemetry = PrivateAttr(default_factory=lambda: Telemetry())
logger = Logger(verbose=True, default_color=EMITTER_COLOR)
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance._initialized = False
return cls._instance
def __init__(self):
if not hasattr(self, "_initialized") or not self._initialized:
super().__init__()
self._telemetry = Telemetry()
self._telemetry.set_tracer()
self._initialized = True
# ----------- CREW EVENTS -----------
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source, event: CrewKickoffStartedEvent):
self.logger.log(
f"🚀 Crew '{event.crew_name}' started",
event.timestamp,
)
self._telemetry.crew_execution_span(source, event.inputs)
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event: CrewKickoffCompletedEvent):
final_string_output = event.output.raw
self._telemetry.end_crew(source, final_string_output)
self.logger.log(
f"✅ Crew '{event.crew_name}' completed",
event.timestamp,
)
@crewai_event_bus.on(CrewKickoffFailedEvent)
def on_crew_failed(source, event: CrewKickoffFailedEvent):
self.logger.log(
f"❌ Crew '{event.crew_name}' failed",
event.timestamp,
)
@crewai_event_bus.on(CrewTestStartedEvent)
def on_crew_test_started(source, event: CrewTestStartedEvent):
cloned_crew = source.copy()
cloned_crew._telemetry.test_execution_span(
cloned_crew,
event.n_iterations,
event.inputs,
event.eval_llm,
)
self.logger.log(
f"🚀 Crew '{event.crew_name}' started test",
event.timestamp,
)
@crewai_event_bus.on(CrewTestCompletedEvent)
def on_crew_test_completed(source, event: CrewTestCompletedEvent):
self.logger.log(
f"✅ Crew '{event.crew_name}' completed test",
event.timestamp,
)
@crewai_event_bus.on(CrewTestFailedEvent)
def on_crew_test_failed(source, event: CrewTestFailedEvent):
self.logger.log(
f"❌ Crew '{event.crew_name}' failed test",
event.timestamp,
)
@crewai_event_bus.on(CrewTrainStartedEvent)
def on_crew_train_started(source, event: CrewTrainStartedEvent):
self.logger.log(
f"📋 Crew '{event.crew_name}' started train",
event.timestamp,
)
@crewai_event_bus.on(CrewTrainCompletedEvent)
def on_crew_train_completed(source, event: CrewTrainCompletedEvent):
self.logger.log(
f"✅ Crew '{event.crew_name}' completed train",
event.timestamp,
)
@crewai_event_bus.on(CrewTrainFailedEvent)
def on_crew_train_failed(source, event: CrewTrainFailedEvent):
self.logger.log(
f"❌ Crew '{event.crew_name}' failed train",
event.timestamp,
)
# ----------- TASK EVENTS -----------
@crewai_event_bus.on(TaskStartedEvent)
def on_task_started(source, event: TaskStartedEvent):
source._execution_span = self._telemetry.task_started(
crew=source.agent.crew, task=source
)
self.logger.log(
f"📋 Task started: {source.description}",
event.timestamp,
)
@crewai_event_bus.on(TaskCompletedEvent)
def on_task_completed(source, event: TaskCompletedEvent):
if source._execution_span:
self._telemetry.task_ended(
source._execution_span, source, source.agent.crew
)
self.logger.log(
f"✅ Task completed: {source.description}",
event.timestamp,
)
source._execution_span = None
@crewai_event_bus.on(TaskFailedEvent)
def on_task_failed(source, event: TaskFailedEvent):
if source._execution_span:
if source.agent and source.agent.crew:
self._telemetry.task_ended(
source._execution_span, source, source.agent.crew
)
source._execution_span = None
self.logger.log(
f"❌ Task failed: {source.description}",
event.timestamp,
)
# ----------- AGENT EVENTS -----------
@crewai_event_bus.on(AgentExecutionStartedEvent)
def on_agent_execution_started(source, event: AgentExecutionStartedEvent):
self.logger.log(
f"🤖 Agent '{event.agent.role}' started task",
event.timestamp,
)
@crewai_event_bus.on(AgentExecutionCompletedEvent)
def on_agent_execution_completed(source, event: AgentExecutionCompletedEvent):
self.logger.log(
f"✅ Agent '{event.agent.role}' completed task",
event.timestamp,
)
# ----------- FLOW EVENTS -----------
@crewai_event_bus.on(FlowCreatedEvent)
def on_flow_created(source, event: FlowCreatedEvent):
self._telemetry.flow_creation_span(self.__class__.__name__)
self.logger.log(
f"🌊 Flow Created: '{event.flow_name}'",
event.timestamp,
)
@crewai_event_bus.on(FlowStartedEvent)
def on_flow_started(source, event: FlowStartedEvent):
self._telemetry.flow_execution_span(
source.__class__.__name__, list(source._methods.keys())
)
self.logger.log(
f"🤖 Flow Started: '{event.flow_name}'",
event.timestamp,
)
@crewai_event_bus.on(FlowFinishedEvent)
def on_flow_finished(source, event: FlowFinishedEvent):
self.logger.log(
f"👍 Flow Finished: '{event.flow_name}'",
event.timestamp,
)
@crewai_event_bus.on(MethodExecutionStartedEvent)
def on_method_execution_started(source, event: MethodExecutionStartedEvent):
self.logger.log(
f"🤖 Flow Method Started: '{event.method_name}'",
event.timestamp,
)
@crewai_event_bus.on(MethodExecutionFailedEvent)
def on_method_execution_failed(source, event: MethodExecutionFailedEvent):
self.logger.log(
f"❌ Flow Method Failed: '{event.method_name}'",
event.timestamp,
)
@crewai_event_bus.on(MethodExecutionFinishedEvent)
def on_method_execution_finished(source, event: MethodExecutionFinishedEvent):
self.logger.log(
f"👍 Flow Method Finished: '{event.method_name}'",
event.timestamp,
)
# ----------- TOOL USAGE EVENTS -----------
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_usage_started(source, event: ToolUsageStartedEvent):
self.logger.log(
f"🤖 Tool Usage Started: '{event.tool_name}'",
event.timestamp,
)
@crewai_event_bus.on(ToolUsageFinishedEvent)
def on_tool_usage_finished(source, event: ToolUsageFinishedEvent):
self.logger.log(
f"✅ Tool Usage Finished: '{event.tool_name}'",
event.timestamp,
#
)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_tool_usage_error(source, event: ToolUsageErrorEvent):
self.logger.log(
f"❌ Tool Usage Error: '{event.tool_name}'",
event.timestamp,
#
)
event_listener = EventListener()

View File

@@ -1,61 +0,0 @@
from typing import Union
from .agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
from .crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from .flow_events import (
FlowFinishedEvent,
FlowStartedEvent,
MethodExecutionFailedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from .task_events import (
TaskCompletedEvent,
TaskFailedEvent,
TaskStartedEvent,
)
from .tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
EventTypes = Union[
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewTestStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTrainStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
AgentExecutionStartedEvent,
AgentExecutionCompletedEvent,
TaskStartedEvent,
TaskCompletedEvent,
TaskFailedEvent,
FlowStartedEvent,
FlowFinishedEvent,
MethodExecutionStartedEvent,
MethodExecutionFinishedEvent,
MethodExecutionFailedEvent,
AgentExecutionErrorEvent,
ToolUsageFinishedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
]

View File

@@ -1,71 +0,0 @@
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel
from .base_events import CrewEvent
class FlowEvent(CrewEvent):
"""Base class for all flow events"""
type: str
flow_name: str
class FlowStartedEvent(FlowEvent):
"""Event emitted when a flow starts execution"""
flow_name: str
inputs: Optional[Dict[str, Any]] = None
type: str = "flow_started"
class FlowCreatedEvent(FlowEvent):
"""Event emitted when a flow is created"""
flow_name: str
type: str = "flow_created"
class MethodExecutionStartedEvent(FlowEvent):
"""Event emitted when a flow method starts execution"""
flow_name: str
method_name: str
state: Union[Dict[str, Any], BaseModel]
params: Optional[Dict[str, Any]] = None
type: str = "method_execution_started"
class MethodExecutionFinishedEvent(FlowEvent):
"""Event emitted when a flow method completes execution"""
flow_name: str
method_name: str
result: Any = None
state: Union[Dict[str, Any], BaseModel]
type: str = "method_execution_finished"
class MethodExecutionFailedEvent(FlowEvent):
"""Event emitted when a flow method fails execution"""
flow_name: str
method_name: str
error: Any
type: str = "method_execution_failed"
class FlowFinishedEvent(FlowEvent):
"""Event emitted when a flow completes execution"""
flow_name: str
result: Optional[Any] = None
type: str = "flow_finished"
class FlowPlotEvent(FlowEvent):
"""Event emitted when a flow plot is created"""
flow_name: str
type: str = "flow_plot"

View File

@@ -1,32 +0,0 @@
from typing import Any, Optional
from crewai.tasks.task_output import TaskOutput
from crewai.utilities.events.base_events import CrewEvent
class TaskStartedEvent(CrewEvent):
"""Event emitted when a task starts"""
type: str = "task_started"
context: Optional[str]
class TaskCompletedEvent(CrewEvent):
"""Event emitted when a task completes"""
output: TaskOutput
type: str = "task_completed"
class TaskFailedEvent(CrewEvent):
"""Event emitted when a task fails"""
error: str
type: str = "task_failed"
class TaskEvaluationEvent(CrewEvent):
"""Event emitted when a task evaluation is completed"""
type: str = "task_evaluation"
evaluation_type: str

View File

@@ -1 +0,0 @@
from .agentops_listener import agentops_listener

View File

@@ -1,67 +0,0 @@
from typing import Optional
from crewai.utilities.events import (
CrewKickoffCompletedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.utilities.events.crew_events import CrewKickoffStartedEvent
from crewai.utilities.events.task_events import TaskEvaluationEvent
try:
import agentops
AGENTOPS_INSTALLED = True
except ImportError:
AGENTOPS_INSTALLED = False
class AgentOpsListener(BaseEventListener):
tool_event: Optional["agentops.ToolEvent"] = None
session: Optional["agentops.Session"] = None
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
if not AGENTOPS_INSTALLED:
return
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_kickoff_started(source, event: CrewKickoffStartedEvent):
self.session = agentops.init()
for agent in source.agents:
if self.session:
self.session.create_agent(
name=agent.role,
agent_id=str(agent.id),
)
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_kickoff_completed(source, event: CrewKickoffCompletedEvent):
if self.session:
self.session.end_session(
end_state="Success",
end_state_reason="Finished Execution",
)
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_usage_started(source, event: ToolUsageStartedEvent):
self.tool_event = agentops.ToolEvent(name=event.tool_name)
if self.session:
self.session.record(self.tool_event)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_tool_usage_error(source, event: ToolUsageErrorEvent):
agentops.ErrorEvent(exception=event.error, trigger_event=self.tool_event)
@crewai_event_bus.on(TaskEvaluationEvent)
def on_task_evaluation(source, event: TaskEvaluationEvent):
if self.session:
self.session.create_agent(
name="Task Evaluator", agent_id=str(source.original_agent.id)
)
agentops_listener = AgentOpsListener()

View File

@@ -1,64 +0,0 @@
from datetime import datetime
from typing import Any, Callable, Dict
from .base_events import CrewEvent
class ToolUsageEvent(CrewEvent):
"""Base event for tool usage tracking"""
agent_key: str
agent_role: str
tool_name: str
tool_args: Dict[str, Any] | str
tool_class: str
run_attempts: int | None = None
delegations: int | None = None
model_config = {"arbitrary_types_allowed": True}
class ToolUsageStartedEvent(ToolUsageEvent):
"""Event emitted when a tool execution is started"""
type: str = "tool_usage_started"
class ToolUsageFinishedEvent(ToolUsageEvent):
"""Event emitted when a tool execution is completed"""
started_at: datetime
finished_at: datetime
from_cache: bool = False
type: str = "tool_usage_finished"
class ToolUsageErrorEvent(ToolUsageEvent):
"""Event emitted when a tool execution encounters an error"""
error: Any
type: str = "tool_usage_error"
class ToolValidateInputErrorEvent(ToolUsageEvent):
"""Event emitted when a tool input validation encounters an error"""
error: Any
type: str = "tool_validate_input_error"
class ToolSelectionErrorEvent(ToolUsageEvent):
"""Event emitted when a tool selection encounters an error"""
error: Any
type: str = "tool_selection_error"
class ToolExecutionErrorEvent(CrewEvent):
"""Event emitted when a tool execution encounters an error"""
error: Any
type: str = "tool_execution_error"
tool_name: str
tool_args: Dict[str, Any]
tool_class: Callable
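
A sketch of building and emitting one of the events above by hand, e.g. from a test; all five base fields of `ToolUsageEvent` are required, plus the two timestamps for the finished variant:

```python
from datetime import datetime

from crewai.utilities.events.crewai_event_bus import crewai_event_bus
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent

event = ToolUsageFinishedEvent(
    agent_key="agent-1",
    agent_role="Researcher",
    tool_name="multiplier",
    tool_args={"first_number": 3, "second_number": 4},
    tool_class="MultiplierTool",
    started_at=datetime.now(),
    finished_at=datetime.now(),
)
crewai_event_bus.emit(source=None, event=event)
```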

View File

@@ -1,63 +1,29 @@
import json
import os import os
import pickle import pickle
from datetime import datetime from datetime import datetime
from typing import Union
class FileHandler: class FileHandler:
"""Handler for file operations supporting both JSON and text-based logging. """take care of file operations, currently it only logs messages to a file"""
Args: def __init__(self, file_path):
file_path (Union[bool, str]): Path to the log file or boolean flag if isinstance(file_path, bool):
"""
def __init__(self, file_path: Union[bool, str]):
self._initialize_path(file_path)
def _initialize_path(self, file_path: Union[bool, str]):
if file_path is True: # File path is boolean True
self._path = os.path.join(os.curdir, "logs.txt") self._path = os.path.join(os.curdir, "logs.txt")
elif isinstance(file_path, str):
elif isinstance(file_path, str): # File path is a string self._path = file_path
if file_path.endswith((".json", ".txt")):
self._path = file_path # No modification if the file ends with .json or .txt
else: else:
self._path = file_path + ".txt" # Append .txt if the file doesn't end with .json or .txt raise ValueError("file_path must be either a boolean or a string.")
else:
raise ValueError("file_path must be a string or boolean.") # Handle the case where file_path isn't valid
def log(self, **kwargs): def log(self, **kwargs):
try:
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S") now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
log_entry = {"timestamp": now, **kwargs} message = (
f"{now}: "
if self._path.endswith(".json"): + ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
# Append log in JSON format + "\n"
)
with open(self._path, "a", encoding="utf-8") as file: with open(self._path, "a", encoding="utf-8") as file:
# If the file is empty, start with a list; else, append to it file.write(message + "\n")
try:
# Try reading existing content to avoid overwriting
with open(self._path, "r", encoding="utf-8") as read_file:
existing_data = json.load(read_file)
existing_data.append(log_entry)
except (json.JSONDecodeError, FileNotFoundError):
# If no valid JSON or file doesn't exist, start with an empty list
existing_data = [log_entry]
with open(self._path, "w", encoding="utf-8") as write_file:
json.dump(existing_data, write_file, indent=4)
write_file.write("\n")
else:
# Append log in plain text format
message = f"{now}: " + ", ".join([f"{key}=\"{value}\"" for key, value in kwargs.items()]) + "\n"
with open(self._path, "a", encoding="utf-8") as file:
file.write(message)
except Exception as e:
raise ValueError(f"Failed to log message: {str(e)}")
class PickleHandler: class PickleHandler:
def __init__(self, file_name: str) -> None: def __init__(self, file_name: str) -> None:
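
The left-hand `log()` above branches on the file extension and, for `.json` targets, rewrites the file as one growing JSON array. A standalone sketch of that append-to-array behaviour:

```python
import json
from datetime import datetime


def log_json(path: str, **kwargs) -> None:
    entry = {"timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"), **kwargs}
    try:
        # Read existing entries so the file stays one valid JSON list.
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        data = []
    data.append(entry)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=4)


log_json("logs.json", task="demo", status="completed")
```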

View File

@@ -24,10 +24,12 @@ def create_llm(
# 1) If llm_value is already an LLM object, return it directly # 1) If llm_value is already an LLM object, return it directly
if isinstance(llm_value, LLM): if isinstance(llm_value, LLM):
print("LLM value is already an LLM object")
return llm_value return llm_value
# 2) If llm_value is a string (model name) # 2) If llm_value is a string (model name)
if isinstance(llm_value, str): if isinstance(llm_value, str):
print("LLM value is a string")
try: try:
created_llm = LLM(model=llm_value) created_llm = LLM(model=llm_value)
return created_llm return created_llm
@@ -37,10 +39,12 @@ def create_llm(
# 3) If llm_value is None, parse environment variables or use default # 3) If llm_value is None, parse environment variables or use default
if llm_value is None: if llm_value is None:
print("LLM value is None")
return _llm_via_environment_or_fallback() return _llm_via_environment_or_fallback()
# 4) Otherwise, attempt to extract relevant attributes from an unknown object # 4) Otherwise, attempt to extract relevant attributes from an unknown object
try: try:
print("LLM value is an unknown object")
# Extract attributes with explicit types # Extract attributes with explicit types
model = ( model = (
getattr(llm_value, "model_name", None) getattr(llm_value, "model_name", None)
@@ -53,7 +57,6 @@ def create_llm(
timeout: Optional[float] = getattr(llm_value, "timeout", None) timeout: Optional[float] = getattr(llm_value, "timeout", None)
api_key: Optional[str] = getattr(llm_value, "api_key", None) api_key: Optional[str] = getattr(llm_value, "api_key", None)
base_url: Optional[str] = getattr(llm_value, "base_url", None) base_url: Optional[str] = getattr(llm_value, "base_url", None)
api_base: Optional[str] = getattr(llm_value, "api_base", None)
created_llm = LLM( created_llm = LLM(
model=model, model=model,
@@ -63,7 +66,6 @@ def create_llm(
timeout=timeout, timeout=timeout,
api_key=api_key, api_key=api_key,
base_url=base_url, base_url=base_url,
api_base=api_base,
) )
return created_llm return created_llm
except Exception as e: except Exception as e:
@@ -103,18 +105,8 @@ def _llm_via_environment_or_fallback() -> Optional[LLM]:
callbacks: List[Any] = [] callbacks: List[Any] = []
# Optional base URL from env # Optional base URL from env
base_url = ( api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get("OPENAI_BASE_URL")
os.environ.get("BASE_URL") if api_base:
or os.environ.get("OPENAI_API_BASE")
or os.environ.get("OPENAI_BASE_URL")
)
api_base = os.environ.get("API_BASE") or os.environ.get("AZURE_API_BASE")
# Synchronize base_url and api_base if one is populated and the other is not
if base_url and not api_base:
api_base = base_url
elif api_base and not base_url:
base_url = api_base base_url = api_base
# Initialize llm_params dictionary # Initialize llm_params dictionary
@@ -127,7 +119,6 @@ def _llm_via_environment_or_fallback() -> Optional[LLM]:
"timeout": timeout, "timeout": timeout,
"api_key": api_key, "api_key": api_key,
"base_url": base_url, "base_url": base_url,
"api_base": api_base,
"api_version": api_version, "api_version": api_version,
"presence_penalty": presence_penalty, "presence_penalty": presence_penalty,
"frequency_penalty": frequency_penalty, "frequency_penalty": frequency_penalty,

View File

@@ -8,11 +8,8 @@ from crewai.utilities.printer import Printer
class Logger(BaseModel): class Logger(BaseModel):
verbose: bool = Field(default=False) verbose: bool = Field(default=False)
_printer: Printer = PrivateAttr(default_factory=Printer) _printer: Printer = PrivateAttr(default_factory=Printer)
default_color: str = Field(default="bold_yellow")
def log(self, level, message, color=None): def log(self, level, message, color="bold_yellow"):
if color is None:
color = self.default_color
if self.verbose: if self.verbose:
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self._printer.print( self._printer.print(
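
On the left-hand side, the color fallback moves from a hard-coded parameter default to a configurable `default_color` field. A short usage sketch of that version:

```python
logger = Logger(verbose=True, default_color="bold_blue")
logger.log("info", "crew started")                       # uses bold_blue
logger.log("warning", "slow task", color="bold_yellow")  # explicit override
```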

View File

@@ -1,12 +0,0 @@
from typing import Any, Protocol, runtime_checkable
@runtime_checkable
class AgentExecutorProtocol(Protocol):
"""Protocol defining the expected interface for an agent executor."""
@property
def agent(self) -> Any: ...
@property
def task(self) -> Any: ...

View File

@@ -1,5 +1,3 @@
import os
from crewai.utilities.file_handler import PickleHandler from crewai.utilities.file_handler import PickleHandler
@@ -31,8 +29,3 @@ class CrewTrainingHandler(PickleHandler):
data[agent_id] = {train_iteration: new_data} data[agent_id] = {train_iteration: new_data}
self.save(data) self.save(data)
def clear(self) -> None:
"""Clear the training data by removing the file or resetting its contents."""
if os.path.exists(self.file_path):
self.save({})

View File

@@ -8,17 +8,16 @@ import pytest
from crewai import Agent, Crew, Task from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM from crewai.llm import LLM
from crewai.tools import tool from crewai.tools import tool
from crewai.tools.tool_calling import InstructorToolCalling from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController from crewai.utilities import RPMController
from crewai.utilities.events import crewai_event_bus from crewai.utilities.events import Emitter
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent
def test_agent_llm_creation_with_env_vars(): def test_agent_llm_creation_with_env_vars():
@@ -154,19 +153,15 @@ def test_agent_execution_with_tools():
agent=agent, agent=agent,
expected_output="The result of the multiplication.", expected_output="The result of the multiplication.",
) )
received_events = [] with patch.object(Emitter, "emit") as emit:
@crewai_event_bus.on(ToolUsageFinishedEvent)
def handle_tool_end(source, event):
received_events.append(event)
output = agent.execute_task(task) output = agent.execute_task(task)
assert output == "The result of the multiplication is 12." assert output == "The result of the multiplication is 12."
assert emit.call_count == 1
assert len(received_events) == 1 args, _ = emit.call_args
assert isinstance(received_events[0], ToolUsageFinishedEvent) assert isinstance(args[1], ToolUsageFinished)
assert received_events[0].tool_name == "multiplier" assert not args[1].from_cache
assert received_events[0].tool_args == {"first_number": 3, "second_number": 4} assert args[1].tool_name == "multiplier"
assert args[1].tool_args == {"first_number": 3, "second_number": 4}
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr(filter_headers=["authorization"])
@@ -253,14 +248,10 @@ def test_cache_hitting():
"multiplier-{'first_number': 3, 'second_number': 3}": 9, "multiplier-{'first_number': 3, 'second_number': 3}": 9,
"multiplier-{'first_number': 12, 'second_number': 3}": 36, "multiplier-{'first_number': 12, 'second_number': 3}": 36,
} }
received_events = []
@crewai_event_bus.on(ToolUsageFinishedEvent)
def handle_tool_end(source, event):
received_events.append(event)
with ( with (
patch.object(CacheHandler, "read") as read, patch.object(CacheHandler, "read") as read,
patch.object(Emitter, "emit") as emit,
): ):
read.return_value = "0" read.return_value = "0"
task = Task( task = Task(
@@ -273,9 +264,10 @@ def test_cache_hitting():
read.assert_called_with( read.assert_called_with(
tool="multiplier", input={"first_number": 2, "second_number": 6} tool="multiplier", input={"first_number": 2, "second_number": 6}
) )
assert len(received_events) == 1 assert emit.call_count == 1
assert isinstance(received_events[0], ToolUsageFinishedEvent) args, _ = emit.call_args
assert received_events[0].from_cache assert isinstance(args[1], ToolUsageFinished)
assert args[1].from_cache
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr(filter_headers=["authorization"])
@@ -915,8 +907,6 @@ def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr(filter_headers=["authorization"])
def test_tool_usage_information_is_appended_to_agent(): def test_tool_usage_information_is_appended_to_agent():
from datetime import UTC, datetime
from crewai.tools import BaseTool from crewai.tools import BaseTool
class MyCustomTool(BaseTool): class MyCustomTool(BaseTool):
@@ -926,11 +916,6 @@ def test_tool_usage_information_is_appended_to_agent():
def _run(self) -> str: def _run(self) -> str:
return "Howdy!" return "Howdy!"
fixed_datetime = datetime(2025, 2, 10, 12, 0, 0, tzinfo=UTC)
with patch("datetime.datetime") as mock_datetime:
mock_datetime.now.return_value = fixed_datetime
mock_datetime.side_effect = lambda *args, **kw: datetime(*args, **kw)
agent1 = Agent( agent1 = Agent(
role="Friendly Neighbor", role="Friendly Neighbor",
goal="Make everyone feel welcome", goal="Make everyone feel welcome",
@@ -953,7 +938,6 @@ def test_tool_usage_information_is_appended_to_agent():
"tool_name": "Decide Greetings", "tool_name": "Decide Greetings",
"tool_args": {}, "tool_args": {},
"result_as_answer": True, "result_as_answer": True,
"start_time": fixed_datetime,
} }
] ]
@@ -998,35 +982,23 @@ def test_agent_human_input():
# Side effect function for _ask_human_input to simulate multiple feedback iterations # Side effect function for _ask_human_input to simulate multiple feedback iterations
feedback_responses = iter( feedback_responses = iter(
[ [
"Don't say hi, say Hello instead!", # First feedback: instruct change "Don't say hi, say Hello instead!", # First feedback
"", # Second feedback: empty string signals acceptance "looks good", # Second feedback to exit loop
] ]
) )
def ask_human_input_side_effect(*args, **kwargs): def ask_human_input_side_effect(*args, **kwargs):
return next(feedback_responses) return next(feedback_responses)
# Patch both _ask_human_input and _invoke_loop to avoid real API/network calls. with patch.object(
with ( CrewAgentExecutor, "_ask_human_input", side_effect=ask_human_input_side_effect
patch.object( ) as mock_human_input:
CrewAgentExecutor,
"_ask_human_input",
side_effect=ask_human_input_side_effect,
) as mock_human_input,
patch.object(
CrewAgentExecutor,
"_invoke_loop",
return_value=AgentFinish(output="Hello", thought="", text=""),
) as mock_invoke_loop,
):
# Execute the task # Execute the task
output = agent.execute_task(task) output = agent.execute_task(task)
# Assertions to ensure the agent behaves correctly. # Assertions to ensure the agent behaves correctly
# It should have requested feedback twice. assert mock_human_input.call_count == 2 # Should have asked for feedback twice
assert mock_human_input.call_count == 2 assert output.strip().lower() == "hello" # Final output should be 'Hello'
# The final result should be processed to "Hello"
assert output.strip().lower() == "hello"
def test_interpolate_inputs(): def test_interpolate_inputs():
@@ -1210,7 +1182,7 @@ def test_agent_max_retry_limit():
[ [
mock.call( mock.call(
{ {
"input": "Say the word: Hi\n\nThis is the expected criteria for your final answer: The word: Hi\nyou MUST return the actual complete content as the final answer, not a summary.", "input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi\nyou MUST return the actual complete content as the final answer, not a summary.",
"tool_names": "", "tool_names": "",
"tools": "", "tools": "",
"ask_for_human_input": True, "ask_for_human_input": True,
@@ -1218,7 +1190,7 @@ def test_agent_max_retry_limit():
), ),
mock.call( mock.call(
{ {
"input": "Say the word: Hi\n\nThis is the expected criteria for your final answer: The word: Hi\nyou MUST return the actual complete content as the final answer, not a summary.", "input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi\nyou MUST return the actual complete content as the final answer, not a summary.",
"tool_names": "", "tool_names": "",
"tools": "", "tools": "",
"ask_for_human_input": True, "ask_for_human_input": True,
@@ -1628,181 +1600,3 @@ def test_agent_with_knowledge_sources():
# Assert that the agent provides the correct information # Assert that the agent provides the correct information
assert "red" in result.raw.lower() assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_works_with_copy():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch(
"crewai.knowledge.source.base_knowledge_source.BaseKnowledgeSource",
autospec=True,
) as MockKnowledgeSource:
mock_knowledge_source_instance = MockKnowledgeSource.return_value
mock_knowledge_source_instance.__class__ = BaseKnowledgeSource
mock_knowledge_source_instance.sources = [string_source]
agent = Agent(
role="Information Agent",
goal="Provide information based on knowledge sources",
backstory="You have access to specific knowledge sources.",
llm=LLM(model="gpt-4o-mini"),
knowledge_sources=[string_source],
)
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledgeStorage:
mock_knowledge_storage = MockKnowledgeStorage.return_value
agent.knowledge_storage = mock_knowledge_storage
agent_copy = agent.copy()
assert agent_copy.role == agent.role
assert agent_copy.goal == agent.goal
assert agent_copy.backstory == agent.backstory
assert agent_copy.knowledge_sources is not None
assert len(agent_copy.knowledge_sources) == 1
assert isinstance(agent_copy.knowledge_sources[0], StringKnowledgeSource)
assert agent_copy.knowledge_sources[0].content == content
assert isinstance(agent_copy.llm, LLM)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_litellm_auth_error_handling():
"""Test that LiteLLM authentication errors are handled correctly and not retried."""
from litellm import AuthenticationError as LiteLLMAuthenticationError
# Create an agent with a mocked LLM and max_retry_limit=0
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4"),
max_retry_limit=0, # Disable retries for authentication errors
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(LiteLLMAuthenticationError, match="Invalid API key"),
):
mock_llm_call.side_effect = LiteLLMAuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
agent.execute_task(task)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
def test_crew_agent_executor_litellm_auth_error():
"""Test that CrewAgentExecutor handles LiteLLM authentication errors by raising them."""
from litellm.exceptions import AuthenticationError
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import Printer
# Create an agent and executor
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4", api_key="invalid_api_key"),
)
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Create executor with all required parameters
executor = CrewAgentExecutor(
agent=agent,
task=task,
llm=agent.llm,
crew=None,
prompt={"system": "You are a test agent", "user": "Execute the task: {input}"},
max_iter=5,
tools=[],
tools_names="",
stop_words=[],
tools_description="",
tools_handler=ToolsHandler(),
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
patch.object(Printer, "print") as mock_printer,
pytest.raises(AuthenticationError) as exc_info,
):
mock_llm_call.side_effect = AuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
executor.invoke(
{
"input": "test input",
"tool_names": "",
"tools": "",
}
)
# Verify error handling messages
error_message = f"Error during LLM call: {str(mock_llm_call.side_effect)}"
mock_printer.assert_any_call(
content=error_message,
color="red",
)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
# Assert that the exception was raised and has the expected attributes
assert exc_info.type is AuthenticationError
assert "Invalid API key".lower() in exc_info.value.message.lower()
assert exc_info.value.llm_provider == "openai"
assert exc_info.value.model == "gpt-4"
def test_litellm_anthropic_error_handling():
"""Test that AnthropicError from LiteLLM is handled correctly and not retried."""
from litellm.llms.anthropic.common_utils import AnthropicError
# Create an agent with a mocked LLM that uses an Anthropic model
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="claude-3.5-sonnet-20240620"),
max_retry_limit=0,
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AnthropicError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(AnthropicError, match="Test Anthropic error"),
):
mock_llm_call.side_effect = AnthropicError(
status_code=500,
message="Test Anthropic error",
)
agent.execute_task(task)
# Verify the LLM call was only made once (no retries)
mock_llm_call.assert_called_once()

View File

@@ -2,21 +2,21 @@ interactions:
- request: - request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Arguments: {}\nTool Description: Get the final answer but don''t give it yet, Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format answer but don''t give it yet, just re-use this tool non-stop. \nTool
in your response:\n\n```\nThought: you should always think about what to do\nAction: Arguments: {}\n\nUse the following format:\n\nThought: you should always think
the action to take, only one name of [get_final_answer], just the name, exactly about what to do\nAction: the action to take, only one name of [get_final_answer],
as it''s written.\nAction Input: the input to the action, just a simple JSON just the name, exactly as it''s written.\nAction Input: the input to the action,
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: just a simple python dictionary, enclosed in curly braces, using \" to wrap
the result of the action\n```\n\nOnce all necessary information is gathered, keys and values.\nObservation: the result of the action\n\nOnce all necessary
return the following format:\n\n```\nThought: I now know the final answer\nFinal information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
Answer: the final answer to the original input question\n```"}, {"role": "user", the final answer to the original input question\n"}, {"role": "user", "content":
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
criteria for your final answer: The final answer\nyou MUST return the actual for your final answer: The final answer\nyou MUST return the actual complete
complete content as the final answer, not a summary.\n\nBegin! This is VERY content as the final answer, not a summary.\n\nBegin! This is VERY important
important to you, use the tools available and give your best Final Answer, your to you, use the tools available and give your best Final Answer, your job depends
job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}' on it!\n\nThought:"}], "model": "gpt-4o"}'
headers: headers:
accept: accept:
- application/json - application/json
@@ -25,13 +25,16 @@ interactions:
connection: connection:
- keep-alive - keep-alive
content-length: content-length:
- '1367' - '1325'
content-type: content-type:
- application/json - application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host: host:
- api.openai.com - api.openai.com
user-agent: user-agent:
- OpenAI/Python 1.59.6 - OpenAI/Python 1.47.0
x-stainless-arch: x-stainless-arch:
- arm64 - arm64
x-stainless-async: x-stainless-async:
@@ -41,35 +44,30 @@ interactions:
x-stainless-os: x-stainless-os:
- MacOS - MacOS
x-stainless-package-version: x-stainless-package-version:
- 1.59.6 - 1.47.0
x-stainless-raw-response: x-stainless-raw-response:
- 'true' - 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime: x-stainless-runtime:
- CPython - CPython
x-stainless-runtime-version: x-stainless-runtime-version:
- 3.12.7 - 3.11.7
method: POST method: POST
uri: https://api.openai.com/v1/chat/completions uri: https://api.openai.com/v1/chat/completions
response: response:
content: "{\n \"id\": \"chatcmpl-AsXdf4OZKCZSigmN4k0gyh67NciqP\",\n \"object\": content: "{\n \"id\": \"chatcmpl-ABAtOWmVjvzQ9X58tKAUcOF4gmXwx\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562383,\n \"model\": \"gpt-4o-2024-08-06\",\n \"chat.completion\",\n \"created\": 1727226842,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I have to use the available \"assistant\",\n \"content\": \"Thought: I need to use the get_final_answer
tool to get the final answer. Let's proceed with executing it.\\nAction: get_final_answer\\nAction tool to determine the final answer.\\nAction: get_final_answer\\nAction Input:
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 274,\n \"completion_tokens\":
274,\n \"completion_tokens\": 33,\n \"total_tokens\": 307,\n \"prompt_tokens_details\": 27,\n \"total_tokens\": 301,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
headers: headers:
CF-Cache-Status: CF-Cache-Status:
- DYNAMIC - DYNAMIC
CF-RAY: CF-RAY:
- 9060d43e3be1d690-IAD - 8c8727b3492f31e6-MIA
Connection: Connection:
- keep-alive - keep-alive
Content-Encoding: Content-Encoding:
@@ -77,27 +75,19 @@ interactions:
Content-Type: Content-Type:
- application/json - application/json
Date: Date:
- Wed, 22 Jan 2025 16:13:03 GMT - Wed, 25 Sep 2024 01:14:03 GMT
Server: Server:
- cloudflare - cloudflare
Set-Cookie:
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
path=/; expires=Wed, 22-Jan-25 16:43:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding: Transfer-Encoding:
- chunked - chunked
X-Content-Type-Options: X-Content-Type-Options:
- nosniff - nosniff
access-control-expose-headers: access-control-expose-headers:
- X-Request-ID - X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization: openai-organization:
- crewai-iuxna1 - crewai-iuxna1
openai-processing-ms: openai-processing-ms:
- '791' - '348'
openai-version: openai-version:
- '2020-10-01' - '2020-10-01'
strict-transport-security: strict-transport-security:
@@ -109,59 +99,45 @@ interactions:
x-ratelimit-remaining-requests: x-ratelimit-remaining-requests:
- '9999' - '9999'
x-ratelimit-remaining-tokens: x-ratelimit-remaining-tokens:
- '29999680' - '29999682'
x-ratelimit-reset-requests: x-ratelimit-reset-requests:
- 6ms - 6ms
x-ratelimit-reset-tokens: x-ratelimit-reset-tokens:
- 0s - 0s
x-request-id: x-request-id:
- req_eeed99acafd3aeb1e3d4a6c8063192b0 - req_be929caac49706f487950548bdcdd46e
http_version: HTTP/1.1 http_version: HTTP/1.1
status_code: 200 status_code: 200
- request: - request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Arguments: {}\nTool Description: Get the final answer but don''t give it yet, Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format answer but don''t give it yet, just re-use this tool non-stop. \nTool
in your response:\n\n```\nThought: you should always think about what to do\nAction: Arguments: {}\n\nUse the following format:\n\nThought: you should always think
the action to take, only one name of [get_final_answer], just the name, exactly about what to do\nAction: the action to take, only one name of [get_final_answer],
as it''s written.\nAction Input: the input to the action, just a simple JSON just the name, exactly as it''s written.\nAction Input: the input to the action,
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: just a simple python dictionary, enclosed in curly braces, using \" to wrap
the result of the action\n```\n\nOnce all necessary information is gathered, keys and values.\nObservation: the result of the action\n\nOnce all necessary
return the following format:\n\n```\nThought: I now know the final answer\nFinal information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
Answer: the final answer to the original input question\n```"}, {"role": "user", the final answer to the original input question\n"}, {"role": "user", "content":
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
criteria for your final answer: The final answer\nyou MUST return the actual for your final answer: The final answer\nyou MUST return the actual complete
complete content as the final answer, not a summary.\n\nBegin! This is VERY content as the final answer, not a summary.\n\nBegin! This is VERY important
important to you, use the tools available and give your best Final Answer, your to you, use the tools available and give your best Final Answer, your job depends
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "```\nThought: on it!\n\nThought:"}, {"role": "user", "content": "Thought: I need to use the
I have to use the available tool to get the final answer. Let''s proceed with get_final_answer tool to determine the final answer.\nAction: get_final_answer\nAction
executing it.\nAction: get_final_answer\nAction Input: {}\nObservation: I encountered
an error: Error on parsing tool.\nMoving on then. I MUST either use a tool (use
one at time) OR give my best final answer not both at the same time. When responding,
I must use the following format:\n\n```\nThought: you should always think about
what to do\nAction: the action to take, should be one of [get_final_answer]\nAction
Input: the input to the action, dictionary enclosed in curly braces\nObservation:
the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat
N times. Once I know the final answer, I must return the following format:\n\n```\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described\n\n```"}, {"role":
"assistant", "content": "```\nThought: I have to use the available tool to get
the final answer. Let''s proceed with executing it.\nAction: get_final_answer\nAction
Input: {}\nObservation: I encountered an error: Error on parsing tool.\nMoving Input: {}\nObservation: I encountered an error: Error on parsing tool.\nMoving
on then. I MUST either use a tool (use one at time) OR give my best final answer on then. I MUST either use a tool (use one at time) OR give my best final answer
not both at the same time. When responding, I must use the following format:\n\n```\nThought: not both at the same time. To Use the following format:\n\nThought: you should
you should always think about what to do\nAction: the action to take, should always think about what to do\nAction: the action to take, should be one of
be one of [get_final_answer]\nAction Input: the input to the action, dictionary [get_final_answer]\nAction Input: the input to the action, dictionary enclosed
enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action
Input/Result can repeat N times. Once I know the final answer, I must return Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal
the following format:\n\n```\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible, Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described\n\n```\nNow it''s time you MUST give your absolute it must be outcome described\n\n \nNow it''s time you MUST give your absolute
best final answer. You''ll ignore all previous instructions, stop using any best final answer. You''ll ignore all previous instructions, stop using any
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o", tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o"}'
"stop": ["\nObservation:"]}'
headers: headers:
accept: accept:
- application/json - application/json
@@ -170,16 +146,16 @@ interactions:
connection: connection:
- keep-alive - keep-alive
content-length: content-length:
- '3445' - '2320'
content-type: content-type:
- application/json - application/json
cookie: cookie:
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g; - _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
_cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000 __cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host: host:
- api.openai.com - api.openai.com
user-agent: user-agent:
- OpenAI/Python 1.59.6 - OpenAI/Python 1.47.0
x-stainless-arch: x-stainless-arch:
- arm64 - arm64
x-stainless-async: x-stainless-async:
@@ -189,36 +165,29 @@ interactions:
x-stainless-os: x-stainless-os:
- MacOS - MacOS
x-stainless-package-version: x-stainless-package-version:
- 1.59.6 - 1.47.0
x-stainless-raw-response: x-stainless-raw-response:
- 'true' - 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime: x-stainless-runtime:
- CPython - CPython
x-stainless-runtime-version: x-stainless-runtime-version:
- 3.12.7 - 3.11.7
method: POST method: POST
uri: https://api.openai.com/v1/chat/completions uri: https://api.openai.com/v1/chat/completions
response: response:
content: "{\n \"id\": \"chatcmpl-AsXdg9UrLvAiqWP979E6DszLsQ84k\",\n \"object\": content: "{\n \"id\": \"chatcmpl-ABAtPaaeRfdNsZ3k06CfAmrEW8IJu\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562384,\n \"model\": \"gpt-4o-2024-08-06\",\n \"chat.completion\",\n \"created\": 1727226843,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal \"assistant\",\n \"content\": \"Final Answer: The final answer\",\n \"refusal\":
Answer: The final answer must be the great and the most complete as possible, null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
it must be outcome described.\\n```\",\n \"refusal\": null\n },\n \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 483,\n \"completion_tokens\":
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n 6,\n \"total_tokens\": 489,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
\ \"usage\": {\n \"prompt_tokens\": 719,\n \"completion_tokens\": 35,\n 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
\ \"total_tokens\": 754,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
headers: headers:
CF-Cache-Status: CF-Cache-Status:
- DYNAMIC - DYNAMIC
CF-RAY: CF-RAY:
- 9060d4441edad690-IAD - 8c8727b9da1f31e6-MIA
Connection: Connection:
- keep-alive - keep-alive
Content-Encoding: Content-Encoding:
@@ -226,7 +195,7 @@ interactions:
Content-Type: Content-Type:
- application/json - application/json
Date: Date:
- Wed, 22 Jan 2025 16:13:05 GMT - Wed, 25 Sep 2024 01:14:03 GMT
Server: Server:
- cloudflare - cloudflare
Transfer-Encoding: Transfer-Encoding:
@@ -240,7 +209,7 @@ interactions:
openai-organization: openai-organization:
- crewai-iuxna1 - crewai-iuxna1
openai-processing-ms: openai-processing-ms:
- '928' - '188'
openai-version: openai-version:
- '2020-10-01' - '2020-10-01'
strict-transport-security: strict-transport-security:
@@ -252,13 +221,13 @@ interactions:
x-ratelimit-remaining-requests: x-ratelimit-remaining-requests:
- '9999' - '9999'
x-ratelimit-remaining-tokens: x-ratelimit-remaining-tokens:
- '29999187' - '29999445'
x-ratelimit-reset-requests: x-ratelimit-reset-requests:
- 6ms - 6ms
x-ratelimit-reset-tokens: x-ratelimit-reset-tokens:
- 1ms - 1ms
x-request-id: x-request-id:
- req_61fc7506e6db326ec572224aec81ef23 - req_d8e32538689fe064627468bad802d9a8
http_version: HTTP/1.1 http_version: HTTP/1.1
status_code: 200 status_code: 200
version: 1 version: 1

View File

@@ -0,0 +1,520 @@
interactions:
- request:
body: !!binary |
CqcXCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/hYKEgoQY3Jld2FpLnRl
bGVtZXRyeRJ5ChBuJJtOdNaB05mOW/p3915eEgj2tkAd3rZcASoQVG9vbCBVc2FnZSBFcnJvcjAB
OYa7/URvKBUYQUpcFEVvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoPCgNsbG0SCAoG
Z3B0LTRvegIYAYUBAAEAABLJBwoQifhX01E5i+5laGdALAlZBBIIBuGM1aN+OPgqDENyZXcgQ3Jl
YXRlZDABORVGruBvKBUYQaipwOBvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5w
eXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogN2U2NjA4OTg5ODU5YTY3ZWVj
ODhlZWY3ZmNlODUyMjVKMQoHY3Jld19pZBImCiRiOThiNWEwMC01YTI1LTQxMDctYjQwNS1hYmYz
MjBhOGYzYThKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAA
ShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgB
SuQCCgtjcmV3X2FnZW50cxLUAgrRAlt7ImtleSI6ICIyMmFjZDYxMWU0NGVmNWZhYzA1YjUzM2Q3
NWU4ODkzYiIsICJpZCI6ICJkNWIyMzM1YS0yMmIyLTQyZWEtYmYwNS03OTc3NmU3MmYzOTIiLCAi
cm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
MCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsi
Z2V0IGdyZWV0aW5ncyJdfV1KkgIKCmNyZXdfdGFza3MSgwIKgAJbeyJrZXkiOiAiYTI3N2IzNGIy
YzE0NmYwYzU2YzVlMTM1NmU4ZjhhNTciLCAiaWQiOiAiMjJiZWMyMzEtY2QyMS00YzU4LTgyN2Ut
MDU4MWE4ZjBjMTExIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJhZ2VudF9rZXkiOiAiMjJh
Y2Q2MTFlNDRlZjVmYWMwNWI1MzNkNzVlODg5M2IiLCAidG9vbHNfbmFtZXMiOiBbImdldCBncmVl
dGluZ3MiXX1degIYAYUBAAEAABKOAgoQ5WYoxRtTyPjge4BduhL0rRIIv2U6rvWALfwqDFRhc2sg
Q3JlYXRlZDABOX068uBvKBUYQZkv8+BvKBUYSi4KCGNyZXdfa2V5EiIKIDdlNjYwODk4OTg1OWE2
N2VlYzg4ZWVmN2ZjZTg1MjI1SjEKB2NyZXdfaWQSJgokYjk4YjVhMDAtNWEyNS00MTA3LWI0MDUt
YWJmMzIwYThmM2E4Si4KCHRhc2tfa2V5EiIKIGEyNzdiMzRiMmMxNDZmMGM1NmM1ZTEzNTZlOGY4
YTU3SjEKB3Rhc2tfaWQSJgokMjJiZWMyMzEtY2QyMS00YzU4LTgyN2UtMDU4MWE4ZjBjMTExegIY
AYUBAAEAABKQAQoQXyeDtJDFnyp2Fjk9YEGTpxIIaNE7gbhPNYcqClRvb2wgVXNhZ2UwATkaXTvj
bygVGEGvx0rjbygVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKHAoJdG9vbF9uYW1lEg8K
DUdldCBHcmVldGluZ3NKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAABLVBwoQMWfznt0qwauEzl7T
UOQxRBII9q+pUS5EdLAqDENyZXcgQ3JlYXRlZDABORONPORvKBUYQSAoS+RvKBUYShoKDmNyZXdh
aV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19r
ZXkSIgogYzMwNzYwMDkzMjY3NjE0NDRkNTdjNzFkMWRhM2YyN2NKMQoHY3Jld19pZBImCiQ3OTQw
MTkyNS1iOGU5LTQ3MDgtODUzMC00NDhhZmEzYmY4YjBKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVl
bnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVj
cmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSuoCCgtjcmV3X2FnZW50cxLaAgrXAlt7ImtleSI6ICI5
OGYzYjFkNDdjZTk2OWNmMDU3NzI3Yjc4NDE0MjVjZCIsICJpZCI6ICI5OTJkZjYyZi1kY2FiLTQy
OTUtOTIwNi05MDBkNDExNGIxZTkiLCAicm9sZSI6ICJGcmllbmRseSBOZWlnaGJvciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsiZGVjaWRlIGdyZWV0aW5ncyJdfV1KmAIKCmNyZXdf
dGFza3MSiQIKhgJbeyJrZXkiOiAiODBkN2JjZDQ5MDk5MjkwMDgzODMyZjBlOTgzMzgwZGYiLCAi
aWQiOiAiMmZmNjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3IiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJGcmll
bmRseSBOZWlnaGJvciIsICJhZ2VudF9rZXkiOiAiOThmM2IxZDQ3Y2U5NjljZjA1NzcyN2I3ODQx
NDI1Y2QiLCAidG9vbHNfbmFtZXMiOiBbImRlY2lkZSBncmVldGluZ3MiXX1degIYAYUBAAEAABKO
AgoQnjTp5boK7/+DQxztYIpqihIIgGnMUkBtzHEqDFRhc2sgQ3JlYXRlZDABOcpYcuRvKBUYQalE
c+RvKBUYSi4KCGNyZXdfa2V5EiIKIGMzMDc2MDA5MzI2NzYxNDQ0ZDU3YzcxZDFkYTNmMjdjSjEK
B2NyZXdfaWQSJgokNzk0MDE5MjUtYjhlOS00NzA4LTg1MzAtNDQ4YWZhM2JmOGIwSi4KCHRhc2tf
a2V5EiIKIDgwZDdiY2Q0OTA5OTI5MDA4MzgzMmYwZTk4MzM4MGRmSjEKB3Rhc2tfaWQSJgokMmZm
NjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3egIYAYUBAAEAABKTAQoQ26H9pLUgswDN
p9XhJwwL6BIIx3bw7mAvPYwqClRvb2wgVXNhZ2UwATmy7NPlbygVGEEvb+HlbygVGEoaCg5jcmV3
YWlfdmVyc2lvbhIICgYwLjg2LjBKHwoJdG9vbF9uYW1lEhIKEERlY2lkZSBHcmVldGluZ3NKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '2986'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Say the word: Hi\n\nThis
is the expect criteria for your final answer: The word: Hi\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '824'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZLLrWi8ZASpP9bz6HaCV7xBIn\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
158,\n \"completion_tokens\": 12,\n \"total_tokens\": 170,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa83deca756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw;
path=/; expires=Fri, 27-Dec-24 22:44:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '404'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999816'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_6ac84634bff9193743c4b0911c09b4a6
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "Determine if the following
feedback indicates that the user is satisfied or if further changes are needed.
Respond with ''True'' if further changes are needed, or ''False'' if the user
is satisfied. **Important** Do not include any additional commentary outside
of your ''True'' or ''False'' response.\n\nFeedback: \"Don''t say hi, say Hello
instead!\""}], "model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream":
false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '461'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZNlWdrrPZhq0MJDqd16sMuQEJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"True\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 78,\n \"completion_tokens\":
1,\n \"total_tokens\": 79,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa87094f756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '156'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999898'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ec74bef2a9ef7b2144c03fd7f7bbeab0
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Say the word: Hi\n\nThis
is the expect criteria for your final answer: The word: Hi\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I now
can give a great answer \nFinal Answer: Hi"}, {"role": "user", "content": "Feedback:
Don''t say hi, say Hello instead!"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '986'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZGv4f3h7GDdhyOy9G0sB1lRgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I understand the feedback and
will adjust my response accordingly. \\nFinal Answer: Hello\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
18,\n \"total_tokens\": 206,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa88cac4756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '358'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999793'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ae1ab6b206d28ded6fee3c83ed0c2ab7
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "Determine if the following
feedback indicates that the user is satisfied or if further changes are needed.
Respond with ''True'' if further changes are needed, or ''False'' if the user
is satisfied. **Important** Do not include any additional commentary outside
of your ''True'' or ''False'' response.\n\nFeedback: \"looks good\""}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '439'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtaiHL4TY8Dssk0j2miqmjrzquy\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337694,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"False\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 73,\n \"completion_tokens\":
1,\n \"total_tokens\": 74,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa8bdd26756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '184'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999902'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_652891f79c1104a7a8436275d78a69f1
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -0,0 +1,117 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRlxiTxduAVoXHHY58Fvfbll5IS\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458417,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: This is a test task, and the context or question from the coworker is
not specified. Therefore, my best effort would be to affirm my readiness to
answer accurately and in detail any question about Futel Football Club based
on the context described. If provided with specific information or questions,
I will ensure to respond comprehensively as required by my job directives.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 177,\n \"completion_tokens\":
82,\n \"total_tokens\": 259,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78bf7bd6cc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:40 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2263'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7c1a31da73cd103e9f410f908e59187f
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -0,0 +1,119 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRrFJZGKw8cIEshvuW1PKwFZFKs\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458423,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Although you mentioned this being a \\\"Test task\\\" and haven't provided
a specific question regarding Futel Football Club, your request appears to involve
ensuring accuracy and detail in responses. For a proper answer about Futel,
I'd be ready to provide details about the club's history, management, players,
match schedules, and recent performance statistics. Remember to ask specific
questions to receive a targeted response. If this were a real context where
information was shared, I would respond precisely to what's been asked regarding
Futel Football Club.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 113,\n \"total_tokens\": 290,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c1d0ecdc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:47 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3097'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_179e1d56e2b17303e40480baffbc7b08
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,16 +1,17 @@
 interactions:
 - request:
-    body: '{"messages": [{"role": "system", "content": "You are base_agent. You are
-      a helpful assistant that just says hi\nYour personal goal is: Just say hi\nTo
-      give my best complete final answer to the task respond using the exact following
-      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
-      answer must be the great and the most complete as possible, it must be outcome
-      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
-      "content": "\nCurrent Task: Just say hi\n\nThis is the expect criteria for your
-      final answer: hi\nyou MUST return the actual complete content as the final answer,
-      not a summary.\n\nBegin! This is VERY important to you, use the tools available
-      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
-      "gpt-4o-mini", "stop": ["\nObservation:"]}'
+    body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
+      Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
+      give my best complete final answer to the task respond using the exact following
+      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
+      answer must be the great and the most complete as possible, it must be outcome
+      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
+      "content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
+      final answer: Your best answer to your coworker asking you this, accounting
+      for the context shared.\nyou MUST return the actual complete content as the
+      final answer, not a summary.\n\nBegin! This is VERY important to you, use the
+      tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
+      "model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
     headers:
       accept:
       - application/json
@@ -19,16 +20,16 @@ interactions:
       connection:
       - keep-alive
       content-length:
-      - '836'
+      - '939'
       content-type:
       - application/json
       cookie:
-      - _cfuvid=gsNyCo_jrDOolzf8SXHDaxQQrEgdR3jgv4OAH8MziDE-1739291824699-0.0.1.1-604800000;
-        __cf_bm=cRijYuylMGzRGxv3udQL5PhHOR5mRN_9_eLLwevlM_o-1739299455-1.0.1.1-Fszr_Msw0B1.IBMkiunP.VF2ilul1YGZZV8TqMcO3Q2SHvSlqfgm9NHgns1bJrm0wWRvHiCE7wdZfUAOx7T3Lg
+      - __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
+        _cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
       host:
       - api.openai.com
       user-agent:
-      - OpenAI/Python 1.61.0
+      - OpenAI/Python 1.52.1
       x-stainless-arch:
       - arm64
       x-stainless-async:
@@ -38,7 +39,7 @@ interactions:
       x-stainless-os:
       - MacOS
       x-stainless-package-version:
-      - 1.61.0
+      - 1.52.1
       x-stainless-raw-response:
       - 'true'
       x-stainless-retry-count:
@@ -46,26 +47,28 @@ interactions:
       x-stainless-runtime:
       - CPython
       x-stainless-runtime-version:
-      - 3.12.8
+      - 3.12.7
     method: POST
     uri: https://api.openai.com/v1/chat/completions
   response:
-    content: "{\n \"id\": \"chatcmpl-AzpWx6pctOvzu6xsbyg0XfSAc0q9V\",\n \"object\":
-      \"chat.completion\",\n \"created\": 1739299455,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
-      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
-      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
-      Answer: hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
-      \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
-      161,\n \"completion_tokens\": 12,\n \"total_tokens\": 173,\n \"prompt_tokens_details\":
-      {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
-      {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
-      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
-      \"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
+    content: "{\n \"id\": \"chatcmpl-AnuRqgg7eiHnDi2DOqdk99fiqOboz\",\n \"object\":
+      \"chat.completion\",\n \"created\": 1736458422,\n \"model\": \"gpt-4o-2024-08-06\",\n
+      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
+      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
+      Answer: Your best answer to your coworker asking you this, accounting for the
+      context shared. You MUST return the actual complete content as the final answer,
+      not a summary.\",\n \"refusal\": null\n },\n \"logprobs\":
+      null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
+      177,\n \"completion_tokens\": 44,\n \"total_tokens\": 221,\n \"prompt_tokens_details\":
+      {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
+      {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
+      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
+      \"fp_703d4ff298\"\n}\n"
     headers:
       CF-Cache-Status:
       - DYNAMIC
       CF-RAY:
-      - 91067d3ddc68fa16-SJC
+      - 8ff78bf7bd6cc002-ATL
       Connection:
       - keep-alive
       Content-Encoding:
@@ -73,7 +76,7 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Tue, 11 Feb 2025 18:44:16 GMT
+      - Thu, 09 Jan 2025 21:33:40 GMT
       Server:
       - cloudflare
       Transfer-Encoding:
@@ -87,25 +90,25 @@ interactions:
       openai-organization:
       - crewai-iuxna1
       openai-processing-ms:
-      - '703'
+      - '899'
       openai-version:
       - '2020-10-01'
       strict-transport-security:
       - max-age=31536000; includeSubDomains; preload
       x-ratelimit-limit-requests:
-      - '30000'
+      - '10000'
       x-ratelimit-limit-tokens:
-      - '150000000'
+      - '30000000'
       x-ratelimit-remaining-requests:
-      - '29999'
+      - '9999'
       x-ratelimit-remaining-tokens:
-      - '149999810'
+      - '29999786'
       x-ratelimit-reset-requests:
-      - 2ms
+      - 6ms
       x-ratelimit-reset-tokens:
       - 0s
       x-request-id:
-      - req_89222c00e4608e8557a135e91b223556
+      - req_9f5226208edb90a27987aaf7e0ca03d3
     http_version: HTTP/1.1
     status_code: 200
 version: 1

View File

@@ -1,16 +1,17 @@
 interactions:
 - request:
-    body: '{"messages": [{"role": "system", "content": "You are base_agent. You are
-      a helpful assistant that just says hi\nYour personal goal is: Just say hi\nTo
-      give my best complete final answer to the task respond using the exact following
-      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
-      answer must be the great and the most complete as possible, it must be outcome
-      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
-      "content": "\nCurrent Task: Just say hi\n\nThis is the expect criteria for your
-      final answer: hi\nyou MUST return the actual complete content as the final answer,
-      not a summary.\n\nBegin! This is VERY important to you, use the tools available
-      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
-      "gpt-4o-mini", "stop": ["\nObservation:"]}'
+    body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
+      Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
+      give my best complete final answer to the task respond using the exact following
+      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
+      answer must be the great and the most complete as possible, it must be outcome
+      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
+      "content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
+      final answer: Your best answer to your coworker asking you this, accounting
+      for the context shared.\nyou MUST return the actual complete content as the
+      final answer, not a summary.\n\nBegin! This is VERY important to you, use the
+      tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
+      "model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
     headers:
       accept:
       - application/json
@@ -19,13 +20,13 @@ interactions:
       connection:
       - keep-alive
       content-length:
-      - '836'
+      - '939'
       content-type:
       - application/json
       host:
       - api.openai.com
       user-agent:
-      - OpenAI/Python 1.61.0
+      - OpenAI/Python 1.52.1
       x-stainless-arch:
       - arm64
       x-stainless-async:
@@ -35,7 +36,7 @@ interactions:
       x-stainless-os:
       - MacOS
       x-stainless-package-version:
-      - 1.61.0
+      - 1.52.1
       x-stainless-raw-response:
       - 'true'
       x-stainless-retry-count:
@@ -43,24 +44,30 @@ interactions:
       x-stainless-runtime:
       - CPython
       x-stainless-runtime-version:
-      - 3.12.8
+      - 3.12.7
     method: POST
     uri: https://api.openai.com/v1/chat/completions
   response:
-    content: "{\n \"id\": \"chatcmpl-AzpkZLpCyjKT5d6Udfx4zAme2sOMy\",\n \"object\":
-      \"chat.completion\",\n \"created\": 1739300299,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
-      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
-      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
-      Answer: hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
-      \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
-      161,\n \"completion_tokens\": 12,\n \"total_tokens\": 173,\n \"prompt_tokens_details\":
-      {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
-      {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
-      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
-      \"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
+    content: "{\n \"id\": \"chatcmpl-AnuRjmwH5mrykLxQhFwTqqTiDtuTf\",\n \"object\":
+      \"chat.completion\",\n \"created\": 1736458415,\n \"model\": \"gpt-4o-2024-08-06\",\n
+      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
+      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
+      Answer: As this is a test task, please note that Futel Football Club is fictional
+      and any specific details about it would not be available. However, if you have
+      specific questions or need information about a particular aspect of Futel or
+      any general football club inquiry, feel free to ask, and I'll do my best to
+      assist you with your query!\",\n \"refusal\": null\n },\n \"logprobs\":
+      null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
+      177,\n \"completion_tokens\": 79,\n \"total_tokens\": 256,\n \"prompt_tokens_details\":
+      {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
+      {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
+      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
+      \"fp_703d4ff298\"\n}\n"
     headers:
+      CF-Cache-Status:
+      - DYNAMIC
       CF-RAY:
-      - 910691d3ab90ebef-SJC
+      - 8ff78be5eebfc002-ATL
       Connection:
       - keep-alive
       Content-Encoding:
@@ -68,14 +75,14 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Tue, 11 Feb 2025 18:58:20 GMT
+      - Thu, 09 Jan 2025 21:33:37 GMT
       Server:
       - cloudflare
       Set-Cookie:
-      - __cf_bm=MOH5EY6n3p8JKY53.yz7qzLuLYsEB8QdQXH09loUMBM-1739300300-1.0.1.1-hjb4mk04sMygPFhoFyiySKZSqB_fN5PbhbOyn.kipa3.eLvk7EtriDyjvGkBFIAV13DYnc08BfF_l2kxdx9hfQ;
-        path=/; expires=Tue, 11-Feb-25 19:28:20 GMT; domain=.api.openai.com; HttpOnly;
+      - __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
+        path=/; expires=Thu, 09-Jan-25 22:03:37 GMT; domain=.api.openai.com; HttpOnly;
         Secure; SameSite=None
-      - _cfuvid=uu.cEiV.FfgvSvCdKOooDYJWrwjVEuFeGdQodijGUUI-1739300300232-0.0.1.1-604800000;
+      - _cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000;
         path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
       Transfer-Encoding:
       - chunked
@@ -85,30 +92,28 @@ interactions:
       - X-Request-ID
       alt-svc:
       - h3=":443"; ma=86400
-      cf-cache-status:
-      - DYNAMIC
       openai-organization:
       - crewai-iuxna1
       openai-processing-ms:
-      - '1357'
+      - '2730'
       openai-version:
       - '2020-10-01'
       strict-transport-security:
       - max-age=31536000; includeSubDomains; preload
       x-ratelimit-limit-requests:
-      - '30000'
+      - '10000'
       x-ratelimit-limit-tokens:
-      - '150000000'
+      - '30000000'
       x-ratelimit-remaining-requests:
-      - '29999'
+      - '9999'
       x-ratelimit-remaining-tokens:
-      - '149999810'
+      - '29999786'
       x-ratelimit-reset-requests:
-      - 2ms
+      - 6ms
       x-ratelimit-reset-tokens:
       - 0s
       x-request-id:
-      - req_2277503f851195e7d7a43b66eb044454
+      - req_014478ba748f860d10ac250ca0ba824a
     http_version: HTTP/1.1
     status_code: 200
 version: 1

View File

@@ -1,16 +1,17 @@
interactions: interactions:
- request: - request:
body: '{"messages": [{"role": "system", "content": "You are base_agent. You are body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
a helpful assistant that just says hi\nYour personal goal is: Just say hi\nTo Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Just say hi\n\nThis is the expect criteria for your "content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: hi\nyou MUST return the actual complete content as the final answer, final answer: Your best answer to your coworker asking you this, accounting
not a summary.\n\nBegin! This is VERY important to you, use the tools available for the context shared.\nyou MUST return the actual complete content as the
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model": final answer, not a summary.\n\nBegin! This is VERY important to you, use the
"gpt-4o-mini", "stop": ["\nObservation:"]}' tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers: headers:
accept: accept:
- application/json - application/json
@@ -19,15 +20,16 @@ interactions:
       connection:
       - keep-alive
       content-length:
-      - '836'
+      - '939'
       content-type:
       - application/json
       cookie:
-      - _cfuvid=gsNyCo_jrDOolzf8SXHDaxQQrEgdR3jgv4OAH8MziDE-1739291824699-0.0.1.1-604800000
+      - __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
+        _cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
       host:
       - api.openai.com
       user-agent:
-      - OpenAI/Python 1.61.0
+      - OpenAI/Python 1.52.1
       x-stainless-arch:
       - arm64
       x-stainless-async:
@@ -37,7 +39,7 @@ interactions:
       x-stainless-os:
       - MacOS
       x-stainless-package-version:
-      - 1.61.0
+      - 1.52.1
       x-stainless-raw-response:
       - 'true'
       x-stainless-retry-count:
@@ -45,24 +47,33 @@ interactions:
       x-stainless-runtime:
       - CPython
       x-stainless-runtime-version:
-      - 3.12.8
+      - 3.12.7
     method: POST
     uri: https://api.openai.com/v1/chat/completions
   response:
-    content: "{\n \"id\": \"chatcmpl-AzpWxLzAcRzigZuIGmjP3ckQgxAom\",\n \"object\":
-      \"chat.completion\",\n \"created\": 1739299455,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
+    content: "{\n \"id\": \"chatcmpl-AnuRofLgmzWcDya5LILqYwIJYgFoq\",\n \"object\":
+      \"chat.completion\",\n \"created\": 1736458420,\n \"model\": \"gpt-4o-2024-08-06\",\n
       \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
       \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
-      Answer: hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
-      \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
-      161,\n \"completion_tokens\": 12,\n \"total_tokens\": 173,\n \"prompt_tokens_details\":
+      Answer: As an official Futel Football Club infopoint, my responsibility is to
+      provide detailed and accurate information about the club. This includes answering
+      questions regarding team statistics, player performances, upcoming fixtures,
+      ticketing and fan zone details, club history, and community initiatives. Our
+      focus is to ensure that fans and stakeholders have access to the latest and
+      most precise information about the club's on and off-pitch activities. If there's
+      anything specific you need to know, just let me know, and I'll be more than
+      happy to assist!\",\n \"refusal\": null\n },\n \"logprobs\":
+      null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
+      177,\n \"completion_tokens\": 115,\n \"total_tokens\": 292,\n \"prompt_tokens_details\":
       {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
       {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
-      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
-      \"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
+      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
+      \"fp_703d4ff298\"\n}\n"
     headers:
+      CF-Cache-Status:
+      - DYNAMIC
       CF-RAY:
-      - 91067d389e90fa16-SJC
+      - 8ff78c066f37c002-ATL
       Connection:
       - keep-alive
       Content-Encoding:
@@ -70,13 +81,9 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Tue, 11 Feb 2025 18:44:15 GMT
+      - Thu, 09 Jan 2025 21:33:42 GMT
       Server:
       - cloudflare
-      Set-Cookie:
-      - __cf_bm=cRijYuylMGzRGxv3udQL5PhHOR5mRN_9_eLLwevlM_o-1739299455-1.0.1.1-Fszr_Msw0B1.IBMkiunP.VF2ilul1YGZZV8TqMcO3Q2SHvSlqfgm9NHgns1bJrm0wWRvHiCE7wdZfUAOx7T3Lg;
-        path=/; expires=Tue, 11-Feb-25 19:14:15 GMT; domain=.api.openai.com; HttpOnly;
-        Secure; SameSite=None
       Transfer-Encoding:
       - chunked
       X-Content-Type-Options:
@@ -85,30 +92,28 @@ interactions:
       - X-Request-ID
       alt-svc:
       - h3=":443"; ma=86400
-      cf-cache-status:
-      - DYNAMIC
       openai-organization:
       - crewai-iuxna1
       openai-processing-ms:
-      - '716'
+      - '2459'
       openai-version:
       - '2020-10-01'
       strict-transport-security:
       - max-age=31536000; includeSubDomains; preload
       x-ratelimit-limit-requests:
-      - '30000'
+      - '10000'
       x-ratelimit-limit-tokens:
-      - '150000000'
+      - '30000000'
       x-ratelimit-remaining-requests:
-      - '29999'
+      - '9999'
       x-ratelimit-remaining-tokens:
-      - '149999810'
+      - '29999786'
       x-ratelimit-reset-requests:
-      - 2ms
+      - 6ms
       x-ratelimit-reset-tokens:
       - 0s
       x-request-id:
-      - req_ef807dc3223d40332aae8a313e96ef3a
+      - req_a146dd27f040f39a576750970cca0f52
     http_version: HTTP/1.1
     status_code: 200
 version: 1
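
The content field in the cassette above stores the raw Chat Completions response as an escaped JSON string, and the agent's reply is the text after the "Final Answer:" marker that the recorded system prompt enforces. A purely illustrative decode, with values copied from the new recording above and the long reply elided; this is not test-suite code:

# Illustrative only: decoding a recorded response body like the one in the diff.
import json

recorded = (
    '{"id": "chatcmpl-AnuRofLgmzWcDya5LILqYwIJYgFoq", "object": "chat.completion",'
    ' "created": 1736458420, "model": "gpt-4o-2024-08-06",'
    ' "choices": [{"index": 0, "message": {"role": "assistant",'
    ' "content": "I now can give a great answer \\nFinal Answer: ..."},'
    ' "finish_reason": "stop"}],'
    ' "usage": {"prompt_tokens": 177, "completion_tokens": 115, "total_tokens": 292}}'
)
data = json.loads(recorded)
# Everything after "Final Answer:" is what gets returned as the task output.
reply = data["choices"][0]["message"]["content"].split("Final Answer:", 1)[1].strip()
print(reply, data["usage"]["total_tokens"])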

Some files were not shown because too many files have changed in this diff.