mirror of https://github.com/crewAIInc/crewAI.git
synced 2026-01-08 23:58:34 +00:00

Add pt-BR docs translation (#3039)

* docs: add pt-br translations. Powered by a CrewAI Flow: https://github.com/danielfsbarreto/docs_translator
* Update mcp/overview.mdx Brazilian docs. Its en-US counterpart was updated after I did a pass, so it now includes the new section about @CrewBase.

docs/en/learn/before-and-after-kickoff-hooks.mdx (61 lines, new file)

@@ -0,0 +1,61 @@

---
title: Before and After Kickoff Hooks
description: Learn how to use before and after kickoff hooks in CrewAI
---

CrewAI provides hooks that allow you to execute code before and after a crew's kickoff. These hooks are useful for preprocessing inputs or post-processing results.

## Before Kickoff Hook

The before kickoff hook is executed before the crew starts its tasks. It receives the input dictionary and can modify it before passing it to the crew. You can use this hook to set up your environment, load necessary data, or preprocess your inputs. This is useful in scenarios where the input data might need enrichment or validation before being processed by the crew.

Here's an example of defining a before kickoff function in your `crew.py`:

```python
from crewai import CrewBase
from crewai.project import before_kickoff

@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Preprocess or modify inputs
        inputs['processed'] = True
        return inputs

    # ...
```

In this example, the `prepare_data` function modifies the inputs by adding a new key-value pair indicating that the inputs have been processed.

## After Kickoff Hook

The after kickoff hook is executed after the crew has completed its tasks. It receives the result object, which contains the outputs of the crew's execution. This hook is ideal for post-processing results, such as logging, data transformation, or further analysis.

Here's how you can define an after kickoff function in your `crew.py`:

```python
from crewai import CrewBase
from crewai.project import after_kickoff

@CrewBase
class MyCrew:
    @after_kickoff
    def log_results(self, result):
        # Log or modify the results
        print("Crew execution completed with result:", result)
        return result

    # ...
```

In the `log_results` function, the results of the crew execution are simply printed out. You can extend this to perform more complex operations such as sending notifications or integrating with other services.

## Utilizing Both Hooks

Both hooks can be used together to provide a comprehensive setup and teardown process for your crew's execution. They are particularly useful in maintaining clean code architecture by separating concerns and enhancing the modularity of your CrewAI implementations, as the combined sketch below shows.
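
A minimal sketch combining both hooks in a single crew definition (the `@agent`, `@task`, and `@crew` pieces are omitted for brevity):

```python
from crewai import CrewBase
from crewai.project import after_kickoff, before_kickoff

@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Setup: validate and enrich inputs before the crew runs
        inputs['processed'] = True
        return inputs

    @after_kickoff
    def log_results(self, result):
        # Teardown: log the final result after the crew finishes
        print("Crew execution completed with result:", result)
        return result

    # ... agents, tasks, and the @crew definition go here ...
```
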
## Conclusion

Before and after kickoff hooks in CrewAI offer powerful ways to interact with the lifecycle of a crew's execution. By understanding and utilizing these hooks, you can greatly enhance the robustness and flexibility of your AI agents.

docs/en/learn/bring-your-own-agent.mdx (443 lines, new file)

@@ -0,0 +1,443 @@

---
title: Bring your own agent
description: Learn how to bring your own agents that work within a Crew.
icon: robots
---

Interoperability is a core concept in CrewAI. This guide will show you how to bring your own agents that work within a Crew.

## Adapter Guide for Bringing Your Own Agents (LangGraph Agents, OpenAI Agents, etc.)

Three adapters are required to make an agent from another framework work within a crew:

1. `BaseAgentAdapter`
2. `BaseToolAdapter`
3. `BaseConverter`
## BaseAgentAdapter

This abstract class defines the common interface and functionality that all agent adapters must implement. It extends `BaseAgent` to maintain compatibility with the CrewAI framework while adding adapter-specific requirements.

Required methods:

1. `def configure_tools`
2. `def configure_structured_output`
## Creating your own Adapter

To integrate an agent from a different framework (e.g., LangGraph, Autogen, OpenAI Assistants) into CrewAI, you need to create a custom adapter by inheriting from `BaseAgentAdapter`. This adapter acts as a compatibility layer, translating between the CrewAI interfaces and the specific requirements of your external agent.

Here's how you implement your custom adapter:

1. **Inherit from `BaseAgentAdapter`**:

    ```python
    from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
    from crewai.tools import BaseTool
    from typing import List, Optional, Any, Dict

    class MyCustomAgentAdapter(BaseAgentAdapter):
        # ... implementation details ...
    ```

2. **Implement `__init__`**:
    The constructor should call the parent class constructor `super().__init__(**kwargs)` and perform any initialization specific to your external agent. You can use the optional `agent_config` dictionary passed during CrewAI's `Agent` initialization to configure your adapter and the underlying agent.

    ```python
    def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
        super().__init__(agent_config=agent_config, **kwargs)
        # Initialize your external agent here, possibly using agent_config
        # Example: self.external_agent = initialize_my_agent(agent_config)
        print(f"Initializing MyCustomAgentAdapter with config: {agent_config}")
    ```

3. **Implement `configure_tools`**:
    This abstract method is crucial. It receives a list of CrewAI `BaseTool` instances. Your implementation must convert or adapt these tools into the format expected by your external agent framework. This might involve wrapping them, extracting specific attributes, or registering them with the external agent instance.

    ```python
    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        if tools:
            adapted_tools = []
            for tool in tools:
                # Adapt CrewAI BaseTool to the format your agent expects
                # Example: adapted_tool = adapt_to_my_framework(tool)
                # adapted_tools.append(adapted_tool)
                pass  # Replace with your actual adaptation logic

            # Configure the external agent with the adapted tools
            # Example: self.external_agent.set_tools(adapted_tools)
            print(f"Configuring tools for MyCustomAgentAdapter: {adapted_tools}")  # Placeholder
        else:
            # Handle the case where no tools are provided
            # Example: self.external_agent.set_tools([])
            print("No tools provided for MyCustomAgentAdapter.")
    ```

4. **Implement `configure_structured_output`**:
    This method is called when the CrewAI `Agent` is configured with structured output requirements (e.g., `output_json` or `output_pydantic`). Your adapter needs to ensure the external agent is set up to comply with these requirements. This might involve setting specific parameters on the external agent or ensuring its underlying model supports the requested format. If the external agent doesn't support structured output in a way compatible with CrewAI's expectations, you might need to handle the conversion or raise an appropriate error.

    ```python
    def configure_structured_output(self, structured_output: Any) -> None:
        # Configure your external agent to produce output in the specified format
        # Example: self.external_agent.set_output_format(structured_output)
        self.adapted_structured_output = True  # Signal that structured output is handled
        print(f"Configuring structured output for MyCustomAgentAdapter: {structured_output}")
    ```

By implementing these methods, your `MyCustomAgentAdapter` will allow your custom agent implementation to function correctly within a CrewAI crew, interacting with tasks and tools seamlessly. Remember to replace the example comments and print statements with your actual adaptation logic specific to the external agent framework you are integrating.
## BaseToolAdapter implementation

The `BaseToolAdapter` class is responsible for converting CrewAI's native `BaseTool` objects into a format that your specific external agent framework can understand and utilize. Different agent frameworks (like LangGraph, OpenAI Assistants, etc.) have their own unique ways of defining and handling tools, and the `BaseToolAdapter` acts as the translator.

Here's how you implement your custom tool adapter:

1. **Inherit from `BaseToolAdapter`**:

    ```python
    from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
    from crewai.tools import BaseTool
    from typing import List, Any

    class MyCustomToolAdapter(BaseToolAdapter):
        # ... implementation details ...
    ```

2. **Implement `configure_tools`**:
    This is the core abstract method you must implement. It receives a list of CrewAI `BaseTool` instances provided to the agent. Your task is to iterate through this list, adapt each `BaseTool` into the format expected by your external framework, and store the converted tools in the `self.converted_tools` list (which is initialized in the base class constructor).

    ```python
    import inspect  # Needed by the async wrapper below

    def configure_tools(self, tools: List[BaseTool]) -> None:
        """Configure and convert CrewAI tools for the specific implementation."""
        self.converted_tools = []  # Reset in case it's called multiple times
        for tool in tools:
            # Sanitize the tool name if required by the target framework
            sanitized_name = self.sanitize_tool_name(tool.name)

            # --- Your Conversion Logic Goes Here ---
            # Example: Convert BaseTool to a dictionary format for LangGraph
            # converted_tool = {
            #     "name": sanitized_name,
            #     "description": tool.description,
            #     "parameters": tool.args_schema.schema() if tool.args_schema else {},
            #     # Add any other framework-specific fields
            # }

            # Example: Convert BaseTool to an OpenAI function definition
            # converted_tool = {
            #     "type": "function",
            #     "function": {
            #         "name": sanitized_name,
            #         "description": tool.description,
            #         "parameters": tool.args_schema.schema() if tool.args_schema else {"type": "object", "properties": {}},
            #     }
            # }

            # --- Replace above examples with your actual adaptation ---
            converted_tool = self.adapt_tool_to_my_framework(tool, sanitized_name)  # Placeholder

            self.converted_tools.append(converted_tool)
            print(f"Adapted tool '{tool.name}' to '{sanitized_name}' for MyCustomToolAdapter")  # Placeholder

        print(f"MyCustomToolAdapter finished configuring tools: {len(self.converted_tools)} adapted.")  # Placeholder

    # --- Helper method for adaptation (Example) ---
    def adapt_tool_to_my_framework(self, tool: BaseTool, sanitized_name: str) -> Any:
        # Replace this with the actual logic to convert a CrewAI BaseTool
        # to the format needed by your specific external agent framework.
        # This will vary greatly depending on the target framework.

        # Wrap the tool so it works both sync and async
        async def async_tool_wrapper(*args, **kwargs):
            output = tool.run(*args, **kwargs)
            if inspect.isawaitable(output):
                return await output
            else:
                return output

        adapted_tool = MyFrameworkTool(
            name=sanitized_name,
            description=tool.description,
            inputs=tool.args_schema.schema() if tool.args_schema else None,
            implementation_reference=async_tool_wrapper
        )

        return adapted_tool
    ```

3. **Using the Adapter**:
    Typically, you would instantiate your `MyCustomToolAdapter` within your `MyCustomAgentAdapter`'s `configure_tools` method and use it to process the tools before configuring your external agent.

    ```python
    # Inside MyCustomAgentAdapter.configure_tools
    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        if tools:
            tool_adapter = MyCustomToolAdapter()  # Instantiate your tool adapter
            tool_adapter.configure_tools(tools)  # Convert the tools
            adapted_tools = tool_adapter.tools()  # Get the converted tools

            # Now configure your external agent with the adapted_tools
            # Example: self.external_agent.set_tools(adapted_tools)
            print(f"Configuring external agent with adapted tools: {adapted_tools}")  # Placeholder
        else:
            # Handle no tools case
            print("No tools provided for MyCustomAgentAdapter.")
    ```

By creating a `BaseToolAdapter`, you decouple the tool conversion logic from the agent adaptation, making the integration cleaner and more modular. Remember to replace the placeholder examples with the actual conversion logic required by your specific external agent framework.
## BaseConverter

The `BaseConverterAdapter` plays a crucial role when a CrewAI `Task` requires an agent to return its final output in a specific structured format, such as JSON or a Pydantic model. It bridges the gap between CrewAI's structured output requirements and the capabilities of your external agent.

Its primary responsibilities are:

1. **Configuring the Agent for Structured Output:** Based on the `Task`'s requirements (`output_json` or `output_pydantic`), it instructs the associated `BaseAgentAdapter` (and indirectly, the external agent) on what format is expected.
2. **Enhancing the System Prompt:** It modifies the agent's system prompt to include clear instructions on *how* to generate the output in the required structure.
3. **Post-processing the Result:** It takes the raw output from the agent and attempts to parse, validate, and format it according to the required structure, ultimately returning a string representation (e.g., a JSON string).

Here's how you implement your custom converter adapter:

1. **Inherit from `BaseConverterAdapter`**:

    ```python
    from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
    # Assuming you have your MyCustomAgentAdapter defined
    # from .my_custom_agent_adapter import MyCustomAgentAdapter
    from crewai.task import Task
    from typing import Any

    class MyCustomConverterAdapter(BaseConverterAdapter):
        # Store the expected output type (e.g., 'json', 'pydantic', 'text')
        _output_type: str = 'text'
        _output_schema: Any = None  # Store JSON schema or Pydantic model

        # ... implementation details ...
    ```

2. **Implement `__init__`**:
    The constructor must accept the corresponding `agent_adapter` instance it will work with.

    ```python
    def __init__(self, agent_adapter: Any):  # Use your specific AgentAdapter type hint
        self.agent_adapter = agent_adapter
        print(f"Initializing MyCustomConverterAdapter for agent adapter: {type(agent_adapter).__name__}")
    ```

3. **Implement `configure_structured_output`**:
    This method receives the CrewAI `Task` object. You need to check the task's `output_json` and `output_pydantic` attributes to determine the required output structure. Store this information (e.g., in `_output_type` and `_output_schema`) and potentially call configuration methods on your `self.agent_adapter` if the external agent needs specific setup for structured output (which might have been partially handled in the agent adapter's `configure_structured_output` already).

    ```python
    def configure_structured_output(self, task: Task) -> None:
        """Configure the expected structured output based on the task."""
        if task.output_pydantic:
            self._output_type = 'pydantic'
            self._output_schema = task.output_pydantic
            print(f"Converter: Configured for Pydantic output: {self._output_schema.__name__}")
        elif task.output_json:
            self._output_type = 'json'
            self._output_schema = task.output_json
            print(f"Converter: Configured for JSON output with schema: {self._output_schema}")
        else:
            self._output_type = 'text'
            self._output_schema = None
            print("Converter: Configured for standard text output.")

        # Optionally, inform the agent adapter if needed
        # self.agent_adapter.set_output_mode(self._output_type, self._output_schema)
    ```

4. **Implement `enhance_system_prompt`**:
    This method takes the agent's base system prompt string and should append instructions tailored to the currently configured `_output_type` and `_output_schema`. The goal is to guide the LLM powering the agent to produce output in the correct format.

    ```python
    import json  # Needed for serializing the schema into the prompt

    def enhance_system_prompt(self, base_prompt: str) -> str:
        """Enhance the system prompt with structured output instructions."""
        if self._output_type == 'text':
            return base_prompt  # No enhancement needed for plain text

        instructions = "\n\nYour final answer MUST be formatted as "
        if self._output_type == 'json':
            schema_str = json.dumps(self._output_schema, indent=2)
            instructions += f"a JSON object conforming to the following schema:\n```json\n{schema_str}\n```"
        elif self._output_type == 'pydantic':
            schema_str = json.dumps(self._output_schema.model_json_schema(), indent=2)
            instructions += f"a JSON object conforming to the Pydantic model '{self._output_schema.__name__}' with the following schema:\n```json\n{schema_str}\n```"

        instructions += "\nEnsure your entire response is ONLY the valid JSON object, without any introductory text, explanations, or concluding remarks."

        print(f"Converter: Enhancing prompt for {self._output_type} output.")
        return base_prompt + instructions
    ```

    *Note: The exact prompt engineering might need tuning based on the agent/LLM being used.*

5. **Implement `post_process_result`**:
    This method receives the raw string output from the agent. If structured output was requested (`json` or `pydantic`), you should attempt to parse the string into the expected format. Handle potential parsing errors (e.g., log them, attempt simple fixes, or raise an exception). Crucially, the method must **always return a string**, even if the intermediate format was a dictionary or Pydantic object (e.g., by serializing it back to a JSON string).

    ```python
    import json
    from pydantic import ValidationError

    def post_process_result(self, result: str) -> str:
        """Post-process the agent's result to ensure it matches the expected format."""
        print(f"Converter: Post-processing result for {self._output_type} output.")
        if self._output_type == 'json':
            try:
                # Attempt to parse and re-serialize to ensure validity and consistent format
                parsed_json = json.loads(result)
                # Optional: Validate against self._output_schema if it's a JSON schema dictionary
                # from jsonschema import validate
                # validate(instance=parsed_json, schema=self._output_schema)
                return json.dumps(parsed_json)
            except json.JSONDecodeError as e:
                print(f"Error: Failed to parse JSON output: {e}\nRaw output:\n{result}")
                # Handle error: return raw, raise exception, or try to fix
                return result  # Example: return raw output on failure
            # except Exception as e:  # Catch validation errors if using jsonschema
            #     print(f"Error: JSON output failed schema validation: {e}\nRaw output:\n{result}")
            #     return result
        elif self._output_type == 'pydantic':
            try:
                # Attempt to parse into the Pydantic model
                model_instance = self._output_schema.model_validate_json(result)
                # Return the model serialized back to JSON
                return model_instance.model_dump_json()
            except ValidationError as e:
                print(f"Error: Failed to validate Pydantic output: {e}\nRaw output:\n{result}")
                # Handle error
                return result  # Example: return raw output on failure
            except json.JSONDecodeError as e:
                print(f"Error: Failed to parse JSON for Pydantic model: {e}\nRaw output:\n{result}")
                return result
        else:  # 'text'
            return result  # No processing needed for plain text
    ```

By implementing these methods, your `MyCustomConverterAdapter` ensures that structured output requests from CrewAI tasks are correctly handled by your integrated external agent, improving the reliability and usability of your custom agent within the CrewAI framework.
## Out of the Box Adapters

We provide out-of-the-box adapters for the following frameworks:

1. LangGraph
2. OpenAI Agents

## Kicking off a crew with adapted agents

```python
import json
import os
from typing import List

from crewai_tools import SerperDevTool
from crewai import Agent, Crew, Task
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

from crewai.agents.agent_adapters.langgraph.langgraph_adapter import (
    LangGraphAgentAdapter,
)
from crewai.agents.agent_adapters.openai_agents.openai_adapter import OpenAIAgentAdapter

# CrewAI Agent
code_helper_agent = Agent(
    role="Code Helper",
    goal="Help users solve coding problems effectively and provide clear explanations.",
    backstory="You are an experienced programmer with deep knowledge across multiple programming languages and frameworks. You specialize in solving complex coding challenges and explaining solutions clearly.",
    allow_delegation=False,
    verbose=True,
)

# OpenAI Agent Adapter
link_finder_agent = OpenAIAgentAdapter(
    role="Link Finder",
    goal="Find the most relevant and high-quality resources for coding tasks.",
    backstory="You are a research specialist with a talent for finding the most helpful resources. You're skilled at using search tools to discover documentation, tutorials, and examples that directly address the user's coding needs.",
    tools=[SerperDevTool()],
    allow_delegation=False,
    verbose=True,
)

# LangGraph Agent Adapter
reporter_agent = LangGraphAgentAdapter(
    role="Reporter",
    goal="Report the results of the tasks.",
    backstory="You are a reporter who reports the results of the other tasks.",
    llm=ChatOpenAI(model="gpt-4o"),
    allow_delegation=True,
    verbose=True,
)


class Code(BaseModel):
    code: str


task = Task(
    description="Give an answer to the coding question: {task}",
    expected_output="A thorough answer to the coding question: {task}",
    agent=code_helper_agent,
    output_json=Code,
)

task2 = Task(
    description="Find links to resources that can help with coding tasks. Use the serper tool to find resources that can help.",
    expected_output="A list of links to resources that can help with coding tasks",
    agent=link_finder_agent,
)


class Report(BaseModel):
    code: str
    links: List[str]


task3 = Task(
    description="Report the results of the tasks.",
    expected_output="A report of the results of the tasks: the code produced, followed by the links to the resources that can help with the coding task.",
    agent=reporter_agent,
    output_json=Report,
)

# Use in CrewAI
crew = Crew(
    agents=[code_helper_agent, link_finder_agent, reporter_agent],
    tasks=[task, task2, task3],
    verbose=True,
)

result = crew.kickoff(
    inputs={"task": "How do you implement an abstract class in python?"}
)

# Print raw result first
print("Raw result:", result)

# Handle result based on its type
if hasattr(result, "json_dict") and result.json_dict:
    json_result = result.json_dict
    print("\nStructured JSON result:")
    print(f"{json.dumps(json_result, indent=2)}")

    # Access fields safely
    if isinstance(json_result, dict):
        if "code" in json_result:
            print("\nCode:")
            print(
                json_result["code"][:200] + "..."
                if len(json_result["code"]) > 200
                else json_result["code"]
            )

        if "links" in json_result:
            print("\nLinks:")
            for link in json_result["links"][:5]:  # Print first 5 links
                print(f"- {link}")
            if len(json_result["links"]) > 5:
                print(f"...and {len(json_result['links']) - 5} more links")
elif hasattr(result, "pydantic") and result.pydantic:
    print("\nPydantic model result:")
    print(result.pydantic.model_dump_json(indent=2))
else:
    # Fallback to raw output
    print("\nNo structured result available, using raw output:")
    print(result.raw[:500] + "..." if len(result.raw) > 500 else result.raw)
```

docs/en/learn/coding-agents.mdx (95 lines, new file)

@@ -0,0 +1,95 @@

---
title: Coding Agents
description: Learn how to enable your CrewAI Agents to write and execute code, and explore advanced features for enhanced functionality.
icon: rectangle-code
---

## Introduction

CrewAI Agents now have the powerful ability to write and execute code, significantly enhancing their problem-solving capabilities. This feature is particularly useful for tasks that require computational or programmatic solutions.

## Enabling Code Execution

To enable code execution for an agent, set the `allow_code_execution` parameter to `True` when creating the agent.

Here's an example:

```python Code
from crewai import Agent

coding_agent = Agent(
    role="Senior Python Developer",
    goal="Craft well-designed and thought-out code",
    backstory="You are a senior Python developer with extensive experience in software architecture and best practices.",
    allow_code_execution=True
)
```

<Note>
Note that the `allow_code_execution` parameter defaults to `False`.
</Note>

## Important Considerations

1. **Model Selection**: It is strongly recommended to use more capable models like Claude 3.5 Sonnet and GPT-4 when enabling code execution. These models have a better understanding of programming concepts and are more likely to generate correct and efficient code.

2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions. The `max_retry_limit` parameter, which defaults to 2, controls the maximum number of retries for a task (see the sketch after this list).

3. **Dependencies**: To use the code execution feature, you need to install the `crewai_tools` package. If it is not installed, the agent will log an info message: "Coding tools not available. Install crewai_tools."
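
A minimal sketch combining these considerations, assuming your environment provides the named model (the model string is illustrative):

```python Code
from crewai import Agent

coding_agent = Agent(
    role="Senior Python Developer",
    goal="Craft well-designed and thought-out code",
    backstory="You are a senior Python developer with extensive experience in software architecture and best practices.",
    llm="gpt-4o",               # Use a capable model for code generation
    allow_code_execution=True,  # Enable the code execution feature
    max_retry_limit=3,          # Allow one extra retry beyond the default of 2
)
```
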
## Code Execution Process

When an agent with code execution enabled encounters a task requiring programming:

<Steps>
  <Step title="Task Analysis">
    The agent analyzes the task and determines that code execution is necessary.
  </Step>
  <Step title="Code Formulation">
    It formulates the Python code needed to solve the problem.
  </Step>
  <Step title="Code Execution">
    The code is sent to the internal code execution tool (`CodeInterpreterTool`).
  </Step>
  <Step title="Result Interpretation">
    The agent interprets the result and incorporates it into its response or uses it for further problem-solving.
  </Step>
</Steps>

## Example Usage

Here's a detailed example of creating an agent with code execution capabilities and using it in a task:

```python Code
from crewai import Agent, Task, Crew

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants.",
    expected_output="The average age of the participants.",
    agent=coding_agent
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Execute the crew
result = analysis_crew.kickoff()

print(result)
```

In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks.

docs/en/learn/conditional-tasks.mdx (89 lines, new file)

@@ -0,0 +1,89 @@

---
title: Conditional Tasks
description: Learn how to use conditional tasks in a crewAI kickoff
icon: diagram-subtask
---

## Introduction

Conditional Tasks in crewAI allow for dynamic workflow adaptation based on the outcomes of previous tasks. This powerful feature enables crews to make decisions and execute tasks selectively, enhancing the flexibility and efficiency of your AI-driven processes.

## Example Usage

```python Code
from typing import List
from pydantic import BaseModel
from crewai import Agent, Crew
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.task import Task
from crewai_tools import SerperDevTool

# Define a condition function for the conditional task.
# If it returns False, the task is skipped; if True, the task executes.
def is_data_missing(output: TaskOutput) -> bool:
    return len(output.pydantic.events) < 10  # the task runs only if fewer than 10 events were fetched

# Define the agents
data_fetcher_agent = Agent(
    role="Data Fetcher",
    goal="Fetch data online using Serper tool",
    backstory="Backstory 1",
    verbose=True,
    tools=[SerperDevTool()]
)

data_processor_agent = Agent(
    role="Data Processor",
    goal="Process fetched data",
    backstory="Backstory 2",
    verbose=True
)

summary_generator_agent = Agent(
    role="Summary Generator",
    goal="Generate summary from fetched data",
    backstory="Backstory 3",
    verbose=True
)

class EventOutput(BaseModel):
    events: List[str]

task1 = Task(
    description="Fetch data about events in San Francisco using Serper tool",
    expected_output="List of 10 things to do in SF this week",
    agent=data_fetcher_agent,
    output_pydantic=EventOutput,
)

conditional_task = ConditionalTask(
    description="""
        Check if data is missing. If we have less than 10 events,
        fetch more events using Serper tool so that
        we have a total of 10 events in SF this week.
    """,
    expected_output="List of 10 Things to do in SF this week",
    condition=is_data_missing,
    agent=data_processor_agent,
)

task3 = Task(
    description="Generate summary of events in San Francisco from fetched data",
    expected_output="A complete summary of the events in San Francisco, based on the fetched data.",
    agent=summary_generator_agent,
)

# Create a crew with the tasks
crew = Crew(
    agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
    tasks=[task1, conditional_task, task3],
    verbose=True,
    planning=True
)

# Run the crew
result = crew.kickoff()
print("results", result)
```

docs/en/learn/create-custom-tools.mdx (69 lines, new file)

@@ -0,0 +1,69 @@

---
title: Create Custom Tools
description: Comprehensive guide on crafting, using, and managing custom tools within the CrewAI framework, including new functionalities and error handling.
icon: hammer
---

## Creating and Utilizing Tools in CrewAI

This guide provides detailed instructions on creating custom tools for the CrewAI framework and on efficiently managing and utilizing these tools, incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools, enabling agents to perform a wide range of actions.

### Subclassing `BaseTool`

To create a personalized tool, inherit from `BaseTool` and define the necessary attributes, including the `args_schema` for input validation, and the `_run` method.

```python Code
from typing import Type
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class MyToolInput(BaseModel):
    """Input schema for MyCustomTool."""
    argument: str = Field(..., description="Description of the argument.")

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."
    args_schema: Type[BaseModel] = MyToolInput

    def _run(self, argument: str) -> str:
        # Your tool's logic here
        return "Tool's result"
```

### Using the `tool` Decorator

Alternatively, you can use the tool decorator `@tool`. This approach allows you to define the tool's attributes and functionality directly within a function, offering a concise and efficient way to create specialized tools tailored to your needs.

```python Code
from crewai.tools import tool

@tool("Tool Name")
def my_simple_tool(question: str) -> str:
    """Tool description for clarity."""
    # Tool logic here
    return "Tool output"
```

### Defining a Cache Function for the Tool

To optimize tool performance with caching, define custom caching strategies using the `cache_function` attribute.

```python Code
@tool("Tool with Caching")
def cached_tool(argument: str) -> str:
    """Tool functionality description."""
    return "Cacheable result"

def my_cache_strategy(arguments: dict, result: str) -> bool:
    # Define custom caching logic; for example, only cache non-empty results
    return bool(result)

cached_tool.cache_function = my_cache_strategy
```

By adhering to these guidelines and incorporating new functionalities and collaboration tools into your tool creation and management processes, you can leverage the full capabilities of the CrewAI framework, enhancing both the development experience and the efficiency of your AI agents.

docs/en/learn/custom-llm.mdx (350 lines, new file)

@@ -0,0 +1,350 @@

---
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
---

## Overview

CrewAI supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to integrate any LLM provider that doesn't have built-in support in LiteLLM, or implement custom authentication mechanisms.

## Quick Start

Here's a minimal custom LLM implementation:

```python
from crewai import BaseLLM
from typing import Any, Dict, List, Optional, Union
import requests

class CustomLLM(BaseLLM):
    def __init__(self, model: str, api_key: str, endpoint: str, temperature: Optional[float] = None):
        # IMPORTANT: Call super().__init__() with required parameters
        super().__init__(model=model, temperature=temperature)

        self.api_key = api_key
        self.endpoint = endpoint

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with the given messages."""
        # Convert string to message format if needed
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]

        # Prepare request
        payload = {
            "model": self.model,
            "messages": messages,
            "temperature": self.temperature,
        }

        # Add tools if provided and supported
        if tools and self.supports_function_calling():
            payload["tools"] = tools

        # Make API call
        response = requests.post(
            self.endpoint,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json=payload,
            timeout=30
        )
        response.raise_for_status()

        result = response.json()
        return result["choices"][0]["message"]["content"]

    def supports_function_calling(self) -> bool:
        """Override if your LLM supports function calling."""
        return True  # Change to False if your LLM doesn't support tools

    def get_context_window_size(self) -> int:
        """Return the context window size of your LLM."""
        return 8192  # Adjust based on your model's actual context window
```
## Using Your Custom LLM

```python
from crewai import Agent, Task, Crew

# Assuming you have the CustomLLM class defined above
# Create your custom LLM
custom_llm = CustomLLM(
    model="my-custom-model",
    api_key="your-api-key",
    endpoint="https://api.example.com/v1/chat/completions",
    temperature=0.7
)

# Use with an agent
agent = Agent(
    role="Research Assistant",
    goal="Find and analyze information",
    backstory="You are a research assistant.",
    llm=custom_llm
)

# Create and execute tasks
task = Task(
    description="Research the latest developments in AI",
    expected_output="A comprehensive summary",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```
## Required Methods

### Constructor: `__init__()`

**Critical**: You must call `super().__init__(model, temperature)` with the required parameters:

```python
def __init__(self, model: str, api_key: str, temperature: Optional[float] = None):
    # REQUIRED: Call parent constructor with model and temperature
    super().__init__(model=model, temperature=temperature)

    # Your custom initialization
    self.api_key = api_key
```

### Abstract Method: `call()`

The `call()` method is the heart of your LLM implementation. It must:

- Accept messages (string or list of dicts with 'role' and 'content')
- Return a string response
- Handle tools and function calling if supported
- Raise appropriate exceptions for errors

### Optional Methods

```python
def supports_function_calling(self) -> bool:
    """Return True if your LLM supports function calling."""
    return True  # Default is True

def supports_stop_words(self) -> bool:
    """Return True if your LLM supports stop sequences."""
    return True  # Default is True

def get_context_window_size(self) -> int:
    """Return the context window size."""
    return 4096  # Default is 4096
```
## Common Patterns

### Error Handling

```python
import requests

def call(self, messages, tools=None, callbacks=None, available_functions=None):
    try:
        response = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json=payload,
            timeout=30
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```

### Custom Authentication

```python
from crewai import BaseLLM
from typing import Optional

class CustomAuthLLM(BaseLLM):
    def __init__(self, model: str, auth_token: str, endpoint: str, temperature: Optional[float] = None):
        super().__init__(model=model, temperature=temperature)
        self.auth_token = auth_token
        self.endpoint = endpoint

    def call(self, messages, tools=None, callbacks=None, available_functions=None):
        headers = {
            "Authorization": f"Custom {self.auth_token}",  # Custom auth format
            "Content-Type": "application/json"
        }
        # Rest of implementation...
```

### Stop Words Support

CrewAI automatically adds `"\nObservation:"` as a stop word to control agent behavior. If your LLM supports stop words:

```python
def call(self, messages, tools=None, callbacks=None, available_functions=None):
    payload = {
        "model": self.model,
        "messages": messages,
        "stop": self.stop  # Include stop words in API call
    }
    # Make API call...

def supports_stop_words(self) -> bool:
    return True  # Your LLM supports stop sequences
```

If your LLM doesn't support stop words natively:

```python
def call(self, messages, tools=None, callbacks=None, available_functions=None):
    response = self._make_api_call(messages, tools)
    content = response["choices"][0]["message"]["content"]

    # Manually truncate at stop words
    if self.stop:
        for stop_word in self.stop:
            if stop_word in content:
                content = content.split(stop_word)[0]
                break

    return content

def supports_stop_words(self) -> bool:
    return False  # Tell CrewAI we handle stop words manually
```
## Function Calling

If your LLM supports function calling, implement the complete flow:

```python
import json

def call(self, messages, tools=None, callbacks=None, available_functions=None):
    # Convert string to message format
    if isinstance(messages, str):
        messages = [{"role": "user", "content": messages}]

    # Make API call
    response = self._make_api_call(messages, tools)
    message = response["choices"][0]["message"]

    # Check for function calls
    if "tool_calls" in message and available_functions:
        return self._handle_function_calls(
            message["tool_calls"], messages, tools, available_functions
        )

    return message["content"]

def _handle_function_calls(self, tool_calls, messages, tools, available_functions):
    """Handle function calling with proper message flow."""
    for tool_call in tool_calls:
        function_name = tool_call["function"]["name"]

        if function_name in available_functions:
            # Parse and execute function
            function_args = json.loads(tool_call["function"]["arguments"])
            function_result = available_functions[function_name](**function_args)

            # Add function call and result to message history
            messages.append({
                "role": "assistant",
                "content": None,
                "tool_calls": [tool_call]
            })
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call["id"],
                "name": function_name,
                "content": str(function_result)
            })

            # Call LLM again with updated context
            return self.call(messages, tools, None, available_functions)

    return "Function call failed"
```
## Troubleshooting

### Common Issues

**Constructor Errors**
```python
# ❌ Wrong - missing required parameters
def __init__(self, api_key: str):
    super().__init__()

# ✅ Correct
def __init__(self, model: str, api_key: str, temperature: Optional[float] = None):
    super().__init__(model=model, temperature=temperature)
```

**Function Calling Not Working**
- Ensure `supports_function_calling()` returns `True`
- Check that you handle `tool_calls` in the response
- Verify the `available_functions` parameter is used correctly

**Authentication Failures**
- Verify API key format and permissions
- Check authentication header format
- Ensure endpoint URLs are correct

**Response Parsing Errors**
- Validate response structure before accessing nested fields
- Handle cases where content might be None
- Add proper error handling for malformed responses (see the sketch after this list)
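
A minimal defensive-parsing sketch for the OpenAI-style response shape used throughout this guide (a helper sketch, not part of the `BaseLLM` interface):

```python
def _extract_content(self, response_json: dict) -> str:
    """Safely pull the message content out of an OpenAI-style response."""
    choices = response_json.get("choices")
    if not choices:
        raise ValueError(f"Invalid response format: missing 'choices' in {response_json}")

    message = choices[0].get("message") or {}
    content = message.get("content")
    if content is None:
        # Content can be None, e.g. when the model returned only tool calls
        raise ValueError("Response contained no text content")

    return content
```
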
## Testing Your Custom LLM

```python
from crewai import Agent, Task, Crew

def test_custom_llm():
    llm = CustomLLM(
        model="test-model",
        api_key="test-key",
        endpoint="https://api.test.com"
    )

    # Test basic call
    result = llm.call("Hello, world!")
    assert isinstance(result, str)
    assert len(result) > 0

    # Test with CrewAI agent
    agent = Agent(
        role="Test Agent",
        goal="Test custom LLM",
        backstory="A test agent.",
        llm=llm
    )

    task = Task(
        description="Say hello",
        expected_output="A greeting",
        agent=agent
    )

    crew = Crew(agents=[agent], tasks=[task])
    result = crew.kickoff()
    assert "hello" in result.raw.lower()
```

This guide covers the essentials of implementing custom LLMs in CrewAI.

docs/en/learn/custom-manager-agent.mdx (90 lines, new file)

@@ -0,0 +1,90 @@

---
title: Custom Manager Agent
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
icon: user-shield
---

# Setting a Specific Agent as Manager in CrewAI

CrewAI allows users to set a specific agent as the manager of the crew, providing more control over the management and coordination of tasks. This feature enables the customization of the managerial role to better fit your project's requirements.

## Using the `manager_agent` Attribute

### Custom Manager Agent

The `manager_agent` attribute allows you to define a custom agent to manage the crew. This agent will oversee the entire process, ensuring that tasks are completed efficiently and to the highest standard.

### Example

```python Code
import os
from crewai import Agent, Task, Crew, Process

# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Conduct thorough research and analysis on AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
    allow_delegation=False,
)

writer = Agent(
    role="Senior Writer",
    goal="Create compelling content about AI and AI agents",
    backstory="You're a senior writer, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently writing content for a new client.",
    allow_delegation=False,
)

# Define your task
task = Task(
    description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
    expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)

# Define the manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
    allow_delegation=True,
)

# Instantiate your crew with a custom manager
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)

# Start the crew's work
result = crew.kickoff()
```

## Benefits of a Custom Manager Agent

- **Enhanced Control**: Tailor the management approach to fit the specific needs of your project.
- **Improved Coordination**: Ensure efficient task coordination and management by an experienced agent.
- **Customizable Management**: Define managerial roles and responsibilities that align with your project's goals.

## Setting a Manager LLM

If you're using the hierarchical process and don't want to set a custom manager agent, you can specify the language model for the manager:

```python Code
from crewai import LLM

manager_llm = LLM(model="gpt-4o")

crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.hierarchical,
    manager_llm=manager_llm
)
```

<Note>
Either `manager_agent` or `manager_llm` must be set when using the hierarchical process.
</Note>

docs/en/learn/customizing-agents.mdx (111 lines, new file)

@@ -0,0 +1,111 @@

---
title: Customize Agents
description: A comprehensive guide to tailoring agents for specific roles, tasks, and advanced customizations within the CrewAI framework.
icon: user-pen
---

## Customizable Attributes

Crafting an efficient CrewAI team hinges on the ability to dynamically tailor your AI agents to meet the unique requirements of any project. This section covers the foundational attributes you can customize.

### Key Attributes for Customization

| Attribute | Description |
|:----------|:------------|
| **Role** | Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'. |
| **Goal** | Defines the agent's objectives, aligned with its role and the crew's overarching mission. |
| **Backstory** | Provides depth to the agent's persona, enhancing motivations and engagements within the crew. |
| **Tools** *(Optional)* | Represents the capabilities or methods the agent uses for tasks, from simple functions to complex integrations. |
| **Cache** *(Optional)* | Determines if the agent should use a cache for tool usage. |
| **Max RPM** | Sets the maximum requests per minute (`max_rpm`). Can be set to `None` for unlimited requests to external services. |
| **Verbose** *(Optional)* | Enables detailed logging for debugging and optimization, providing insights into execution processes. |
| **Allow Delegation** *(Optional)* | Controls task delegation to other agents; the default is `False`. |
| **Max Iter** *(Optional)* | Limits the maximum number of iterations (`max_iter`) for a task to prevent infinite loops, with a default of 25. |
| **Max Execution Time** *(Optional)* | Sets the maximum time allowed for an agent to complete a task. |
| **System Template** *(Optional)* | Defines the system format for the agent. |
| **Prompt Template** *(Optional)* | Defines the prompt format for the agent. |
| **Response Template** *(Optional)* | Defines the response format for the agent. |
| **Use System Prompt** *(Optional)* | Controls whether the agent will use a system prompt during task execution. |
| **Respect Context Window** | Enables a sliding context window by default, maintaining context size. |
| **Max Retry Limit** | Sets the maximum number of retries (`max_retry_limit`) for an agent in case of errors. |

## Advanced Customization Options

Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.

### Language Model Customization

Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities. It's important to note that setting the `function_calling_llm` allows for overriding the default crew function-calling language model, providing a greater degree of customization, as sketched below.
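
A minimal sketch of assigning both models to an agent (the model names are illustrative):

```python Code
from crewai import Agent, LLM

agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    llm=LLM(model="gpt-4o"),                       # Main model for reasoning and writing
    function_calling_llm=LLM(model="gpt-4o-mini")  # Overrides the crew's default for tool calls
)
```
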
## Performance and Debugging Settings

Adjusting an agent's performance and monitoring its operations are crucial for efficient task execution.

### Verbose Mode and RPM Limit

- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.

### Maximum Iterations for Task Execution

The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions.
The default value is 25, providing a balance between thoroughness and efficiency. As the agent approaches this limit, it will do its best to deliver a good final answer.

## Customizing Agents and Tools

Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks.
The `tools` attribute should be an array of tools the agent can utilize, and it's initialized as an empty list by default. Tools can be added or modified after agent initialization to adapt to new requirements. To use the bundled tools, first install the tools package:

```shell
pip install 'crewai[tools]'
```

### Example: Assigning Tools to an Agent

```python Code
import os
from crewai import Agent
from crewai_tools import SerperDevTool

# Set API keys for tool initialization
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

# Initialize a search tool
search_tool = SerperDevTool()

# Initialize the agent with advanced options
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[search_tool],
    memory=True,   # Enable memory
    verbose=True,
    max_rpm=None,  # No limit on requests per minute
    max_iter=25,   # Default value for maximum iterations
)
```

## Delegation and Autonomy

Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework.
By default, the `allow_delegation` attribute is set to `False`, preventing agents from seeking assistance or delegating tasks. This default can be changed to promote collaborative problem-solving and efficiency within the CrewAI ecosystem: enable delegation when your operational requirements call for it.

### Example: Enabling Delegation for an Agent

```python Code
agent = Agent(
    role='Content Writer',
    goal='Write engaging content on market trends',
    backstory='A seasoned writer with expertise in market analysis.',
    allow_delegation=True  # Enabling delegation
)
```

## Conclusion

Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, performance settings, and delegation preferences, equips you with a nuanced and capable AI team ready for complex challenges.

73
docs/en/learn/dalle-image-generation.mdx
Normal file
@@ -0,0 +1,73 @@
---
title: "Image Generation with DALL-E"
description: "Learn how to use DALL-E for AI-powered image generation in your CrewAI projects"
icon: "image"
---

CrewAI supports integration with OpenAI's DALL-E, allowing your AI agents to generate images as part of their tasks. This guide will walk you through how to set up and use the DALL-E tool in your CrewAI projects.

## Prerequisites

- crewAI installed (latest version)
- OpenAI API key with access to DALL-E

## Setting Up the DALL-E Tool

<Steps>
  <Step title="Import the DALL-E tool">
    ```python
    from crewai_tools import DallETool
    ```
  </Step>

  <Step title="Add the DALL-E tool to your agent configuration">
    ```python
    # SerperDevTool is also imported from crewai_tools
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            tools=[SerperDevTool(), DallETool()],  # Add DallETool to the list of tools
            allow_delegation=False,
            verbose=True
        )
    ```
  </Step>
</Steps>

## Using the DALL-E Tool

Once you've added the DALL-E tool to your agent, it can generate images based on text prompts. The tool returns a URL to the generated image, which can be used in the agent's output or passed to other agents for further processing.

### Example Agent Configuration

```yaml
role: >
  LinkedIn Profile Senior Data Researcher
goal: >
  Uncover detailed LinkedIn profiles based on provided name {name} and domain {domain}
  Generate a DALL-E image based on domain {domain}
backstory: >
  You're a seasoned researcher with a knack for uncovering the most relevant LinkedIn profiles.
  Known for your ability to navigate LinkedIn efficiently, you excel at gathering and presenting
  professional information clearly and concisely.
```

### Expected Output

The agent with the DALL-E tool will be able to generate the image and provide a URL in its response. You can then download the image.

<Frame>
  <img src="/images/enterprise/dall-e-image.png" alt="DALL-E Image" />
</Frame>

## Best Practices

1. **Be specific in your image generation prompts** to get the best results.
2. **Consider generation time** - Image generation can take some time, so factor this into your task planning.
3. **Follow usage policies** - Always comply with OpenAI's usage policies when generating images.

## Troubleshooting

1. **Check API access** - Ensure your OpenAI API key has access to DALL-E.
2. **Version compatibility** - Check that you're using the latest version of crewAI and crewai-tools.
3. **Tool configuration** - Verify that the DALL-E tool is correctly added to the agent's tool list.

50
docs/en/learn/force-tool-output-as-result.mdx
Normal file
@@ -0,0 +1,50 @@
---
title: Force Tool Output as Result
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
icon: wrench-simple
---

## Introduction

In CrewAI, you can force the output of a tool as the result of an agent's task.
This feature is useful when you want to ensure that the tool output is captured and returned as the task result, avoiding any agent modification during the task execution.

## Forcing Tool Output as Result

To force the tool output as the result of an agent's task, you need to set the `result_as_answer` parameter to `True` when adding a tool to the agent.
This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.

Here's an example of how to force the tool output as the result of an agent's task:

```python Code
from crewai.agent import Agent
from my_tool import MyCustomTool

# Create a coding agent with the custom tool
coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

# Assuming the tool's execution and result population occurs within the system
task_result = coding_agent.execute_task(task)
```

## Workflow in Action

<Steps>
  <Step title="Task Execution">
    The agent executes the task using the tool provided.
  </Step>
  <Step title="Tool Output">
    The tool generates the output, which is captured as the task result.
  </Step>
  <Step title="Agent Interaction">
    The agent may reflect on and learn from the tool's output, but the output itself is not modified.
  </Step>
  <Step title="Result Return">
    The tool output is returned as the task result without any modifications.
  </Step>
</Steps>

112
docs/en/learn/hierarchical-process.mdx
Normal file
@@ -0,0 +1,112 @@
---
title: Hierarchical Process
description: A comprehensive guide to understanding and applying the hierarchical process within your CrewAI projects, updated to reflect the latest coding practices and functionalities.
icon: sitemap
---

## Introduction

The hierarchical process in CrewAI introduces a structured approach to task management, simulating traditional organizational hierarchies for efficient task delegation and execution.
This systematic workflow enhances project outcomes by ensuring tasks are handled with optimal efficiency and accuracy.

<Tip>
  The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
</Tip>

## Hierarchical Process Overview

By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management,
where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent can now be either
automatically created by CrewAI or explicitly set by the user.

### Key Features

- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
- **System Prompt Handling**: Optionally specify whether the system should use predefined prompts.
- **Stop Words Control**: Optionally specify whether stop words should be used, supporting various models including the o1 models.
- **Context Window Respect**: Prioritize important context by enabling respect of the context window, which is now the default behavior.
- **Delegation Control**: Delegation is now disabled by default to give users explicit control.
- **Max Requests Per Minute**: Configurable option to set the maximum number of requests per minute.
- **Max Iterations**: Limit the maximum number of iterations for obtaining a final answer.

## Implementing the Hierarchical Process

To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`.
Define a crew with a designated manager and establish a clear chain of command.

<Tip>
  Assign tools at the agent level to facilitate task delegation and execution by the designated agents under the manager's guidance.
  Tools can also be specified at the task level for precise control over tool availability during task execution.
</Tip>

<Tip>
  Configuring the `manager_llm` parameter is crucial for the hierarchical process.
  The system requires a manager LLM to be set up for proper function, ensuring tailored decision-making.
</Tip>

```python Code
from crewai import Crew, Process, Agent

# Agents are defined with attributes for backstory, cache, and verbose mode
researcher = Agent(
    role='Researcher',
    goal='Conduct in-depth analysis',
    backstory='Experienced data analyst with a knack for uncovering hidden trends.',
)
writer = Agent(
    role='Writer',
    goal='Create engaging content',
    backstory='Creative writer passionate about storytelling in technical domains.',
)

# Establishing the crew with a hierarchical process and additional configurations
project_crew = Crew(
    tasks=[...],  # Tasks to be delegated and executed under the manager's supervision
    agents=[researcher, writer],
    manager_llm="gpt-4o",  # Specify which LLM the manager should use
    process=Process.hierarchical,
    planning=True,
)
```

### Using a Custom Manager Agent

Alternatively, you can create a custom manager agent with specific attributes tailored to your project's management needs. This gives you more control over the manager's behavior and capabilities.

```python
# Define a custom manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success.",
    allow_delegation=True,
)

# Use the custom manager in your crew
project_crew = Crew(
    tasks=[...],
    agents=[researcher, writer],
    manager_agent=manager,  # Use your custom manager agent
    process=Process.hierarchical,
    planning=True,
)
```

<Tip>
  For more details on creating and customizing a manager agent, check out the [Custom Manager Agent documentation](https://docs.crewai.com/how-to/custom-manager-agent#custom-manager-agent).
</Tip>

### Workflow in Action

1. **Task Assignment**: The manager assigns tasks strategically, considering each agent's capabilities and available tools.
2. **Execution and Review**: Agents complete their tasks with the option for asynchronous execution and callback functions for streamlined workflows.
3. **Sequential Task Progression**: Despite being a hierarchical process, tasks follow a logical order for smooth progression, facilitated by the manager's oversight.

## Conclusion

Adopting the hierarchical process in CrewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.
Utilize the advanced features and customizations to tailor the workflow to your specific needs, ensuring optimal task execution and project success.

78
docs/en/learn/human-in-the-loop.mdx
Normal file
@@ -0,0 +1,78 @@
---
title: "Human-in-the-Loop (HITL) Workflows"
description: "Learn how to implement Human-in-the-Loop workflows in CrewAI for enhanced decision-making"
icon: "user-check"
---

Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.

## Setting Up HITL Workflows

<Steps>
  <Step title="Configure Your Task">
    Set up your task with human input enabled:
    <Frame>
      <img src="/images/enterprise/crew-human-input.png" alt="Crew Human Input" />
    </Frame>
  </Step>

  <Step title="Provide Webhook URL">
    When kicking off your crew, include a webhook URL for human input:
    <Frame>
      <img src="/images/enterprise/crew-webhook-url.png" alt="Crew Webhook URL" />
    </Frame>
  </Step>

  <Step title="Receive Webhook Notification">
    Once the crew completes the task requiring human input, you'll receive a webhook notification containing the following (a rough payload sketch follows the list):
    - Execution ID
    - Task ID
    - Task output
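    As an illustration only (the exact payload shape and field names depend on your deployment and are not guaranteed by this guide), the notification carries the three items listed above:

    ```python
    # Hypothetical payload sketch; "execution_id", "task_id", and "task_output"
    # are illustrative names for the Execution ID, Task ID, and Task output fields.
    payload = {
        "execution_id": "<execution-id>",
        "task_id": "<task-id>",
        "task_output": "<the task's output text>",
    }
    ```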
  </Step>

  <Step title="Review Task Output">
    The system will pause in the `Pending Human Input` state. Review the task output carefully.
  </Step>

  <Step title="Submit Human Feedback">
    Call the resume endpoint of your crew with the following information:
    <Frame>
      <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
    </Frame>
    <Warning>
      **Feedback Impact on Task Execution**:
      It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
    </Warning>
    This means:
    - All information in your feedback becomes part of the task's context.
    - Irrelevant details may negatively influence task execution.
    - Concise, relevant feedback helps maintain task focus and efficiency.
    - Always review your feedback carefully before submission to ensure it contains only pertinent information that will positively guide the task's execution.
  </Step>
  <Step title="Handle Negative Feedback">
    If you provide negative feedback:
    - The crew will retry the task with added context from your feedback.
    - You'll receive another webhook notification for further review.
    - Repeat steps 4-6 until satisfied.
  </Step>

  <Step title="Execution Continuation">
    When you submit positive feedback, the execution will proceed to the next steps.
  </Step>
</Steps>

## Best Practices

- **Be Specific**: Provide clear, actionable feedback that directly addresses the task at hand
- **Stay Relevant**: Only include information that will help improve the task execution
- **Be Timely**: Respond to HITL prompts promptly to avoid workflow delays
- **Review Carefully**: Double-check your feedback before submitting to ensure accuracy

## Common Use Cases

HITL workflows are particularly valuable for:
- Quality assurance and validation
- Complex decision-making scenarios
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews

98
docs/en/learn/human-input-on-execution.mdx
Normal file
@@ -0,0 +1,98 @@
---
title: Human Input on Execution
description: Integrating CrewAI with human input during execution in complex decision-making processes and leveraging the full capabilities of the agent's attributes and tools.
icon: user-check
---

## Human input in agent execution

Human input is critical in several agent execution scenarios, allowing agents to request additional information or clarification when necessary.
This feature is especially useful in complex decision-making processes or when agents require more details to complete a task effectively.

## Using human input with CrewAI

To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer.
This input can provide extra context, clarify ambiguities, or validate the agent's output.

### Example:

```shell
pip install crewai
```

```python Code
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

os.environ["SERPER_API_KEY"] = "Your Key"  # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"

# Loading Tools
search_tool = SerperDevTool()

# Define your agents with roles, goals, tools, and additional attributes
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory=(
        "You are a Senior Research Analyst at a leading tech think tank. "
        "Your expertise lies in identifying emerging trends and technologies in AI and data science. "
        "You have a knack for dissecting complex data and presenting actionable insights."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool]
)
writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory=(
        "You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation. "
        "With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
    ),
    verbose=True,
    allow_delegation=True,
    tools=[search_tool],
    cache=False,  # Disable cache for this agent
)

# Create tasks for your agents
task1 = Task(
    description=(
        "Conduct a comprehensive analysis of the latest advancements in AI in 2025. "
        "Identify key trends, breakthrough technologies, and potential industry impacts. "
        "Compile your findings in a detailed report. "
        "Make sure to check with a human if the draft is good before finalizing your answer."
    ),
    expected_output='A comprehensive full report on the latest AI advancements in 2025, leave nothing out',
    agent=researcher,
    human_input=True
)

task2 = Task(
    description=(
        "Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements. "
        "Your post should be informative yet accessible, catering to a tech-savvy audience. "
        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output='A compelling 3-paragraph blog post formatted as markdown about the latest AI advancements in 2025',
    agent=writer,
    human_input=True
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=True,
    memory=True,
    planning=True  # Enable planning feature for the crew
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```

123
docs/en/learn/kickoff-async.mdx
Normal file
@@ -0,0 +1,123 @@
---
title: Kickoff Crew Asynchronously
description: Kickoff a Crew Asynchronously
icon: rocket-launch
---

## Introduction

CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner.
This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.

## Asynchronous Crew Execution

To kickoff a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks. Because it is a coroutine, it must be awaited, as shown in the examples below.

### Method Signature

```python Code
async def kickoff_async(self, inputs: dict) -> CrewOutput:
```

### Parameters

- `inputs` (dict): A dictionary containing the input data required for the tasks.

### Returns

- `CrewOutput`: An object representing the result of the crew execution.

## Potential Use Cases

- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.

- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.

- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.

## Example: Single Asynchronous Crew Execution

Here's an example of how to kickoff a crew asynchronously using asyncio and awaiting the result:

```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Async function to kickoff the crew asynchronously
async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

# Run the async function
asyncio.run(async_crew_execution())
```

## Example: Multiple Asynchronous Crew Executions

In this example, we'll show how to kickoff multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:

```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create tasks that require code execution
task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
    # Create coroutines for concurrent execution
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})

    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)

    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

# Run the async function
asyncio.run(async_multiple_crews())
```

53
docs/en/learn/kickoff-for-each.mdx
Normal file
@@ -0,0 +1,53 @@
---
title: Kickoff Crew for Each
description: Kickoff Crew for Each Item in a List
icon: at
---

## Introduction

CrewAI provides the ability to kickoff a crew for each item in a list, running the same crew once per item.
This feature is particularly useful when you need to perform the same set of tasks for multiple items.

## Kicking Off a Crew for Each Item

To kickoff a crew for each item in a list, use the `kickoff_for_each()` method.
This method executes the crew for each item in the list, allowing you to process multiple items efficiently.

Here's an example of how to kickoff a crew for each item in a list:

```python Code
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age calculated from the dataset"
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task],
    verbose=True,
    memory=False
)

datasets = [
    { "ages": [25, 30, 35, 40, 45] },
    { "ages": [20, 25, 30, 35, 40] },
    { "ages": [30, 35, 40, 45, 50] }
]

# Execute the crew
result = analysis_crew.kickoff_for_each(inputs=datasets)
```

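`kickoff_for_each()` returns one result per input item, so you can iterate over the outputs afterwards. A minimal sketch, assuming each element behaves like a regular crew output with a `raw` attribute:

```python Code
# One output per dataset, in the same order as the inputs
for i, output in enumerate(result, 1):
    print(f"Dataset {i} average age:", output.raw)
```
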
204
docs/en/learn/llm-connections.mdx
Normal file
@@ -0,0 +1,204 @@
---
title: Connect to any LLM
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
icon: brain-circuit
---

## Connect CrewAI to LLMs

CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.

<Note>
  By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set.
  You can easily configure your agents to use a different model or provider as described in this guide.
</Note>

## Supported Providers

LiteLLM supports a wide range of providers, including but not limited to:

- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- SambaNova
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more!

For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).

## Changing the LLM

To use a different LLM with your CrewAI agents, you have several options:

<Tabs>
  <Tab title="Using a String Identifier">
    Pass the model name as a string when initializing the agent:
    <CodeGroup>
    ```python Code
    from crewai import Agent

    # Using OpenAI's GPT-4
    openai_agent = Agent(
        role='OpenAI Expert',
        goal='Provide insights using GPT-4',
        backstory="An AI assistant powered by OpenAI's latest model.",
        llm='gpt-4'
    )

    # Using Anthropic's Claude
    claude_agent = Agent(
        role='Anthropic Expert',
        goal='Analyze data using Claude',
        backstory="An AI assistant leveraging Anthropic's language model.",
        llm='claude-2'
    )
    ```
    </CodeGroup>
  </Tab>
  <Tab title="Using the LLM Class">
    For more detailed configuration, use the LLM class:
    <CodeGroup>
    ```python Code
    from crewai import Agent, LLM

    llm = LLM(
        model="gpt-4",
        temperature=0.7,
        base_url="https://api.openai.com/v1",
        api_key="your-api-key-here"
    )

    agent = Agent(
        role='Customized LLM Expert',
        goal='Provide tailored responses',
        backstory="An AI assistant with custom LLM settings.",
        llm=llm
    )
    ```
    </CodeGroup>
  </Tab>
</Tabs>

## Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|:----------|:-----:|:-------------|
| **model** | `str` | The name of the model to use (e.g., "gpt-4", "claude-2") |
| **temperature** | `float` | Controls randomness in output (0.0 to 1.0) |
| **max_tokens** | `int` | Maximum number of tokens to generate |
| **top_p** | `float` | Controls diversity of output (0.0 to 1.0) |
| **frequency_penalty** | `float` | Penalizes new tokens based on their frequency in the text so far |
| **presence_penalty** | `float` | Penalizes new tokens based on their presence in the text so far |
| **stop** | `str`, `List[str]` | Sequence(s) to stop generation |
| **base_url** | `str` | The base URL for the API endpoint |
| **api_key** | `str` | Your API key for authentication |

For a complete list of parameters and their descriptions, refer to the LLM class documentation.

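As an illustration, several of these parameters can be combined in a single configuration (the values here are arbitrary examples, not recommendations):

```python Code
from crewai import LLM

llm = LLM(
    model="gpt-4",
    temperature=0.3,   # fairly deterministic output
    max_tokens=1024,   # cap the response length
    stop=["END"],      # stop generation at this sequence
)
```
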
## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

<Tabs>
  <Tab title="Using Environment Variables">
    <CodeGroup>
    ```python Generic
    import os

    os.environ["OPENAI_API_KEY"] = "your-api-key"
    os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
    os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
    ```

    ```python Google
    import os

    # Example using Gemini's OpenAI-compatible API.
    os.environ["OPENAI_API_KEY"] = "your-gemini-key"  # Should start with AIza...
    os.environ["OPENAI_API_BASE"] = "https://generativelanguage.googleapis.com/v1beta/openai/"
    os.environ["OPENAI_MODEL_NAME"] = "openai/gemini-2.0-flash"  # Add your Gemini model here, under openai/
    ```
    </CodeGroup>
  </Tab>
  <Tab title="Using LLM Class Attributes">
    <CodeGroup>
    ```python Generic
    llm = LLM(
        model="custom-model-name",
        api_key="your-api-key",
        base_url="https://api.your-provider.com/v1"
    )
    agent = Agent(llm=llm, ...)
    ```

    ```python Google
    # Example using Gemini's OpenAI-compatible API
    llm = LLM(
        model="openai/gemini-2.0-flash",
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
        api_key="your-gemini-key",  # Should start with AIza...
    )
    agent = Agent(llm=llm, ...)
    ```
    </CodeGroup>
  </Tab>
</Tabs>

## Using Local Models with Ollama

For local models like those provided by Ollama:

<Steps>
  <Step title="Download and install Ollama">
    [Click here to download and install Ollama](https://ollama.com/download)
  </Step>
  <Step title="Pull the desired model">
    For example, run `ollama pull llama3.2` to download the model.
  </Step>
  <Step title="Configure your agent">
    <CodeGroup>
    ```python Code
    agent = Agent(
        role='Local AI Expert',
        goal='Process information using a local model',
        backstory="An AI assistant running on local hardware.",
        llm=LLM(model="ollama/llama3.2", base_url="http://localhost:11434")
    )
    ```
    </CodeGroup>
  </Step>
</Steps>

## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Conclusion

By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.

729
docs/en/learn/llm-selection-guide.mdx
Normal file
@@ -0,0 +1,729 @@
---
title: 'Strategic LLM Selection Guide'
description: 'Strategic framework for choosing the right LLM for your CrewAI AI agents and writing effective task and agent definitions'
icon: 'brain-circuit'
---

## The CrewAI Approach to LLM Selection

Rather than prescriptive model recommendations, we advocate for a **thinking framework** that helps you make informed decisions based on your specific use case, constraints, and requirements. The LLM landscape evolves rapidly, with new models emerging regularly and existing ones being updated frequently. What matters most is developing a systematic approach to evaluation that remains relevant regardless of which specific models are available.

<Note>
  This guide focuses on strategic thinking rather than specific model recommendations, as the LLM landscape evolves rapidly.
</Note>

## Quick Decision Framework

<Steps>
  <Step title="Analyze Your Tasks">
    Begin by deeply understanding what your tasks actually require. Consider the cognitive complexity involved, the depth of reasoning needed, the format of expected outputs, and the amount of context the model will need to process. This foundational analysis will guide every subsequent decision.
  </Step>
  <Step title="Map Model Capabilities">
    Once you understand your requirements, map them to model strengths. Different model families excel at different types of work; some are optimized for reasoning and analysis, others for creativity and content generation, and others for speed and efficiency.
  </Step>
  <Step title="Consider Constraints">
    Factor in your real-world operational constraints including budget limitations, latency requirements, data privacy needs, and infrastructure capabilities. The theoretically best model may not be the practically best choice for your situation.
  </Step>
  <Step title="Test and Iterate">
    Start with reliable, well-understood models and optimize based on actual performance in your specific use case. Real-world results often differ from theoretical benchmarks, so empirical testing is crucial.
  </Step>
</Steps>

## Core Selection Framework

### a. Task-First Thinking

The most critical step in LLM selection is understanding what your task actually demands. Too often, teams select models based on general reputation or benchmark scores without carefully analyzing their specific requirements. This approach leads to either over-engineering simple tasks with expensive, complex models, or under-powering sophisticated work with models that lack the necessary capabilities.

<Tabs>
  <Tab title="Reasoning Complexity">
    - **Simple Tasks** represent the majority of everyday AI work and include basic instruction following, straightforward data processing, and simple formatting operations. These tasks typically have clear inputs and outputs with minimal ambiguity. The cognitive load is low, and the model primarily needs to follow explicit instructions rather than engage in complex reasoning.

    - **Complex Tasks** require multi-step reasoning, strategic thinking, and the ability to handle ambiguous or incomplete information. These might involve analyzing multiple data sources, developing comprehensive strategies, or solving problems that require breaking down into smaller components. The model needs to maintain context across multiple reasoning steps and often must make inferences that aren't explicitly stated.

    - **Creative Tasks** demand a different type of cognitive capability focused on generating novel, engaging, and contextually appropriate content. This includes storytelling, marketing copy creation, and creative problem-solving. The model needs to understand nuance, tone, and audience while producing content that feels authentic and engaging rather than formulaic.
  </Tab>

  <Tab title="Output Requirements">
    - **Structured Data** tasks require precision and consistency in format adherence. When working with JSON, XML, or database formats, the model must reliably produce syntactically correct output that can be programmatically processed. These tasks often have strict validation requirements and little tolerance for format errors, making reliability more important than creativity.

    - **Creative Content** outputs demand a balance of technical competence and creative flair. The model needs to understand audience, tone, and brand voice while producing content that engages readers and achieves specific communication goals. Quality here is often subjective and requires models that can adapt their writing style to different contexts and purposes.

    - **Technical Content** sits between structured data and creative content, requiring both precision and clarity. Documentation, code generation, and technical analysis need to be accurate and comprehensive while remaining accessible to the intended audience. The model must understand complex technical concepts and communicate them effectively.
  </Tab>

  <Tab title="Context Needs">
    - **Short Context** scenarios involve focused, immediate tasks where the model needs to process limited information quickly. These are often transactional interactions where speed and efficiency matter more than deep understanding. The model doesn't need to maintain extensive conversation history or process large documents.

    - **Long Context** requirements emerge when working with substantial documents, extended conversations, or complex multi-part tasks. The model needs to maintain coherence across thousands of tokens while referencing earlier information accurately. This capability becomes crucial for document analysis, comprehensive research, and sophisticated dialogue systems.

    - **Very Long Context** scenarios push the boundaries of what's currently possible, involving massive document processing, extensive research synthesis, or complex multi-session interactions. These use cases require models specifically designed for extended context handling and often involve trade-offs between context length and processing speed.
  </Tab>
</Tabs>

### b. Model Capability Mapping

Understanding model capabilities requires looking beyond marketing claims and benchmark scores to understand the fundamental strengths and limitations of different model architectures and training approaches.

<AccordionGroup>
  <Accordion title="Reasoning Models" icon="brain">
    Reasoning models represent a specialized category designed specifically for complex, multi-step thinking tasks. These models excel when problems require careful analysis, strategic planning, or systematic problem decomposition. They typically employ techniques like chain-of-thought reasoning or tree-of-thought processing to work through complex problems step by step.

    The strength of reasoning models lies in their ability to maintain logical consistency across extended reasoning chains and to break down complex problems into manageable components. They're particularly valuable for strategic planning, complex analysis, and situations where the quality of reasoning matters more than speed of response.

    However, reasoning models often come with trade-offs in terms of speed and cost. They may also be less suitable for creative tasks or simple operations where their sophisticated reasoning capabilities aren't needed. Consider these models when your tasks involve genuine complexity that benefits from systematic, step-by-step analysis.
  </Accordion>

  <Accordion title="General Purpose Models" icon="microchip">
    General purpose models offer the most balanced approach to LLM selection, providing solid performance across a wide range of tasks without extreme specialization in any particular area. These models are trained on diverse datasets and optimized for versatility rather than peak performance in specific domains.

    The primary advantage of general purpose models is their reliability and predictability across different types of work. They handle most standard business tasks competently, from research and analysis to content creation and data processing. This makes them excellent choices for teams that need consistent performance across varied workflows.

    While general purpose models may not achieve the peak performance of specialized alternatives in specific domains, they offer operational simplicity and reduced complexity in model management. They're often the best starting point for new projects, allowing teams to understand their specific needs before potentially optimizing with more specialized models.
  </Accordion>

  <Accordion title="Fast & Efficient Models" icon="bolt">
    Fast and efficient models prioritize speed, cost-effectiveness, and resource efficiency over sophisticated reasoning capabilities. These models are optimized for high-throughput scenarios where quick responses and low operational costs are more important than nuanced understanding or complex reasoning.

    These models excel in scenarios involving routine operations, simple data processing, function calling, and high-volume tasks where the cognitive requirements are relatively straightforward. They're particularly valuable for applications that need to process many requests quickly or operate within tight budget constraints.

    The key consideration with efficient models is ensuring that their capabilities align with your task requirements. While they can handle many routine operations effectively, they may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation. They're best used for well-defined, routine operations where speed and cost matter more than sophistication.
  </Accordion>

  <Accordion title="Creative Models" icon="pen">
    Creative models are specifically optimized for content generation, writing quality, and creative thinking tasks. These models typically excel at understanding nuance, tone, and style while producing engaging, contextually appropriate content that feels natural and authentic.

    The strength of creative models lies in their ability to adapt writing style to different audiences, maintain consistent voice and tone, and generate content that engages readers effectively. They often perform better on tasks involving storytelling, marketing copy, brand communications, and other content where creativity and engagement are primary goals.

    When selecting creative models, consider not just their ability to generate text, but their understanding of audience, context, and purpose. The best creative models can adapt their output to match specific brand voices, target different audience segments, and maintain consistency across extended content pieces.
  </Accordion>

  <Accordion title="Open Source Models" icon="code">
    Open source models offer unique advantages in terms of cost control, customization potential, data privacy, and deployment flexibility. These models can be run locally or on private infrastructure, providing complete control over data handling and model behavior.

    The primary benefits of open source models include elimination of per-token costs, ability to fine-tune for specific use cases, complete data privacy, and independence from external API providers. They're particularly valuable for organizations with strict data privacy requirements, budget constraints, or specific customization needs.

    However, open source models require more technical expertise to deploy and maintain effectively. Teams need to consider infrastructure costs, model management complexity, and the ongoing effort required to keep models updated and optimized. The total cost of ownership may be higher than cloud-based alternatives when factoring in technical overhead.
  </Accordion>
</AccordionGroup>

## Strategic Configuration Patterns

### a. Multi-Model Approach

<Tip>
  Use different models for different purposes within the same crew to optimize both performance and cost.
</Tip>

The most sophisticated CrewAI implementations often employ multiple models strategically, assigning different models to different agents based on their specific roles and requirements. This approach allows teams to optimize for both performance and cost by using the most appropriate model for each type of work.

Planning agents benefit from reasoning models that can handle complex strategic thinking and multi-step analysis. These agents often serve as the "brain" of the operation, developing strategies and coordinating other agents' work. Content agents, on the other hand, perform best with creative models that excel at writing quality and audience engagement. Processing agents handling routine operations can use efficient models that prioritize speed and cost-effectiveness.

**Example: Research and Analysis Crew**

```python
from crewai import Agent, Task, Crew, LLM

# High-capability reasoning model for strategic planning
manager_llm = LLM(model="gemini-2.5-flash-preview-05-20", temperature=0.1)

# Creative model for content generation
content_llm = LLM(model="claude-3-5-sonnet-20241022", temperature=0.7)

# Efficient model for data processing
processing_llm = LLM(model="gpt-4o-mini", temperature=0)

research_manager = Agent(
    role="Research Strategy Manager",
    goal="Develop comprehensive research strategies and coordinate team efforts",
    backstory="Expert research strategist with deep analytical capabilities",
    llm=manager_llm,  # High-capability model for complex reasoning
    verbose=True
)

content_writer = Agent(
    role="Research Content Writer",
    goal="Transform research findings into compelling, well-structured reports",
    backstory="Skilled writer who excels at making complex topics accessible",
    llm=content_llm,  # Creative model for engaging content
    verbose=True
)

data_processor = Agent(
    role="Data Analysis Specialist",
    goal="Extract and organize key data points from research sources",
    backstory="Detail-oriented analyst focused on accuracy and efficiency",
    llm=processing_llm,  # Fast, cost-effective model for routine tasks
    verbose=True
)

crew = Crew(
    agents=[research_manager, content_writer, data_processor],
    tasks=[...],  # Your specific tasks
    manager_llm=manager_llm,  # Manager uses the reasoning model
    verbose=True
)
```

The key to successful multi-model implementation is understanding how different agents interact and ensuring that model capabilities align with agent responsibilities. This requires careful planning but can result in significant improvements in both output quality and operational efficiency.

### b. Component-Specific Selection

<Tabs>
  <Tab title="Manager LLM">
    The manager LLM plays a crucial role in hierarchical CrewAI processes, serving as the coordination point for multiple agents and tasks. This model needs to excel at delegation, task prioritization, and maintaining context across multiple concurrent operations.

    Effective manager LLMs require strong reasoning capabilities to make good delegation decisions, consistent performance to ensure predictable coordination, and excellent context management to track the state of multiple agents simultaneously. The model needs to understand the capabilities and limitations of different agents while optimizing task allocation for efficiency and quality.

    Cost considerations are particularly important for manager LLMs since they're involved in every operation. The model needs to provide sufficient capability for effective coordination while remaining cost-effective for frequent use. This often means finding models that offer good reasoning capabilities without the premium pricing of the most sophisticated options.
  </Tab>

  <Tab title="Function Calling LLM">
    Function calling LLMs handle tool usage across all agents, making them critical for crews that rely heavily on external tools and APIs. These models need to excel at understanding tool capabilities, extracting parameters accurately, and handling tool responses effectively.

    The most important characteristics for function calling LLMs are precision and reliability rather than creativity or sophisticated reasoning. The model needs to consistently extract the correct parameters from natural language requests and handle tool responses appropriately. Speed is also important since tool usage often involves multiple round trips that can impact overall performance.

    Many teams find that specialized function calling models or general purpose models with strong tool support work better than creative or reasoning-focused models for this role. The key is ensuring that the model can reliably bridge the gap between natural language instructions and structured tool calls.
  </Tab>

  <Tab title="Agent-Specific Overrides">
    Individual agents can override crew-level LLM settings when their specific needs differ significantly from the general crew requirements. This capability allows for fine-tuned optimization while maintaining operational simplicity for most agents.

    Consider agent-specific overrides when an agent's role requires capabilities that differ substantially from other crew members. For example, a creative writing agent might benefit from a model optimized for content generation, while a data analysis agent might perform better with a reasoning-focused model.

    The challenge with agent-specific overrides is balancing optimization with operational complexity. Each additional model adds complexity to deployment, monitoring, and cost management. Teams should focus overrides on agents where the performance improvement justifies the additional complexity.
  </Tab>
</Tabs>

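To make the function-calling role concrete, here is a minimal sketch of wiring a dedicated function-calling model at the crew level (the model choice and parameters are illustrative, not recommendations):

```python
from crewai import Crew, LLM

crew = Crew(
    agents=[...],  # your agents
    tasks=[...],   # your tasks
    function_calling_llm=LLM(model="gpt-4o-mini", temperature=0),  # precise, low-cost tool calls
)
```
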
## Task Definition Framework
|
||||
|
||||
### a. Focus on Clarity Over Complexity
|
||||
|
||||
Effective task definition is often more important than model selection in determining the quality of CrewAI outputs. Well-defined tasks provide clear direction and context that enable even modest models to perform well, while poorly defined tasks can cause even sophisticated models to produce unsatisfactory results.
|
||||
|
||||
<AccordionGroup>
<Accordion title="Effective Task Descriptions" icon="list-check">
The best task descriptions strike a balance between providing sufficient detail and maintaining clarity. They should define the specific objective clearly enough that there's no ambiguity about what success looks like, while explaining the approach or methodology in enough detail that the agent understands how to proceed.

Effective task descriptions include relevant context and constraints that help the agent understand the broader purpose and any limitations they need to work within. They break complex work into focused steps that can be executed systematically, rather than presenting overwhelming, multi-faceted objectives that are difficult to approach.

Common mistakes include being too vague about objectives, failing to provide necessary context, setting unclear success criteria, or combining multiple unrelated tasks into a single description. The goal is to provide enough information for the agent to succeed while maintaining focus on a single, clear objective.
</Accordion>

<Accordion title="Expected Output Guidelines" icon="bullseye">
Expected output guidelines serve as a contract between the task definition and the agent, clearly specifying what the deliverable should look like and how it will be evaluated. These guidelines should describe both the format and structure needed, as well as the key elements that must be included for the output to be considered complete.

The best output guidelines provide concrete examples of quality indicators and define completion criteria clearly enough that both the agent and human reviewers can assess whether the task has been completed successfully. This reduces ambiguity and helps ensure consistent results across multiple task executions.

Avoid generic output descriptions that could apply to any task, missing format specifications that leave agents guessing about structure, unclear quality standards that make evaluation difficult, and guidelines that lack examples or templates to help agents understand expectations.
</Accordion>
</AccordionGroup>
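For illustration, here is a minimal sketch of a task written to these guidelines; the domain, wording, and `research_agent` are placeholders rather than a prescribed template:

```python
from crewai import Task

competitor_brief = Task(
    description=(
        "Research the top 3 competitors in the project management SaaS space. "
        "For each, summarize pricing, target segment, and one key differentiator. "
        "Use only sources published in the last 12 months."
    ),
    expected_output=(
        "A markdown brief with one section per competitor, each covering "
        "pricing, target segment, and differentiator, with cited sources."
    ),
    agent=research_agent  # placeholder agent defined elsewhere
)
```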
### b. Task Sequencing Strategy

<Tabs>
<Tab title="Sequential Dependencies">
Sequential task dependencies are essential when tasks build upon previous outputs, information flows from one task to another, or quality depends on the completion of prerequisite work. This approach ensures that each task has access to the information and context it needs to succeed.

Implementing sequential dependencies effectively requires using the `context` parameter to chain related tasks, building complexity gradually through task progression, and ensuring that each task produces outputs that serve as meaningful inputs for subsequent tasks. The goal is to maintain logical flow between dependent tasks while avoiding unnecessary bottlenecks (see the sketch after these tabs).

Sequential dependencies work best when there's a clear logical progression from one task to another and when the output of one task genuinely improves the quality or feasibility of subsequent tasks. However, they can create bottlenecks if not managed carefully, so it's important to identify which dependencies are truly necessary versus those that are merely convenient.
</Tab>

<Tab title="Parallel Execution">
Parallel execution becomes valuable when tasks are independent of each other, time efficiency is important, or different expertise areas are involved that don't require coordination. This approach can significantly reduce overall execution time while allowing specialized agents to work on their areas of strength simultaneously.

Successful parallel execution requires identifying tasks that can truly run independently, grouping related but separate work streams effectively, and planning for result integration when parallel tasks need to be combined into a final deliverable. The key is ensuring that parallel tasks don't create conflicts or redundancies that reduce overall quality.

Consider parallel execution when you have multiple independent research streams, different types of analysis that don't depend on each other, or content creation tasks that can be developed simultaneously. However, be mindful of resource allocation and ensure that parallel execution doesn't overwhelm your available model capacity or budget.
</Tab>
</Tabs>
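A minimal sketch of both patterns, using placeholder agents and descriptions: `context` chains dependent tasks, while `async_execution=True` lets independent tasks run in parallel.

```python
from crewai import Task

market_research = Task(
    description="Research market size for product X",
    expected_output="Market size estimates with sources",
    agent=researcher_a,
    async_execution=True  # independent, safe to parallelize
)
competitor_research = Task(
    description="Profile the top 3 competitors of product X",
    expected_output="One short profile per competitor",
    agent=researcher_b,
    async_execution=True  # independent, safe to parallelize
)
synthesis = Task(
    description="Combine the research into a positioning brief",
    expected_output="A one-page positioning brief",
    agent=analyst,
    context=[market_research, competitor_research]  # sequential dependency on both
)
```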
## Optimizing Agent Configuration for LLM Performance

### a. Role-Driven LLM Selection

<Warning>
Generic agent roles make it impossible to select the right LLM. Specific roles enable targeted model optimization.
</Warning>

The specificity of your agent roles directly determines which LLM capabilities matter most for optimal performance. This creates a strategic opportunity to match precise model strengths with agent responsibilities.

**Generic vs. Specific Role Impact on LLM Choice:**

When defining roles, think about the specific domain knowledge, working style, and decision-making frameworks that would be most valuable for the tasks the agent will handle. The more specific and contextual the role definition, the better the model can embody that role effectively.

```python
from crewai import Agent, LLM

# ❌ Generic role - no clear signal about which LLM capabilities matter
# generic_agent = Agent(role="Analyst", ...)

# ✅ Specific role - clear LLM requirements
specific_agent = Agent(
    role="SaaS Revenue Operations Analyst",  # Clear domain expertise needed
    goal="Analyze recurring revenue metrics and identify growth opportunities",
    backstory="Specialist in SaaS business models with deep understanding of ARR, churn, and expansion revenue",
    llm=LLM(model="gpt-4o")  # Reasoning model justified for complex analysis
)
```
**Role-to-Model Mapping Strategy:**

- **"Research Analyst"** → Reasoning model (GPT-4o, Claude Sonnet) for complex analysis
- **"Content Editor"** → Creative model (Claude, GPT-4o) for writing quality
- **"Data Processor"** → Efficient model (GPT-4o-mini, Gemini Flash) for structured tasks
- **"API Coordinator"** → Function-calling optimized model (GPT-4o, Claude) for tool usage
### b. Backstory as Model Context Amplifier

<Info>
Strategic backstories multiply your chosen LLM's effectiveness by providing domain-specific context that generic prompting cannot achieve.
</Info>

A well-crafted backstory transforms your LLM choice from generic capability into specialized expertise. This is especially crucial for cost optimization: a well-contextualized efficient model can outperform a premium model that lacks proper context.

**Context-Driven Performance Example:**

```python
from crewai import Agent, LLM

# Context amplifies model effectiveness
domain_expert = Agent(
    role="B2B SaaS Marketing Strategist",
    goal="Develop comprehensive go-to-market strategies for enterprise software",
    backstory="""
    You have 10+ years of experience scaling B2B SaaS companies from Series A to IPO.
    You understand the nuances of enterprise sales cycles, the importance of product-market
    fit in different verticals, and how to balance growth metrics with unit economics.
    You've worked with companies like Salesforce, HubSpot, and emerging unicorns, giving
    you perspective on both established and disruptive go-to-market strategies.
    """,
    llm=LLM(model="claude-3-5-sonnet", temperature=0.3)  # Balanced creativity with domain knowledge
)

# This context enables Claude to perform like a domain expert.
# Without it, even a strong model would produce generic marketing advice.
```
**Backstory Elements That Enhance LLM Performance:**

- **Domain Experience**: "10+ years in enterprise SaaS sales"
- **Specific Expertise**: "Specializes in technical due diligence for Series B+ rounds"
- **Working Style**: "Prefers data-driven decisions with clear documentation"
- **Quality Standards**: "Insists on citing sources and showing analytical work"
### c. Holistic Agent-LLM Optimization

The most effective agent configurations create synergy between role specificity, backstory depth, and LLM selection. Each element reinforces the others to maximize model performance.

**Optimization Framework:**

```python
from crewai import Agent, LLM

# Example: Technical Documentation Agent
tech_writer = Agent(
    role="API Documentation Specialist",  # Specific role for clear LLM requirements
    goal="Create comprehensive, developer-friendly API documentation",
    backstory="""
    You're a technical writer with 8+ years documenting REST APIs, GraphQL endpoints,
    and SDK integration guides. You've worked with developer tools companies and
    understand what developers need: clear examples, comprehensive error handling,
    and practical use cases. You prioritize accuracy and usability over marketing fluff.
    """,
    llm=LLM(
        model="claude-3-5-sonnet",  # Excellent for technical writing
        temperature=0.1  # Low temperature for accuracy
    ),
    tools=[code_analyzer_tool, api_scanner_tool],  # placeholder tools defined elsewhere
    verbose=True
)
```
**Alignment Checklist:**

- ✅ **Role Specificity**: Clear domain and responsibilities
- ✅ **LLM Match**: Model strengths align with role requirements
- ✅ **Backstory Depth**: Provides domain context the LLM can leverage
- ✅ **Tool Integration**: Tools support the agent's specialized function
- ✅ **Parameter Tuning**: Temperature and settings optimize for role needs

The key is creating agents where every configuration choice reinforces your LLM selection strategy, maximizing performance while optimizing costs.
## Practical Implementation Checklist

Rather than repeating the strategic framework, here's a tactical checklist for implementing your LLM selection decisions in CrewAI:

<Steps>
<Step title="Audit Your Current Setup" icon="clipboard-check">
**What to Review:**
- Are all agents using the same LLM by default?
- Which agents handle the most complex reasoning tasks?
- Which agents primarily do data processing or formatting?
- Are any agents heavily tool-dependent?

**Action**: Document current agent roles and identify optimization opportunities.
</Step>
<Step title="Implement Crew-Level Strategy" icon="users-gear">
**Set Your Baseline:**
```python
from crewai import Crew, LLM

# Start with a reliable default for the crew
default_crew_llm = LLM(model="gpt-4o-mini")  # Cost-effective baseline

crew = Crew(
    agents=[...],  # create each agent with llm=default_crew_llm unless overridden
    tasks=[...],
    memory=True
)
```

**Action**: Establish your crew's default LLM before optimizing individual agents.
</Step>
<Step title="Optimize High-Impact Agents" icon="star">
**Identify and Upgrade Key Agents:**
```python
# Manager or coordination agents
manager_agent = Agent(
    role="Project Manager",
    llm=LLM(model="gemini-2.5-flash-preview-05-20"),  # Fast, capable model for coordination
    # ... rest of config
)

# Creative or customer-facing agents
content_agent = Agent(
    role="Content Creator",
    llm=LLM(model="claude-3-5-sonnet"),  # Best for writing
    # ... rest of config
)
```

**Action**: Upgrade the 20% of your agents that handle 80% of the complexity.
</Step>
<Step title="Validate with Enterprise Testing" icon="test-tube">
**Once you deploy your agents to production:**
- Use the [CrewAI Enterprise platform](https://app.crewai.com) to A/B test your model selections
- Run multiple iterations with real inputs to measure consistency and performance
- Compare cost vs. performance across your optimized setup
- Share results with your team for collaborative decision-making

**Action**: Replace guesswork with data-driven validation using the testing platform.
</Step>
</Steps>
### When to Use Different Model Types

<Tabs>
<Tab title="Reasoning Models">
Reasoning models become essential when tasks require genuine multi-step logical thinking, strategic planning, or high-level decision making that benefits from systematic analysis. These models excel when problems need to be broken down into components and analyzed systematically rather than handled through pattern matching or simple instruction following.

Consider reasoning models for business strategy development, complex data analysis that requires drawing insights from multiple sources, multi-step problem solving where each step depends on previous analysis, and strategic planning tasks that require considering multiple variables and their interactions.

However, reasoning models often come with higher costs and slower response times, so they're best reserved for tasks where their sophisticated capabilities provide genuine value rather than being used for simple operations that don't require complex reasoning.
</Tab>
<Tab title="Creative Models">
Creative models become valuable when content generation is the primary output and the quality, style, and engagement level of that content directly impact success. These models excel when writing quality and style matter significantly, creative ideation or brainstorming is needed, or brand voice and tone are important considerations.

Use creative models for blog post writing and article creation, marketing copy that needs to engage and persuade, creative storytelling and narrative development, and brand communications where voice and tone are crucial. These models often understand nuance and context better than general purpose alternatives.

Creative models may be less suitable for technical or analytical tasks where precision and factual accuracy are more important than engagement and style. They're best used when the creative and communicative aspects of the output are primary success factors.
</Tab>
<Tab title="Efficient Models">
Efficient models are ideal for high-frequency, routine operations where speed and cost optimization are priorities. These models work best when tasks have clear, well-defined parameters and don't require sophisticated reasoning or creative capabilities.

Consider efficient models for data processing and transformation tasks, simple formatting and organization operations, function calling and tool usage where precision matters more than sophistication, and high-volume operations where cost per operation is a significant factor.

The key with efficient models is ensuring that their capabilities align with task requirements. They can handle many routine operations effectively but may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation.
</Tab>
<Tab title="Open Source Models">
Open source models become attractive when budget constraints are significant, data privacy requirements exist, customization needs are important, or local deployment is required for operational or compliance reasons.

Consider open source models for internal company tools where data privacy is paramount, privacy-sensitive applications that can't use external APIs, cost-optimized deployments where per-token pricing is prohibitive, and situations requiring custom model modifications or fine-tuning.

However, open source models require more technical expertise to deploy and maintain effectively. Consider the total cost of ownership including infrastructure, technical overhead, and ongoing maintenance when evaluating open source options.
</Tab>
</Tabs>
## Common CrewAI Model Selection Pitfalls

<AccordionGroup>
<Accordion title="The 'One Model Fits All' Trap" icon="triangle-exclamation">
**The Problem**: Using the same LLM for all agents in a crew, regardless of their specific roles and responsibilities. This is often the default approach but rarely optimal.

**Real Example**: Using GPT-4o for both a strategic planning manager and a data extraction agent. The manager needs reasoning capabilities worth the premium cost, but the data extractor could perform just as well with GPT-4o-mini at a fraction of the price.

**CrewAI Solution**: Leverage agent-specific LLM configuration to match model capabilities with agent roles:
```python
# Strategic agent gets premium model
manager = Agent(role="Strategy Manager", llm=LLM(model="gpt-4o"))

# Processing agent gets efficient model
processor = Agent(role="Data Processor", llm=LLM(model="gpt-4o-mini"))
```
</Accordion>
<Accordion title="Ignoring Crew-Level vs Agent-Level LLM Hierarchy" icon="shuffle">
**The Problem**: Not understanding how CrewAI's LLM hierarchy works - crew LLM, manager LLM, and agent LLM settings can conflict or be poorly coordinated.

**Real Example**: Setting a crew to use Claude, but having agents configured with GPT models, creating inconsistent behavior and unnecessary model switching overhead.

**CrewAI Solution**: Plan your LLM hierarchy strategically:
```python
# Agents inherit the crew-level LLM unless specifically overridden
agent1 = Agent(llm=LLM(model="claude-3-5-sonnet"))  # Override for specific needs

crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    manager_llm=LLM(model="gpt-4o"),  # For crew coordination
    process=Process.hierarchical  # When using manager_llm
)
```
</Accordion>
<Accordion title="Function Calling Model Mismatch" icon="screwdriver-wrench">
**The Problem**: Choosing models based on general capabilities while ignoring function calling performance for tool-heavy CrewAI workflows.

**Real Example**: Selecting a creative-focused model for an agent that primarily needs to call APIs, search tools, or process structured data. The agent struggles with tool parameter extraction and reliable function calls.

**CrewAI Solution**: Prioritize function calling capabilities for tool-heavy agents:
```python
# For agents that use many tools
tool_agent = Agent(
    role="API Integration Specialist",
    tools=[search_tool, api_tool, data_tool],
    llm=LLM(model="gpt-4o")  # Excellent function calling
    # OR: llm=LLM(model="claude-3-5-sonnet"), also strong with tools
)
```
</Accordion>
<Accordion title="Premature Optimization Without Testing" icon="gear">
**The Problem**: Making complex model selection decisions based on theoretical performance without validating with actual CrewAI workflows and tasks.

**Real Example**: Implementing elaborate model switching logic based on task types without testing whether the performance gains justify the operational complexity.

**CrewAI Solution**: Start simple, then optimize based on real performance data:
```python
# Start with this: one cost-effective default for every agent
default_llm = LLM(model="gpt-4o-mini")
crew = Crew(agents=[...], tasks=[...])  # agents created with llm=default_llm

# Test performance, then optimize specific agents as needed
# Use Enterprise platform testing to validate improvements
```
</Accordion>
<Accordion title="Overlooking Context and Memory Limitations" icon="brain">
**The Problem**: Not considering how model context windows interact with CrewAI's memory and context sharing between agents.

**Real Example**: Using a short-context model for agents that need to maintain conversation history across multiple task iterations, or in crews with extensive agent-to-agent communication.

**CrewAI Solution**: Match context capabilities to crew communication patterns.
</Accordion>
</AccordionGroup>
## Testing and Iteration Strategy

<Steps>
<Step title="Start Simple" icon="play">
Begin with reliable, general-purpose models that are well-understood and widely supported. This provides a stable foundation for understanding your specific requirements and performance expectations before optimizing for specialized needs.
</Step>
<Step title="Measure What Matters" icon="chart-line">
Develop metrics that align with your specific use case and business requirements rather than relying solely on general benchmarks. Focus on measuring outcomes that directly impact your success rather than theoretical performance indicators.
</Step>
<Step title="Iterate Based on Results" icon="arrows-rotate">
Make model changes based on observed performance in your specific context rather than theoretical considerations or general recommendations. Real-world performance often differs significantly from benchmark results or general reputation.
</Step>
<Step title="Consider Total Cost" icon="calculator">
Evaluate the complete cost of ownership including model costs, development time, maintenance overhead, and operational complexity. The cheapest model per token may not be the most cost-effective choice when considering all factors.
</Step>
</Steps>

<Tip>
Focus on understanding your requirements first, then select models that best match those needs. The best LLM choice is the one that consistently delivers the results you need within your operational constraints.
</Tip>
### Enterprise-Grade Model Validation

For teams serious about optimizing their LLM selection, the **CrewAI Enterprise platform** provides sophisticated testing capabilities that go far beyond basic CLI testing. The platform enables comprehensive model evaluation that helps you make data-driven decisions about your LLM strategy.

<Frame>
![Enterprise Testing Interface](/images/enterprise/crew-studio-interface.png)
</Frame>

**Advanced Testing Features:**

- **Multi-Model Comparison**: Test multiple LLMs simultaneously across the same tasks and inputs. Compare performance between GPT-4o, Claude, Llama, Groq, Cerebras, and other leading models in parallel to identify the best fit for your specific use case.

- **Statistical Rigor**: Configure multiple iterations with consistent inputs to measure reliability and performance variance. This helps identify models that not only perform well but do so consistently across runs.

- **Real-World Validation**: Use your actual crew inputs and scenarios rather than synthetic benchmarks. The platform allows you to test with your specific industry context, company information, and real use cases for more accurate evaluation.

- **Comprehensive Analytics**: Access detailed performance metrics, execution times, and cost analysis across all tested models. This enables data-driven decision making rather than relying on general model reputation or theoretical capabilities.

- **Team Collaboration**: Share testing results and model performance data across your team, enabling collaborative decision-making and consistent model selection strategies across projects.

Go to [app.crewai.com](https://app.crewai.com) to get started!

<Info>
The Enterprise platform transforms model selection from guesswork into a data-driven process, enabling you to validate the principles in this guide with your actual use cases and requirements.
</Info>
## Key Principles Summary

<CardGroup cols={2}>
<Card title="Task-Driven Selection" icon="bullseye">
Choose models based on what the task actually requires, not theoretical capabilities or general reputation.
</Card>

<Card title="Capability Matching" icon="puzzle-piece">
Align model strengths with agent roles and responsibilities for optimal performance.
</Card>

<Card title="Strategic Consistency" icon="link">
Maintain a coherent model selection strategy across related components and workflows.
</Card>

<Card title="Practical Testing" icon="flask">
Validate choices through real-world usage rather than benchmarks alone.
</Card>

<Card title="Iterative Improvement" icon="arrow-up">
Start simple and optimize based on actual performance and needs.
</Card>

<Card title="Operational Balance" icon="scale-balanced">
Balance performance requirements with cost and complexity constraints.
</Card>
</CardGroup>

<Check>
Remember: The best LLM choice is the one that consistently delivers the results you need within your operational constraints. Focus on understanding your requirements first, then select models that best match those needs.
</Check>
## Current Model Landscape (June 2025)

<Warning>
**Snapshot in Time**: The following model rankings represent current leaderboard standings as of June 2025, compiled from [LMSys Arena](https://arena.lmsys.org/), [Artificial Analysis](https://artificialanalysis.ai/), and other leading benchmarks. LLM performance, availability, and pricing change rapidly. Always conduct your own evaluations with your specific use cases and data.
</Warning>

### Leading Models by Category

The tables below show a representative sample of current top-performing models across different categories, with guidance on their suitability for CrewAI agents:

<Note>
These tables showcase selected leading models in each category and are not exhaustive. Many excellent models exist beyond those listed here. The goal is to illustrate the types of capabilities to look for rather than provide a complete catalog.
</Note>
<Tabs>
<Tab title="Reasoning & Planning">
**Best for Manager LLMs and Complex Analysis**

| Model | Intelligence Score | Cost ($/M tokens) | Speed | Best Use in CrewAI |
|:------|:------------------|:------------------|:------|:------------------|
| **o3** | 70 | $17.50 | Fast | Manager LLM for complex multi-agent coordination |
| **Gemini 2.5 Pro** | 69 | $3.44 | Fast | Strategic planning agents, research coordination |
| **DeepSeek R1** | 68 | $0.96 | Moderate | Cost-effective reasoning for budget-conscious crews |
| **Claude 4 Sonnet** | 53 | $6.00 | Fast | Analysis agents requiring nuanced understanding |
| **Qwen3 235B (Reasoning)** | 62 | $2.63 | Moderate | Open-source alternative for reasoning tasks |

These models excel at multi-step reasoning and are ideal for agents that need to develop strategies, coordinate other agents, or analyze complex information.
</Tab>
<Tab title="Coding & Technical">
**Best for Development and Tool-Heavy Workflows**

| Model | Coding Performance | Tool Use Score | Cost ($/M tokens) | Best Use in CrewAI |
|:------|:------------------|:---------------|:------------------|:------------------|
| **Claude 4 Sonnet** | Excellent | 72.7% | $6.00 | Primary coding agent, technical documentation |
| **Claude 4 Opus** | Excellent | 72.5% | $30.00 | Complex software architecture, code review |
| **DeepSeek V3** | Very Good | High | $0.48 | Cost-effective coding for routine development |
| **Qwen2.5 Coder 32B** | Very Good | Medium | $0.15 | Budget-friendly coding agent |
| **Llama 3.1 405B** | Good | 81.1% | $3.50 | Function calling LLM for tool-heavy workflows |

These models are optimized for code generation, debugging, and technical problem-solving, making them ideal for development-focused crews.
</Tab>
<Tab title="Speed & Efficiency">
**Best for High-Throughput and Real-Time Applications**

| Model | Speed (tokens/s) | Latency (TTFT) | Cost ($/M tokens) | Best Use in CrewAI |
|:------|:-----------------|:---------------|:------------------|:------------------|
| **Llama 4 Scout** | 2,600 | 0.33s | $0.27 | High-volume processing agents |
| **Gemini 2.5 Flash** | 376 | 0.30s | $0.26 | Real-time response agents |
| **DeepSeek R1 Distill** | 383 | Variable | $0.04 | Cost-optimized high-speed processing |
| **Llama 3.3 70B** | 2,500 | 0.52s | $0.60 | Balanced speed and capability |
| **Nova Micro** | High | 0.30s | $0.04 | Simple, fast task execution |

These models prioritize speed and efficiency, perfect for agents handling routine operations or requiring quick responses. **Pro tip**: Pairing these models with fast inference providers like Groq can achieve even better performance, especially for open-source models like Llama.
</Tab>
<Tab title="Balanced Performance">
**Best All-Around Models for General Crews**

| Model | Overall Score | Versatility | Cost ($/M tokens) | Best Use in CrewAI |
|:------|:--------------|:------------|:------------------|:------------------|
| **GPT-4.1** | 53 | Excellent | $3.50 | General-purpose crew LLM |
| **Claude 3.7 Sonnet** | 48 | Very Good | $6.00 | Balanced reasoning and creativity |
| **Gemini 2.0 Flash** | 48 | Good | $0.17 | Cost-effective general use |
| **Llama 4 Maverick** | 51 | Good | $0.37 | Open-source general purpose |
| **Qwen3 32B** | 44 | Good | $1.23 | Budget-friendly versatility |

These models offer good performance across multiple dimensions, suitable for crews with diverse task requirements.
</Tab>
</Tabs>
### Selection Framework for Current Models

<AccordionGroup>
<Accordion title="High-Performance Crews" icon="rocket">
**When performance is the priority**: Use top-tier models like **o3**, **Gemini 2.5 Pro**, or **Claude 4 Sonnet** for manager LLMs and critical agents. These models excel at complex reasoning and coordination but come with higher costs.

**Strategy**: Implement a multi-model approach where premium models handle strategic thinking while efficient models handle routine operations.
</Accordion>

<Accordion title="Cost-Conscious Crews" icon="dollar-sign">
**When budget is a primary constraint**: Focus on models like **DeepSeek R1**, **Llama 4 Scout**, or **Gemini 2.0 Flash**. These provide strong performance at significantly lower costs.

**Strategy**: Use cost-effective models for most agents, reserving premium models only for the most critical decision-making roles.
</Accordion>

<Accordion title="Specialized Workflows" icon="screwdriver-wrench">
**For specific domain expertise**: Choose models optimized for your primary use case: the **Claude 4** series for coding, **Gemini 2.5 Pro** for research, **Llama 405B** for function calling.

**Strategy**: Select models based on your crew's primary function, ensuring the core capability aligns with model strengths.
</Accordion>

<Accordion title="Enterprise & Privacy" icon="shield">
**For data-sensitive operations**: Consider open-source models like the **Llama 4** series, **DeepSeek V3**, or **Qwen3** that can be deployed locally while maintaining competitive performance.

**Strategy**: Deploy open-source models on private infrastructure, accepting potential performance trade-offs for data control.
</Accordion>
</AccordionGroup>
### Key Considerations for Model Selection

- **Performance Trends**: The current landscape shows strong competition between reasoning-focused models (o3, Gemini 2.5 Pro) and balanced models (Claude 4, GPT-4.1). Specialized models like DeepSeek R1 offer excellent cost-performance ratios.

- **Speed vs. Intelligence Trade-offs**: Models like Llama 4 Scout prioritize speed (2,600 tokens/s) while maintaining reasonable intelligence, whereas models like o3 maximize reasoning capability at the cost of speed and price.

- **Open Source Viability**: The gap between open-source and proprietary models continues to narrow, with models like Llama 4 Maverick and DeepSeek V3 offering competitive performance at attractive price points. Fast inference providers particularly shine with open-source models, often delivering better speed-to-cost ratios than proprietary alternatives.

<Info>
**Testing is Essential**: Leaderboard rankings provide general guidance, but your specific use case, prompting style, and evaluation criteria may produce different results. Always test candidate models with your actual tasks and data before making final decisions.
</Info>
### Practical Implementation Strategy

<Steps>
<Step title="Start with Proven Models">
Begin with well-established models like **GPT-4.1**, **Claude 3.7 Sonnet**, or **Gemini 2.0 Flash** that offer good performance across multiple dimensions and have extensive real-world validation.
</Step>

<Step title="Identify Specialized Needs">
Determine if your crew has specific requirements (coding, reasoning, speed) that would benefit from specialized models like **Claude 4 Sonnet** for development or **o3** for complex analysis. For speed-critical applications, consider fast inference providers like **Groq** alongside model selection.
</Step>

<Step title="Implement Multi-Model Strategy">
Use different models for different agents based on their roles: high-capability models for managers and complex tasks, efficient models for routine operations.
</Step>

<Step title="Monitor and Optimize">
Track performance metrics relevant to your use case and be prepared to adjust model selections as new models are released or pricing changes.
</Step>
</Steps>
140
docs/en/learn/multimodal-agents.mdx
Normal file
140
docs/en/learn/multimodal-agents.mdx
Normal file
@@ -0,0 +1,140 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: video
---
## Using Multimodal Agents

CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.

### Enabling Multimodal Capabilities

To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:

```python
from crewai import Agent

agent = Agent(
    role="Image Analyst",
    goal="Analyze and extract insights from images",
    backstory="An expert in visual content interpretation with years of experience in image analysis",
    multimodal=True  # This enables multimodal capabilities
)
```

When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
### Working with Images

The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.

Here's a complete example showing how to use a multimodal agent to analyze an image:

```python
from crewai import Agent, Task, Crew

# Create a multimodal agent
image_analyst = Agent(
    role="Product Analyst",
    goal="Analyze product images and provide detailed descriptions",
    backstory="Expert in visual product analysis with deep knowledge of design and features",
    multimodal=True
)

# Create a task for image analysis
task = Task(
    description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
    expected_output="A detailed description of the product image",
    agent=image_analyst
)

# Create and run the crew
crew = Crew(
    agents=[image_analyst],
    tasks=[task]
)

result = crew.kickoff()
```
### Advanced Usage with Context

You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:

```python
from crewai import Agent, Task, Crew

# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
    role="Visual Quality Inspector",
    goal="Perform detailed quality analysis of product images",
    backstory="Senior quality control expert with expertise in visual inspection",
    multimodal=True  # AddImageTool is automatically included
)

# Create a task with specific analysis requirements
inspection_task = Task(
    description="""
    Analyze the product image at https://example.com/product.jpg with focus on:
    1. Quality of materials
    2. Manufacturing defects
    3. Compliance with standards
    Provide a detailed report highlighting any issues found.
    """,
    expected_output="A detailed report highlighting any issues found",
    agent=expert_analyst
)

# Create and run the crew
crew = Crew(
    agents=[expert_analyst],
    tasks=[inspection_task]
)

result = crew.kickoff()
```
### Tool Details

When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:

```python
class AddImageToolSchema:
    image_url: str  # Required: The URL or path of the image to process
    action: Optional[str] = None  # Optional: Additional context or specific questions about the image
```

The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
- Access images via URLs or local file paths
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
### Best Practices

When working with multimodal agents, keep these best practices in mind:

1. **Image Access**
   - Ensure your images are accessible via URLs that the agent can reach
   - For local images, consider hosting them temporarily or using absolute file paths
   - Verify that image URLs are valid and accessible before running tasks

2. **Task Description**
   - Be specific about what aspects of the image you want the agent to analyze
   - Include clear questions or requirements in the task description
   - Consider using the optional `action` parameter for focused analysis

3. **Resource Management**
   - Image processing may require more computational resources than text-only tasks
   - Some language models may require base64 encoding for image data
   - Consider batch processing for multiple images to optimize performance

4. **Environment Setup**
   - Verify that your environment has the necessary dependencies for image processing
   - Ensure your language model supports multimodal capabilities
   - Test with small images first to validate your setup

5. **Error Handling**
   - Implement proper error handling for image loading failures
   - Have fallback strategies for when image processing fails
   - Monitor and log image processing operations for debugging
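As a rough sketch of the error-handling practice, the helper below validates inputs and logs failures around a kickoff. The helper name and the `{image_url}` placeholder in the task description are assumptions for illustration, not part of the CrewAI API:

```python
import logging

def run_image_analysis(crew, image_url: str):
    """Hypothetical helper: kick off an image-analysis crew defensively."""
    if not image_url.startswith(("http://", "https://", "/")):
        logging.warning("Unrecognized image reference: %s", image_url)
    try:
        # Assumes the task description interpolates an {image_url} placeholder
        return crew.kickoff(inputs={"image_url": image_url})
    except Exception as exc:
        logging.error("Image analysis failed for %s: %s", image_url, exc)
        return None  # caller decides whether to retry or degrade gracefully
```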
158
docs/en/learn/overview.mdx
Normal file
158
docs/en/learn/overview.mdx
Normal file
@@ -0,0 +1,158 @@
---
title: "Overview"
description: "Learn how to build, customize, and optimize your CrewAI applications with comprehensive guides and tutorials"
icon: "face-smile"
---
## Learn CrewAI

This section provides comprehensive guides and tutorials to help you master CrewAI, from basic concepts to advanced techniques. Whether you're just getting started or looking to optimize your existing implementations, these resources will guide you through every aspect of building powerful AI agent workflows.
## Getting Started Guides

### Core Concepts
<CardGroup cols={2}>
<Card title="Sequential Process" icon="list-ol" href="/en/learn/sequential-process">
Learn how to execute tasks in a sequential order for structured workflows.
</Card>

<Card title="Hierarchical Process" icon="sitemap" href="/en/learn/hierarchical-process">
Implement hierarchical task execution with manager agents overseeing workflows.
</Card>

<Card title="Conditional Tasks" icon="code-branch" href="/en/learn/conditional-tasks">
Create dynamic workflows with conditional task execution based on outcomes.
</Card>

<Card title="Async Kickoff" icon="bolt" href="/en/learn/kickoff-async">
Execute crews asynchronously for improved performance and concurrency.
</Card>
</CardGroup>
### Agent Development
<CardGroup cols={2}>
<Card title="Customizing Agents" icon="user-gear" href="/en/learn/customizing-agents">
Learn how to customize agent behavior, roles, and capabilities.
</Card>

<Card title="Coding Agents" icon="code" href="/en/learn/coding-agents">
Build agents that can write, execute, and debug code automatically.
</Card>

<Card title="Multimodal Agents" icon="images" href="/en/learn/multimodal-agents">
Create agents that can process text, images, and other media types.
</Card>

<Card title="Custom Manager Agent" icon="user-tie" href="/en/learn/custom-manager-agent">
Implement custom manager agents for complex hierarchical workflows.
</Card>
</CardGroup>
## Advanced Features

### Workflow Control
<CardGroup cols={2}>
<Card title="Human in the Loop" icon="user-check" href="/en/learn/human-in-the-loop">
Integrate human oversight and intervention into agent workflows.
</Card>

<Card title="Human Input on Execution" icon="hand-paper" href="/en/learn/human-input-on-execution">
Allow human input during task execution for dynamic decision making.
</Card>

<Card title="Replay Tasks" icon="rotate-left" href="/en/learn/replay-tasks-from-latest-crew-kickoff">
Replay and resume tasks from previous crew executions.
</Card>

<Card title="Kickoff for Each" icon="repeat" href="/en/learn/kickoff-for-each">
Execute crews multiple times with different inputs efficiently.
</Card>
</CardGroup>
### Customization & Integration
<CardGroup cols={2}>
<Card title="Custom LLM" icon="brain" href="/en/learn/custom-llm">
Integrate custom language models and providers with CrewAI.
</Card>

<Card title="LLM Connections" icon="link" href="/en/learn/llm-connections">
Configure and manage connections to various LLM providers.
</Card>

<Card title="Create Custom Tools" icon="wrench" href="/en/learn/create-custom-tools">
Build custom tools to extend agent capabilities.
</Card>

<Card title="Using Annotations" icon="at" href="/en/learn/using-annotations">
Use Python annotations for cleaner, more maintainable code.
</Card>
</CardGroup>
## Specialized Applications

### Content & Media
<CardGroup cols={2}>
<Card title="DALL-E Image Generation" icon="image" href="/en/learn/dalle-image-generation">
Generate images using DALL-E integration with your agents.
</Card>

<Card title="Bring Your Own Agent" icon="user-plus" href="/en/learn/bring-your-own-agent">
Integrate existing agents and models into CrewAI workflows.
</Card>
</CardGroup>

### Tool Management
<CardGroup cols={2}>
<Card title="Force Tool Output as Result" icon="hammer" href="/en/learn/force-tool-output-as-result">
Configure tools to return their output directly as task results.
</Card>
</CardGroup>
## Learning Path Recommendations

### For Beginners
1. Start with **Sequential Process** to understand basic workflow execution
2. Learn **Customizing Agents** to create effective agent configurations
3. Explore **Create Custom Tools** to extend functionality
4. Try **Human in the Loop** for interactive workflows

### For Intermediate Users
1. Master **Hierarchical Process** for complex multi-agent systems
2. Implement **Conditional Tasks** for dynamic workflows
3. Use **Async Kickoff** for performance optimization
4. Integrate **Custom LLM** for specialized models

### For Advanced Users
1. Build **Multimodal Agents** for complex media processing
2. Create **Custom Manager Agents** for sophisticated orchestration
3. Implement **Bring Your Own Agent** for hybrid systems
4. Use **Replay Tasks** for robust error recovery
## Best Practices

### Development
- **Start Simple**: Begin with basic sequential workflows before adding complexity
- **Test Incrementally**: Test each component before integrating into larger systems
- **Use Annotations**: Leverage Python annotations for cleaner, more maintainable code
- **Custom Tools**: Build reusable tools that can be shared across different agents

### Production
- **Error Handling**: Implement robust error handling and recovery mechanisms
- **Performance**: Use async execution and optimize LLM calls for better performance
- **Monitoring**: Integrate observability tools to track agent performance
- **Human Oversight**: Include human checkpoints for critical decisions

### Optimization
- **Resource Management**: Monitor and optimize token usage and API costs
- **Workflow Design**: Design workflows that minimize unnecessary LLM calls
- **Tool Efficiency**: Create efficient tools that provide maximum value with minimal overhead
- **Iterative Improvement**: Use feedback and metrics to continuously improve agent performance
## Getting Help

- **Documentation**: Each guide includes detailed examples and explanations
- **Community**: Join the [CrewAI Forum](https://community.crewai.com) for discussions and support
- **Examples**: Check the Examples section for complete working implementations
- **Support**: Contact [support@crewai.com](mailto:support@crewai.com) for technical assistance

Start with the guides that match your current needs and gradually explore more advanced topics as you become comfortable with the fundamentals.
78
docs/en/learn/replay-tasks-from-latest-crew-kickoff.mdx
Normal file
78
docs/en/learn/replay-tasks-from-latest-crew-kickoff.mdx
Normal file
@@ -0,0 +1,78 @@
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
icon: arrow-right
---
## Introduction

CrewAI lets you replay from a task specified from the latest crew kickoff. This feature is particularly useful when you've finished a kickoff and want to retry certain tasks without re-fetching data: your agents already have the context saved from the kickoff execution, so you only need to replay the tasks you care about.

<Note>
You must run `crew.kickoff()` before you can replay a task.
Currently, only the latest kickoff is supported, so if you use `kickoff_for_each`, it will only allow you to replay from the most recent crew run.
</Note>

The sections below show how to replay from a task, first via the CLI and then programmatically.
### Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

<Steps>
<Step title="Open your terminal or command prompt."></Step>
<Step title="Navigate to the directory where your CrewAI project is located."></Step>
<Step title="Run the following commands:">
To view the latest kickoff task IDs, use:

```shell
crewai log-tasks-outputs
```

Once you have the `task_id` you want to replay, use:

```shell
crewai replay -t <task_id>
```
</Step>
</Steps>

<Note>
Ensure `crewai` is installed and configured correctly in your development environment.
</Note>
### Replaying from a Task Programmatically

To replay from a task programmatically, use the following steps:

<Steps>
<Step title="Specify the `task_id` and input parameters for the replay process."></Step>
<Step title="Execute the replay command within a try-except block to handle potential errors.">
<CodeGroup>
```python Code
import subprocess

def replay():
    """
    Replay the crew execution from a specific task.
    """
    task_id = '<task_id>'
    inputs = {"topic": "CrewAI Training"}  # Optional: pass the inputs you want to replay; otherwise, the previous kickoff's inputs are used.
    try:
        YourCrewName_Crew().crew().replay(task_id=task_id, inputs=inputs)

    except subprocess.CalledProcessError as e:
        raise Exception(f"An error occurred while replaying the crew: {e}")

    except Exception as e:
        raise Exception(f"An unexpected error occurred: {e}")
```
</CodeGroup>
</Step>
</Steps>
## Conclusion

With the features described above, replaying specific tasks in CrewAI is efficient and robust.
Ensure you follow the commands and steps precisely to make the most of these features.
127
docs/en/learn/sequential-process.mdx
Normal file
127
docs/en/learn/sequential-process.mdx
Normal file
@@ -0,0 +1,127 @@
---
title: Sequential Processes
description: A comprehensive guide to utilizing the sequential processes for task execution in CrewAI projects.
icon: forward
---
## Introduction

CrewAI offers a flexible framework for executing tasks in a structured manner, supporting both sequential and hierarchical processes.
This guide outlines how to effectively implement these processes to ensure efficient task execution and project completion.

## Sequential Process Overview

The sequential process ensures tasks are executed one after the other, following a linear progression.
This approach is ideal for projects requiring tasks to be completed in a specific order.

### Key Features

- **Linear Task Flow**: Ensures orderly progression by handling tasks in a predetermined sequence.
- **Simplicity**: Best suited for projects with clear, step-by-step tasks.
- **Easy Monitoring**: Facilitates easy tracking of task completion and project progress.
## Implementing the Sequential Process

To use the sequential process, assemble your crew and define tasks in the order they need to be executed.

```python Code
from crewai import Crew, Process, Agent, Task, TaskOutput, CrewOutput

# Define your agents
researcher = Agent(
    role='Researcher',
    goal='Conduct foundational research',
    backstory='An experienced researcher with a passion for uncovering insights'
)
analyst = Agent(
    role='Data Analyst',
    goal='Analyze research findings',
    backstory='A meticulous analyst with a knack for uncovering patterns'
)
writer = Agent(
    role='Writer',
    goal='Draft the final report',
    backstory='A skilled writer with a talent for crafting compelling narratives'
)

# Define your tasks
research_task = Task(
    description='Gather relevant data...',
    agent=researcher,
    expected_output='Raw Data'
)
analysis_task = Task(
    description='Analyze the data...',
    agent=analyst,
    expected_output='Data Insights'
)
writing_task = Task(
    description='Compose the report...',
    agent=writer,
    expected_output='Final Report'
)

# Form the crew with a sequential process
report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential
)

# Execute the crew
result = report_crew.kickoff()

# Accessing the type-safe output
crew_output: CrewOutput = result  # kickoff() returns a CrewOutput
task_output: TaskOutput = result.tasks_output[0]  # output of the first task
```
### Note:

Each task in a sequential process **must** have an agent assigned. Ensure that every `Task` includes an `agent` parameter.

### Workflow in Action

1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Advanced Features

### Task Delegation

In sequential processes, if an agent has `allow_delegation` set to `True`, they can delegate tasks to other agents in the crew.
This feature is automatically set up when there are multiple agents in the crew.

### Asynchronous Execution

Tasks can be executed asynchronously, allowing for parallel processing when appropriate.
To create an asynchronous task, set `async_execution=True` when defining the task.
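For example, a minimal sketch reusing the agents defined above (descriptions abbreviated):

```python
# The research task runs asynchronously; the analysis task waits for it via `context`
research_task = Task(
    description='Gather relevant data...',
    agent=researcher,
    expected_output='Raw Data',
    async_execution=True
)
analysis_task = Task(
    description='Analyze the data...',
    agent=analyst,
    expected_output='Data Insights',
    context=[research_task]
)
```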
### Memory and Caching

CrewAI supports both memory and caching features:

- **Memory**: Enable by setting `memory=True` when creating the Crew. This allows agents to retain information across tasks.
- **Caching**: By default, caching is enabled. Set `cache=False` to disable it.
### Callbacks

You can set callbacks at both the task and step level:

- `task_callback`: Executed after each task completion.
- `step_callback`: Executed after each step in an agent's execution.
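The sketch below wires both callback types into the crew from the earlier example; the print-based logging and the shape of the step argument are illustrative:

```python
def on_task_done(output: TaskOutput):
    # Receives the completed task's TaskOutput
    print(f"Task finished: {output.description}")

report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    task_callback=on_task_done,
    step_callback=lambda step: print("Agent step completed")
)
```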
### Usage Metrics

CrewAI tracks token usage across all tasks and agents. You can access these metrics after execution.
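For the crew above, aggregate token counts are exposed on the crew's `usage_metrics` attribute once `kickoff()` completes:

```python
result = report_crew.kickoff()

# Aggregate token usage across all tasks and agents
print(report_crew.usage_metrics)  # includes counts such as total_tokens and successful_requests
```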
## Best Practices for Sequential Processes

1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.

This updated documentation ensures that details accurately reflect the latest changes in the codebase and clearly describes how to leverage new features and configurations.
The content is kept simple and direct to ensure easy understanding.
141
docs/en/learn/using-annotations.mdx
Normal file
141
docs/en/learn/using-annotations.mdx
Normal file
@@ -0,0 +1,141 @@
---
title: "Using Annotations in crew.py"
description: "Learn how to use annotations to properly structure agents, tasks, and components in CrewAI"
icon: "at"
---
This guide explains how to use annotations to properly reference **agents**, **tasks**, and other components in the `crew.py` file.
|
||||
|
||||

## Introduction

Annotations in the CrewAI framework are used to decorate classes and methods, providing metadata and functionality to various components of your crew. These annotations help in organizing and structuring your code, making it more readable and maintainable.

## Available Annotations

The CrewAI framework provides the following annotations:

- `@CrewBase`: Used to decorate the main crew class.
- `@agent`: Decorates methods that define and return Agent objects.
- `@task`: Decorates methods that define and return Task objects.
- `@crew`: Decorates the method that creates and returns the Crew object.
- `@llm`: Decorates methods that initialize and return Language Model objects.
- `@tool`: Decorates methods that initialize and return Tool objects.
- `@callback`: Used for defining callback methods.
- `@output_json`: Used for methods that output JSON data.
- `@output_pydantic`: Used for methods that output Pydantic models.
- `@cache_handler`: Used for defining cache handling methods.

## Usage Examples

Let's go through examples of how to use these annotations:
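
The snippets below are fragments of a single `crew.py`. A plausible set of imports for them might look like the following sketch; the `langchain_groq` package is an assumption based on the `ChatGroq` usage in the LLM example, and depending on your CrewAI version `CrewBase` may also be re-exported from `crewai` directly:

```python
import os

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, llm, task, tool
from langchain_groq import ChatGroq  # assumed provider package for ChatGroq
```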

### 1. Crew Base Class

```python
@CrewBase
class LinkedinProfileCrew():
    """LinkedinProfile crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'
```

The `@CrewBase` annotation is used to decorate the main crew class. This class typically contains configurations and methods for creating agents, tasks, and the crew itself.

### 2. Tool Definition

```python
@tool
def myLinkedInProfileTool(self):
    return LinkedInProfileTool()
```

The `@tool` annotation is used to decorate methods that return tool objects. These tools can be used by agents to perform specific tasks.

### 3. LLM Definition

```python
@llm
def groq_llm(self):
    api_key = os.getenv('api_key')
    return ChatGroq(api_key=api_key, temperature=0, model_name="mixtral-8x7b-32768")
```

The `@llm` annotation is used to decorate methods that initialize and return Language Model objects. These LLMs are used by agents for natural language processing tasks.

### 4. Agent Definition

```python
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher']
    )
```

The `@agent` annotation is used to decorate methods that define and return Agent objects.

### 5. Task Definition

```python
@task
def research_task(self) -> Task:
    return Task(
        config=self.tasks_config['research_linkedin_task'],
        agent=self.researcher()
    )
```

The `@task` annotation is used to decorate methods that define and return Task objects. These methods specify the task configuration and the agent responsible for the task.

### 6. Crew Creation

```python
@crew
def crew(self) -> Crew:
    """Creates the LinkedinProfile crew"""
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        process=Process.sequential,
        verbose=True
    )
```

The `@crew` annotation is used to decorate the method that creates and returns the `Crew` object. This method assembles all the components (agents and tasks) into a functional crew.

## YAML Configuration

The agent configurations are typically stored in a YAML file. Here's an example of how the `agents.yaml` file might look for the researcher agent:

```yaml
researcher:
  role: >
    LinkedIn Profile Senior Data Researcher
  goal: >
    Uncover detailed LinkedIn profiles based on provided name {name} and domain {domain}
    Generate a Dall-E image based on domain {domain}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the most relevant LinkedIn profiles.
    Known for your ability to navigate LinkedIn efficiently, you excel at gathering and presenting
    professional information clearly and concisely.
  allow_delegation: False
  verbose: True
  llm: groq_llm
  tools:
    - myLinkedInProfileTool
    - mySerperDevTool
    - myDallETool
```

This YAML configuration corresponds to the researcher agent defined in the `LinkedinProfileCrew` class. The configuration specifies the agent's role, goal, backstory, and other properties such as the LLM and tools it uses.

Note how the `llm` and `tools` entries in the YAML file correspond to the methods decorated with `@llm` and `@tool` in the Python class.

## Best Practices

- **Consistent Naming**: Use clear and consistent naming conventions for your methods. For example, agent methods could be named after their roles (e.g., researcher, reporting_analyst).
- **Environment Variables**: Use environment variables for sensitive information like API keys.
- **Flexibility**: Design your crew to be flexible by allowing easy addition or removal of agents and tasks.
- **YAML-Code Correspondence**: Ensure that the names and structures in your YAML files correspond correctly to the decorated methods in your Python code.

By following these guidelines and properly using annotations, you can create well-structured and maintainable crews using the CrewAI framework.