Documentation Improvements: LLM Configuration and Usage (#1684)

* docs: improve tasks documentation clarity and structure

- Add Task Execution Flow section
- Add variable interpolation explanation
- Add Task Dependencies section with examples
- Improve overall document structure and readability
- Update code examples with proper syntax highlighting

* docs: update agent documentation with improved examples and formatting

- Replace DuckDuckGoSearchRun with SerperDevTool
- Update code block formatting to be consistent
- Improve template examples with actual syntax
- Update LLM examples to use current models
- Clean up formatting and remove redundant comments

* docs: enhance LLM documentation with Cerebras provider and formatting improvements

* docs: simplify LLMs documentation title

* docs: improve installation guide clarity and structure

- Add clear Python version requirements with check command
- Simplify installation options to recommended method
- Improve upgrade section clarity for existing users
- Add better visual structure with Notes and Tips
- Update description and formatting

* docs: improve introduction page organization and clarity

- Update organizational analogy in Note section
- Improve table formatting and alignment
- Remove emojis from component table for cleaner look
- Add 'helps you' to make the note more action-oriented

* docs: add enterprise and community cards

- Add Enterprise deployment card in quickstart
- Add community card focused on open source discussions
- Remove deployment reference from community description
- Clean up introduction page cards
- Remove link from Enterprise description text
Tony Kipkemboi authored 2024-12-02 09:50:12 -05:00, committed by GitHub
parent 404da4066b
commit aa532acdd2
7 changed files with 1369 additions and 736 deletions

View File

@@ -1,267 +1,343 @@
---
title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
---
## Overview of an Agent
In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
- Communicate and collaborate with other agents
- Maintain memory of interactions
- Delegate tasks when allowed
<Tip>
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
## Agent Attributes
| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Memory** _(optional)_ | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Embedder Config** _(optional)_ | `embedder_config` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
## Creating Agents
There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
Here's an example of how to configure agents using YAML:
```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python Code
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process
from crewai.project import CrewBase, agent, crew
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )
```
<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition
You can create agents directly in code by instantiating the `Agent` class. Here's a comprehensive example showing all available parameters:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool

# Create an agent with all available parameters
agent = Agent(
    role="Senior Data Scientist",
    goal="Analyze and interpret complex datasets to provide actionable insights",
    backstory="With over 10 years of experience in data science and machine learning, "
              "you excel at finding patterns in complex datasets.",
    llm="gpt-4",  # Default: OPENAI_MODEL_NAME or "gpt-4"
    function_calling_llm=None,  # Optional: Separate LLM for tool calling
    memory=True,  # Default: True
    verbose=False,  # Default: False
    allow_delegation=False,  # Default: False
    max_iter=20,  # Default: 20 iterations
    max_rpm=None,  # Optional: Rate limit for API calls
    max_execution_time=None,  # Optional: Maximum execution time in seconds
    max_retry_limit=2,  # Default: 2 retries on error
    allow_code_execution=False,  # Default: False
    code_execution_mode="safe",  # Default: "safe" (options: "safe", "unsafe")
    respect_context_window=True,  # Default: True
    use_system_prompt=True,  # Default: True
    tools=[SerperDevTool()],  # Optional: List of tools
    knowledge_sources=None,  # Optional: List of knowledge sources
    embedder_config=None,  # Optional: Custom embedder configuration
    system_template=None,  # Optional: Custom system prompt template
    prompt_template=None,  # Optional: Custom prompt template
    response_template=None,  # Optional: Custom response template
    step_callback=None,  # Optional: Callback function for monitoring
)
```
Let's break down some key parameter combinations for common use cases:
#### Basic Research Agent
```python Code
research_agent = Agent(
    role="Research Analyst",
    goal="Find and summarize information about specific topics",
    backstory="You are an experienced researcher with attention to detail",
    tools=[SerperDevTool()],
    verbose=True  # Enable logging for debugging
)
```
#### Code Development Agent
```python Code
dev_agent = Agent(
    role="Senior Python Developer",
    goal="Write and debug Python code",
    backstory="Expert Python developer with 10 years of experience",
    allow_code_execution=True,
    code_execution_mode="safe",  # Uses Docker for safety
    max_execution_time=300,  # 5-minute timeout
    max_retry_limit=3  # More retries for complex code tasks
)
```
#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
    role="Data Analyst",
    goal="Perform deep analysis of large datasets",
    backstory="Specialized in big data analysis and pattern recognition",
    memory=True,
    respect_context_window=True,
    max_rpm=10,  # Limit API calls
    function_calling_llm="gpt-4o-mini"  # Cheaper model for tool calls
)
```
#### Custom Template Agent
```python Code
custom_agent = Agent(
    role="Customer Service Representative",
    goal="Assist customers with their inquiries",
    backstory="Experienced in customer support with a focus on satisfaction",
    system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```
### Parameter Details
#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)
#### Memory and Context
- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases
#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{input}` in your templates. These will be automatically populated during execution.
</Note>
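As a rough illustration, a custom template can mix these variables with your model's prompt tokens. This is a hypothetical snippet, not an official template; adjust it to your model's prompt format:
```python Code
# Hypothetical template: {role} and {goal} are filled in at execution time
system_template = """<|start_header_id|>system<|end_header_id|>
You are {role}. Your goal is: {goal}.
{{ .System }}<|eot_id|>"""
```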
## Agent Tools
Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)

Here's how to add tools to an agent:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool, WikipediaTools

# Create tools
search_tool = SerperDevTool()
wiki_tool = WikipediaTools()

# Add tools to agent
researcher = Agent(
    role="AI Technology Researcher",
    goal="Research the latest AI developments",
    tools=[search_tool, wiki_tool],
    verbose=True
)
```
## Agent Memory and Context
Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks.
```python Code
from crewai import Agent

analyst = Agent(
    role="Data Analyst",
    goal="Analyze and remember complex data patterns",
    memory=True,  # Enable memory
    verbose=True
)
```
<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
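These settings often work together. Here's a rough sketch combining the tips above; the values are illustrative, not tuned defaults:
```python Code
from crewai import Agent

optimized_agent = Agent(
    role="Report Writer",
    goal="Produce long reports without tripping provider limits",
    backstory="Works steadily within API quotas",
    respect_context_window=True,  # Summarize rather than overflow the window
    max_rpm=10,                   # Stay under the provider's rate limit
    cache=True,                   # Reuse results of repeated tool calls
    max_iter=25,                  # Allow extra iterations for complex tasks
)
```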
### Memory and Context Management
- Use `memory: true` for tasks requiring historical context
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder_config` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
- Main `llm` for complex reasoning
- `function_calling_llm` for efficient tool usage
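Putting these collaboration tips together, a minimal sketch might look like this (the model names and the callback helper are illustrative assumptions, not prescribed values):
```python Code
from crewai import Agent

def log_step(step):
    # Hypothetical helper: record each intermediate step for monitoring
    print(f"[coordinator] {step}")

coordinator = Agent(
    role="Project Coordinator",
    goal="Route research and writing work to the right specialist",
    backstory="An experienced coordinator who knows when to delegate",
    allow_delegation=True,               # Let this agent hand work to teammates
    llm="gpt-4o",                        # Main LLM for complex reasoning
    function_calling_llm="gpt-4o-mini",  # Cheaper model for tool calls
    step_callback=log_step,              # Monitor and log agent interactions
)
```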
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
## Troubleshooting Common Issues
1. **Rate Limiting**: If you're hitting API rate limits:
- Implement appropriate `max_rpm`
- Use caching for repetitive operations
- Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
- Enable `respect_context_window`
- Use more efficient prompts
- Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
- Verify Docker is installed for safe mode
- Check execution permissions
- Review code sandbox settings
4. **Memory Issues**: If agent responses seem inconsistent:
- Verify memory is enabled
- Check knowledge source configuration
- Review conversation history management
Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly.

View File

@@ -6,100 +6,205 @@ icon: book
# Using Knowledge in CrewAI
## Introduction
The Knowledge class in CrewAI provides a powerful way to manage and query knowledge sources for your AI agents. This guide will show you how to implement knowledge management in your CrewAI projects.
## What is Knowledge?
Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks. Think of it as giving your agents a reference library they can consult while working.
<Info>
Key benefits of using Knowledge:
- Enhance agents with domain-specific information
- Support decisions with real-world data
- Maintain context across conversations
- Ground responses in factual information
</Info>
## Supported Knowledge Sources
CrewAI supports various types of knowledge sources out of the box:
<CardGroup cols={2}>
<Card title="Text Sources" icon="text">
- Raw strings
- Text files (.txt)
- PDF documents
</Card>
<Card title="Structured Data" icon="table">
- CSV files
- Excel spreadsheets
- JSON documents
</Card>
</CardGroup>
## Quick Start
Here's a simple example using string-based knowledge:
```python
from crewai import Agent, Task, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# 1. Create a knowledge source
product_info = StringKnowledgeSource(
    content="""Our product X1000 has the following features:
    - 10-hour battery life
    - Water-resistant
    - Available in black and silver
    Price: $299.99""",
    metadata={"category": "product"}
)

# 2. Create an agent with knowledge
sales_agent = Agent(
    role="Sales Representative",
    goal="Accurately answer customer questions about products",
    backstory="Expert in product features and customer service",
    knowledge_sources=[product_info]  # Attach knowledge to agent
)

# 3. Create a task
answer_task = Task(
    description="Answer: What colors is the X1000 available in and how much does it cost?",
    expected_output="A concise answer covering the available colors and the price.",
    agent=sales_agent
)

# 4. Create and run the crew
crew = Crew(
    agents=[sales_agent],
    tasks=[answer_task]
)

result = crew.kickoff()
```
## Knowledge Configuration
### Collection Names
Knowledge sources are organized into collections for better management:
```python
# Create knowledge sources with specific collections
tech_specs = StringKnowledgeSource(
    content="Technical specifications...",
    collection_name="product_tech_specs"
)

pricing_info = StringKnowledgeSource(
    content="Pricing information...",
    collection_name="product_pricing"
)
```
### Metadata and Filtering
Add metadata to organize and filter knowledge:
```python
knowledge_source = StringKnowledgeSource(
    content="Product details...",
    metadata={
        "category": "electronics",
        "product_line": "premium",
        "last_updated": "2024-03"
    }
)
```
### Chunking Configuration
Control how your content is split for processing:
```python
knowledge_source = PDFKnowledgeSource(
    file_path="product_manual.pdf",
    chunk_size=2000,   # Characters per chunk
    chunk_overlap=200  # Overlap between chunks
)
```
## Advanced Usage
### Custom Knowledge Sources
Create your own knowledge source by extending the base class:
```python
import requests

from crewai.knowledge.source import BaseKnowledgeSource

class APIKnowledgeSource(BaseKnowledgeSource):
    def __init__(self, api_endpoint: str, **kwargs):
        super().__init__(**kwargs)
        self.api_endpoint = api_endpoint

    def load_content(self):
        # Implement API data fetching
        response = requests.get(self.api_endpoint)
        response.raise_for_status()
        return response.json()

    def add(self):
        content = self.load_content()
        # Process and store content
        self.save_documents({"source": "api"})
```
### Embedder Configuration
Customize the embedding process:
```python
crew = Crew(
    agents=[agent],
    tasks=[task],
    knowledge_sources=[source],
    embedder_config={
        "model": "BAAI/bge-small-en-v1.5",
        "normalize": True,
        "max_length": 512
    }
)
```
## Best Practices
<AccordionGroup>
<Accordion title="Content Organization">
- Use meaningful collection names
- Add detailed metadata for filtering
- Keep chunk sizes appropriate for your content
- Consider content overlap for context preservation
</Accordion>
<Accordion title="Performance Tips">
- Use smaller chunk sizes for precise retrieval
- Implement metadata filtering for faster searches
- Choose appropriate embedding models for your use case
- Cache frequently accessed knowledge
</Accordion>
<Accordion title="Error Handling">
- Validate knowledge source content
- Handle missing or corrupted files
- Monitor embedding generation
- Implement fallback options
</Accordion>
</AccordionGroup>
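For example, the chunking and metadata tips above might combine like this. The values are illustrative, and this assumes PDF sources accept the same `metadata` field shown earlier for string sources:
```python
# Sketch: smaller chunks plus metadata for precise, filterable retrieval
manual_source = PDFKnowledgeSource(
    file_path="product_manual.pdf",
    chunk_size=800,     # smaller chunks for precise retrieval
    chunk_overlap=100,  # keep some overlap to preserve context
    metadata={"category": "manual", "product_line": "premium"}
)
```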
## Common Issues and Solutions
<AccordionGroup>
<Accordion title="Content Not Found">
If agents can't find relevant information:
- Check chunk sizes
- Verify knowledge source loading
- Review metadata filters
- Test with simpler queries first
</Accordion>
<Accordion title="Performance Issues">
If knowledge retrieval is slow:
- Reduce chunk sizes
- Optimize metadata filtering
- Consider using a lighter embedding model
- Cache frequently accessed content
</Accordion>
</AccordionGroup>

View File

@@ -1,205 +1,323 @@
---
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
---
# Large Language Models (LLMs) in CrewAI
<Note>
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
</Note>
## Key Concepts
<CardGroup cols={2}>
<Card title="LLM Basics" icon="brain">
Large Language Models are AI systems trained on vast amounts of text data. They power the intelligence of your CrewAI agents, enabling them to understand and generate human-like text.
</Card>
<Card title="Context Window" icon="window">
The context window determines how much text an LLM can process at once. Larger windows (e.g., 128K tokens) allow for more context but may be more expensive and slower.
</Card>
<Card title="Temperature" icon="temperature-three-quarters">
Temperature (0.0 to 1.0) controls response randomness. Lower values (e.g., 0.2) produce more focused, deterministic outputs, while higher values (e.g., 0.8) increase creativity and variability.
</Card>
<Card title="Provider Selection" icon="server">
Each LLM provider (e.g., OpenAI, Anthropic, Google) offers different models with varying capabilities, pricing, and features. Choose based on your needs for accuracy, speed, and cost.
</Card>
</CardGroup>
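To make the temperature trade-off concrete, here's a minimal sketch (the model name is just an example):
```python Code
from crewai import LLM

factual_llm = LLM(model="gpt-4o-mini", temperature=0.2)   # focused, deterministic
creative_llm = LLM(model="gpt-4o-mini", temperature=0.8)  # more varied, creative
```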
## Available Models and Their Capabilities
Here's a detailed breakdown of supported models and their capabilities:
<Tabs>
<Tab title="String Identifier">
```python Code
agent = Agent(llm="gpt-4o", ...)
```
</Tab>
<Tab title="OpenAI">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
<Tab title="LLM Instance">
```python Code
from crewai import LLM
<Note>
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>
<Tab title="Groq">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
</Tab>
<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>
<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemini | Varies by model | Multimodal capabilities |
<Info>
Provider selection should consider factors like:
- API availability in your region
- Pricing structure
- Required features (e.g., streaming, function calling)
- Performance requirements
</Info>
</Tab>
</Tabs>
## Setting Up Your LLM
There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set these variables in your environment:
```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>
# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini # Default if not set
# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
```
<Warning>
Never commit API keys to version control. Use environment files (.env) or your system's secret management.
</Warning>
</Tab>
<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
```yaml
researcher:
  # Agent Definition
  role: Research Specialist
  goal: Conduct comprehensive research and analysis
  backstory: A dedicated research professional with years of experience
  verbose: true

  # Model Selection (uncomment your choice)
  # OpenAI Models - Known for reliability and performance
  llm: openai/gpt-4o-mini
  # llm: openai/gpt-4          # More accurate but expensive
  # llm: openai/gpt-4-turbo    # Fast with large context
  # llm: openai/gpt-4o         # Optimized for longer texts
  # llm: openai/o1-preview     # Latest features
  # llm: openai/o1-mini        # Cost-effective

  # Azure Models - For enterprise deployments
  # llm: azure/gpt-4o-mini
  # llm: azure/gpt-4
  # llm: azure/gpt-35-turbo

  # Anthropic Models - Strong reasoning capabilities
  # llm: anthropic/claude-3-opus-20240229-v1:0
  # llm: anthropic/claude-3-sonnet-20240229-v1:0
  # llm: anthropic/claude-3-haiku-20240307-v1:0
  # llm: anthropic/claude-2.1
  # llm: anthropic/claude-2.0

  # Google Models - Good for general tasks
  # llm: gemini/gemini-pro
  # llm: gemini/gemini-1.5-pro-latest
  # llm: gemini/gemini-1.0-pro-latest

  # AWS Bedrock Models - Enterprise-grade
  # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
  # llm: bedrock/anthropic.claude-v2:1
  # llm: bedrock/amazon.titan-text-express-v1
  # llm: bedrock/meta.llama2-70b-chat-v1

  # Mistral Models - Open source alternative
  # llm: mistral/mistral-large-latest
  # llm: mistral/mistral-medium-latest
  # llm: mistral/mistral-small-latest

  # Groq Models - Fast inference
  # llm: groq/mixtral-8x7b-32768
  # llm: groq/llama-3.1-70b-versatile
  # llm: groq/llama-3.2-90b-text-preview
  # llm: groq/gemma2-9b-it
  # llm: groq/gemma-7b-it

  # IBM watsonx.ai Models - Enterprise features
  # llm: watsonx/ibm/granite-13b-chat-v2
  # llm: watsonx/meta-llama/llama-3-1-70b-instruct
  # llm: watsonx/bigcode/starcoder2-15b

  # Ollama Models - Local deployment
  # llm: ollama/llama3:70b
  # llm: ollama/codellama
  # llm: ollama/mistral
  # llm: ollama/mixtral
  # llm: ollama/phi

  # Fireworks AI Models - Specialized tasks
  # llm: fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct
  # llm: fireworks_ai/accounts/fireworks/models/mixtral-8x7b
  # llm: fireworks_ai/accounts/fireworks/models/zephyr-7b-beta

  # Perplexity AI Models - Research focused
  # llm: pplx/llama-3.1-sonar-large-128k-online
  # llm: pplx/mistral-7b-instruct
  # llm: pplx/codellama-34b-instruct
  # llm: pplx/mixtral-8x7b-instruct

  # Hugging Face Models - Community models
  # llm: huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct
  # llm: huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1
  # llm: huggingface/tiiuae/falcon-180B-chat
  # llm: huggingface/google/gemma-7b-it

  # Nvidia NIM Models - GPU-optimized
  # llm: nvidia_nim/meta/llama3-70b-instruct
  # llm: nvidia_nim/mistral/mixtral-8x7b
  # llm: nvidia_nim/google/gemma-7b

  # SambaNova Models - Enterprise AI
  # llm: sambanova/Meta-Llama-3.1-8B-Instruct
  # llm: sambanova/BioMistral-7B
  # llm: sambanova/Falcon-180B
```
<Info>
The YAML configuration allows you to:
- Version control your agent settings
- Easily switch between different models
- Share configurations across team members
- Document model choices and their purposes
</Info>
</Tab>
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:
```python
from crewai import LLM
# Basic configuration
llm = LLM(model="gpt-4")

# Advanced configuration with detailed parameters
llm = LLM(
    model="gpt-4o-mini",
    temperature=0.7,        # Higher for more creative outputs
    timeout=120,            # Seconds to wait for response
    max_tokens=4000,        # Maximum length of response
    top_p=0.9,              # Nucleus sampling parameter
    frequency_penalty=0.1,  # Reduce repetition
    presence_penalty=0.1,   # Encourage topic diversity
    response_format={"type": "json"},  # For structured outputs
    seed=42                 # For reproducible results
)
```
<Info>
Parameter explanations:
- `temperature`: Controls randomness (0.0-1.0)
- `timeout`: Maximum wait time for response
- `max_tokens`: Limits response length
- `top_p`: Alternative to temperature for sampling
- `frequency_penalty`: Reduces word repetition
- `presence_penalty`: Encourages new topics
- `response_format`: Specifies output structure
- `seed`: Ensures consistent outputs
</Info>
</Tab>
</Tabs>
## Advanced Features and Optimization
Learn how to get the most out of your LLM configuration:
<AccordionGroup>
<Accordion title="Context Window Management">
CrewAI includes smart context management features:
```python
from crewai import LLM

# CrewAI automatically handles:
# 1. Token counting and tracking
# 2. Content summarization when needed
# 3. Task splitting for large contexts

llm = LLM(
    model="gpt-4",
    max_tokens=4000,  # Limit response length
)
```
<Info>
Best practices for context management:
1. Choose models with appropriate context windows
2. Pre-process long inputs when possible
3. Use chunking for large documents
4. Monitor token usage to optimize costs
</Info>
</Accordion>
<Accordion title="Performance Optimization">
<Steps>
<Step title="Token Usage Optimization">
Choose the right context window for your task:
- Small tasks (up to 4K tokens): Standard models
- Medium tasks (between 4K-32K): Enhanced models
- Large tasks (over 32K): Large context models
```python
# Configure model with appropriate settings
llm = LLM(
    model="openai/gpt-4-turbo-preview",
    temperature=0.7,  # Adjust based on task
    max_tokens=4096,  # Set based on output needs
    timeout=300       # Longer timeout for complex tasks
)
```
<Tip>
- Lower temperature (0.1 to 0.3) for factual responses
- Higher temperature (0.7 to 0.9) for creative tasks
</Tip>
</Step>
<Step title="Best Practices">
1. Monitor token usage
2. Implement rate limiting
3. Use caching when possible
4. Set appropriate max_tokens limits
</Step>
</Steps>
<Info>
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
</AccordionGroup>
## Provider Configuration Examples
<AccordionGroup>
<Accordion title="OpenAI">
<Accordion title="OpenAI">
```python Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
```
Example usage:
```python Code
from crewai import LLM
@@ -211,193 +329,306 @@ These are examples of how to configure LLMs for your agent.
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Anthropic">
```python Code
ANTHROPIC_API_KEY=sk-ant-...
```
Example usage:
```python Code
llm = LLM(
    model="anthropic/claude-3-sonnet-20240229-v1:0",
    temperature=0.7
)
```
</Accordion>
<Accordion title="Groq">
</Accordion>
<Accordion title="Google">
```python Code
from crewai import LLM
GEMINI_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="groq/llama3-8b-8192",
api_key="your-api-key-here"
model="gemini/gemini-pro",
temperature=0.7
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Anthropic">
</Accordion>
<Accordion title="Azure">
```python Code
from crewai import LLM
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
```
Example usage:
```python Code
llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key-here"
model="azure/gpt-4",
api_version="2023-05-15"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="AWS Bedrock">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
</Accordion>
<Accordion title="Gemini">
</Accordion>
<Accordion title="Mistral">
```python Code
from crewai import LLM
MISTRAL_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="gemini/gemini-1.5-pro-002",
api_key="your-api-key-here"
model="mistral/mistral-large-latest",
temperature=0.7
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Perplexity AI (pplx-api)">
</Accordion>
<Accordion title="Groq">
```python Code
from crewai import LLM
GROQ_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="llama-3.1-sonar-large-128k-online",
base_url="https://api.perplexity.ai/",
api_key="your-api-key-here"
model="groq/llama-3.2-90b-text-preview",
temperature=0.7
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="IBM watsonx.ai">
You can use IBM Watson by seeting the following ENV vars:
</Accordion>
<Accordion title="IBM watsonx.ai">
```python Code
# Required
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
```
You can then define your agents llms by updating the `agents.yml`
```yaml Code
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis to gather relevant information,
synthesize findings, and produce well-documented insights.
backstory: A dedicated research professional with years of experience in academic
investigation, literature review, and data analysis, known for thorough and
methodical approaches to complex research questions.
verbose: true
llm: watsonx/meta-llama/llama-3-1-70b-instruct
```
You can also set up agents more dynamically as a base level LLM instance, like bellow:
Example usage:
```python Code
from crewai import LLM
llm = LLM(
model="watsonx/ibm/granite-13b-chat-v2",
base_url="https://api.watsonx.ai/v1",
api_key="your-api-key-here"
model="watsonx/meta-llama/llama-3-1-70b-instruct",
base_url="https://api.watsonx.ai/v1"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure:
```python Code
from crewai import LLM

llm = LLM(
    model="ollama/llama3:70b",
    base_url="http://localhost:11434"
)
```
</Accordion>
<Accordion title="Fireworks AI">
```python Code
FIREWORKS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
    temperature=0.7
)
```
</Accordion>
<Accordion title="Perplexity AI">
```python Code
PERPLEXITY_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="llama-3.1-sonar-large-128k-online",
base_url="https://api.perplexity.ai/"
)
```
</Accordion>
<Accordion title="Hugging Face">
```python Code
HUGGINGFACE_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
api_key="your-api-key-here",
base_url="your_api_endpoint"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Nvidia NIM">
```python Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="SambaNova">
```python Code
SAMBANOVA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="sambanova/Meta-Llama-3.1-8B-Instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Cerebras">
```python Code
# Required
CEREBRAS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="cerebras/llama3.1-70b",
temperature=0.7,
max_tokens=8192
)
```
<Info>
Cerebras features:
- Fast inference speeds
- Competitive pricing
- Good balance of speed and quality
- Support for long context windows
</Info>
</Accordion>
</AccordionGroup>
## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
from crewai import LLM

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Common Issues and Solutions

<Tabs>
<Tab title="Authentication">
<Warning>
Most authentication issues can be resolved by checking the API key format and environment variable names.
</Warning>

```bash
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
```
</Tab>
<Tab title="Model Names">
<Check>
Always include the provider prefix in model names.
</Check>

```python
# Correct
llm = LLM(model="openai/gpt-4")

# Incorrect
llm = LLM(model="gpt-4")
```
</Tab>
<Tab title="Context Length">
<Tip>
Use larger context models for extensive tasks.
</Tip>

```python
# Large context model
llm = LLM(model="openai/gpt-4o")  # 128K tokens
```
</Tab>
</Tabs>

## Getting Help

If you need assistance, these resources are available:
<CardGroup cols={3}>
<Card
title="LiteLLM Documentation"
href="https://docs.litellm.ai/docs/"
icon="book"
>
Comprehensive documentation for LiteLLM integration and troubleshooting common issues.
</Card>
<Card
title="GitHub Issues"
href="https://github.com/joaomdmoura/crewAI/issues"
icon="bug"
>
Report bugs, request features, or browse existing issues for solutions.
</Card>
<Card
title="Community Forum"
href="https://community.crewai.com"
icon="comment-question"
>
Connect with other CrewAI users, share experiences, and get help from the community.
</Card>
</CardGroup>
## Best Practices
1. **Choose the right model**: Balance capability and cost.
2. **Optimize prompts**: Clear, concise instructions improve output.
3. **Manage tokens**: Monitor and limit token usage for efficiency.
4. **Use appropriate temperature**: Lower for factual tasks, higher for creative ones (see the sketch below).
5. **Implement error handling**: Gracefully manage API errors and rate limits.
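To illustrate the temperature guidance, here is a minimal sketch. It assumes the same `LLM` class used throughout this page; the model name is only an example:

```python Code
from crewai import LLM

# Low temperature keeps factual extraction deterministic
factual_llm = LLM(model="openai/gpt-4o", temperature=0.1)

# Higher temperature encourages variety for creative writing
creative_llm = LLM(model="openai/gpt-4o", temperature=0.9)
```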
## Troubleshooting
- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust temperature or top_p.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter or optimize your input (see the retry sketch below).
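For transient API errors and rate limits, a simple backoff wrapper is one option. This is a minimal sketch, not part of the CrewAI API: `call_with_retries` is a hypothetical helper, and the `timeout` value is illustrative:

```python Code
import time

from crewai import LLM

llm = LLM(model="openai/gpt-4o", timeout=120)  # allow more time for long inputs

def call_with_retries(fn, max_attempts=3):
    """Retry a flaky call with exponential backoff (illustrative helper)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:  # rate limits and transient API errors
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```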
<Note>
Best Practices for API Key Security:
- Use environment variables or secure vaults (sketched below)
- Never commit keys to version control
- Rotate keys regularly
- Use separate keys for development and production
- Monitor key usage for unusual patterns
</Note>
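A minimal sketch of the environment-variable approach, assuming the key has been exported in your shell (or loaded from a `.env` file) before the process starts:

```python Code
import os

from crewai import LLM

# Read the key from the environment instead of hardcoding it
llm = LLM(
    model="openai/gpt-4o",
    api_key=os.environ["OPENAI_API_KEY"]
)
```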


@@ -1,6 +1,6 @@
---
title: Tasks
description: Detailed guide on managing and creating tasks within the CrewAI framework.
icon: list-check
---
@@ -8,41 +8,171 @@ icon: list-check
In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.
Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
### Task Execution Flow
Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
The execution flow is defined when creating the crew:
```python Code
crew = Crew(
agents=[agent1, agent2],
tasks=[task1, task2],
process=Process.sequential # or Process.hierarchical
)
```
## Task Attributes

| Attribute | Parameters | Type | Description |
| :-------- | :--------- | :--- | :---------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Output** _(optional)_ | `output` | `Optional[TaskOutput]` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Whether the task should involve human review at the end, useful for tasks needing human oversight. Defaults to False. |
| **Converter Class** _(optional)_ | `converter_cls` | `Optional[Type[Converter]]` | A converter class used to export structured output. Defaults to None. |
## Creating Tasks
There are two ways to create tasks in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
Here's an example of how to configure tasks using YAML:
```yaml tasks.yaml
research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2024.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```'
agent: reporting_analyst
output_file: report.md
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task']
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task']
)
@crew
def crew(self) -> Crew:
return Crew(
agents=[
self.researcher(),
self.reporting_analyst()
],
tasks=[
self.research_task(),
self.reporting_task()
],
process=Process.sequential
)
```
<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
Alternatively, you can define tasks directly in your code without using YAML configuration:
```python task.py
from crewai import Task
research_task = Task(
description="""
Conduct a thorough research about AI Agents.
Make sure you find any interesting and relevant information given
the current year is 2024.
""",
expected_output="""
A list with 10 bullet points of the most relevant information about AI Agents
""",
agent=researcher
)
reporting_task = Task(
description="""
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
""",
expected_output="""
A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```'
""",
agent=reporting_analyst,
output_file="report.md"
)
```
@@ -52,6 +182,8 @@ task = Task(
## Task Output
Understanding task outputs is crucial for building effective AI workflows. CrewAI provides a structured way to handle task results through the `TaskOutput` class, which supports multiple output formats and can be easily passed between tasks.
The output of a task in the CrewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
@@ -112,6 +244,25 @@ if task_output.pydantic:
print(f"Pydantic Output: {task_output.pydantic}")
```
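Putting this together, here is a minimal sketch of inspecting a task's output after a crew run. It assumes a `crew` containing a `research_task` like the ones defined above:

```python Code
result = crew.kickoff(inputs={'topic': 'AI Agents'})

# Each task exposes its own TaskOutput after execution
task_output = research_task.output

print(f"Raw Output: {task_output.raw}")
if task_output.json_dict:
    print(f"JSON Output: {task_output.json_dict}")
```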
## Task Dependencies and Context
Tasks can depend on the output of other tasks using the `context` attribute. For example:
```python Code
research_task = Task(
description="Research the latest developments in AI",
expected_output="A list of recent AI developments",
agent=researcher
)
analysis_task = Task(
description="Analyze the research findings and identify key trends",
expected_output="Analysis report of AI trends",
agent=analyst,
context=[research_task] # This task will wait for research_task to complete
)
```
## Integrating Tools with Tasks
Leverage tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) for enhanced task performance and agent interaction.
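For example, a task can be limited to a specific search tool. This is a minimal sketch, assuming a `researcher` agent and the `SerperDevTool` shown earlier in this guide:

```python Code
from crewai import Task
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

task = Task(
    description="Research the latest developments in AI",
    expected_output="A list of recent AI developments",
    agent=researcher,
    tools=[search_tool]  # the agent can only use these tools for this task
)
```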
@@ -167,16 +318,16 @@ This is useful when you have a task that depends on the output of another task t
# ...
research_ai_task = Task(
description="Research the latest developments in AI",
expected_output="A list of recent AI developments",
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
research_ops_task = Task(
description="Research the latest developments in AI Ops",
expected_output="A list of recent AI Ops developments",
async_execution=True,
agent=research_agent,
tools=[search_tool]
@@ -184,7 +335,7 @@ research_ops_task = Task(
write_blog_task = Task(
description="Write a full blog post about the importance of AI and its latest news",
expected_output="Full blog post that is 4 paragraphs long",
agent=writer_agent,
context=[research_ai_task, research_ops_task]
)


@@ -1,128 +1,145 @@
---
title: Installation
description: Get started with CrewAI - Install, configure, and build your first AI crew
icon: wrench
---
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <=3.13`. Here's how to check your version:
```bash
python3 --version
```
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
# Installing CrewAI
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get you set up! 🚀
<Steps>
<Step title="Install CrewAI">
Install CrewAI with all recommended tools using either method:
```shell Terminal
pip install 'crewai[tools]'
```
or
```shell Terminal
pip install crewai crewai-tools
```
<Note>
Both methods install the core package and additional tools needed for most use cases.
</Note>
</Step>
<Step title="Verify the installation">
To verify that `crewai` and `crewai-tools` are installed correctly, run the following command
<CodeGroup>
```shell Terminal
pip freeze | grep crewai
```
</CodeGroup>
You should see the version number of `crewai` and `crewai-tools`.
<CodeGroup>
```markdown Version
crewai==X.X.X
crewai-tools==X.X.X
```
</CodeGroup>
If you see the version number, then the installation was successful! 🎉
<Step title="Upgrade CrewAI (Existing Installations Only)">
If you have an older version of CrewAI installed, you can upgrade it:
```shell Terminal
pip install --upgrade crewai crewai-tools
```
<Warning>
If you see a Poetry-related warning, you'll need to migrate to our new dependency manager:
```shell Terminal
crewai update
```
This will update your project to use [UV](https://github.com/astral-sh/uv), our new faster dependency manager.
</Warning>
<Note>
Skip this step if you're doing a fresh installation.
</Note>
</Step>
<Step title="Verify Installation">
Check your installed versions:
```shell Terminal
pip freeze | grep crewai
```
You should see something like:
```markdown Output
crewai==X.X.X
crewai-tools==X.X.X
```
<Check>Installation successful! You're ready to create your first crew.</Check>
</Step>
</Steps>
# Creating a New Project

The next step is to create a new CrewAI project.

<Info>
We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
</Info>
<Steps>
<Step title="Create a new CrewAI project using the YAML Template Configuration">
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
<CodeGroup>
```shell Terminal
crewai create crew <project_name>
```
</CodeGroup>
This command creates a new project folder with the following structure:
| File/Directory | Description |
|:------------------------|:-------------------------------------------------|
| `my_project/` | Root directory of the project |
| ├── `.gitignore` | Specifies files and directories to ignore in Git |
| ├── `pyproject.toml` | Project configuration and dependencies |
| ├── `README.md` | Project documentation |
| ├── `.env` | Environment variables |
| └── `src/` | Source code directory |
| &nbsp;&nbsp;&nbsp;&nbsp;└── `my_project/` | Main application package |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;├── `__init__.py` | Marks the directory as a Python package |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;├── `main.py` | Main application script |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;├── `crew.py` | Crew-related functionalities |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;├── `tools/` | Custom tools directory |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;│ ├── `custom_tool.py` | Custom tool implementation |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;│ └── `__init__.py` | Marks tools directory as a package |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;└── `config/` | Configuration files directory |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;├── `agents.yaml` | Agent configurations |
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;└── `tasks.yaml` | Task configurations |
<Step title="Generate Project Structure">
Run the CrewAI CLI command:
```shell Terminal
crewai create crew <project_name>
```
This creates a new project with the following structure:
<Frame>
```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
└── my_project/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
</Frame>
</Step>
<Step title="Customize your project">
To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
<Step title="Customize Your Project">
Your project will contain these essential files:
| File | Purpose |
| --- | --- |
| `agents.yaml` | Define your AI agents and their roles |
| `tasks.yaml` | Set up agent tasks and workflows |
| `.env` | Store API keys and environment variables |
| `main.py` | Project entry point and execution flow |
| `crew.py` | Crew orchestration and coordination |
| `tools/` | Directory for custom agent tools |
<Tip>
Start by editing `agents.yaml` and `tasks.yaml` to define your crew's behavior.
Keep sensitive information like API keys in `.env`.
</Tip>
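For example, an agent entry in `agents.yaml` might look like the sketch below; the role, goal, and backstory are placeholders to adapt to your use case:

```yaml Code
researcher:
  role: Research Specialist
  goal: Find and summarize the latest developments in {topic}
  backstory: An experienced analyst with a talent for spotting relevant details
```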
</Step>
</Steps>
## Next Steps

Now that you have installed `crewai` and `crewai-tools`, you're ready to spin up your first crew!
<CardGroup cols={2}>
<Card
title="Build Your First Agent"
icon="code"
href="/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
<Card
title="Join the Community"
icon="comments"
href="https://community.crewai.com"
>
Connect with other developers, get help, and share your CrewAI experiences.
</Card>
</CardGroup>


@@ -1,49 +1,85 @@
---
title: Introduction
description: Build AI agent teams that work together to tackle complex tasks
icon: handshake
---
# What is CrewAI?
**CrewAI is a cutting-edge framework for orchestrating autonomous AI agents.**

CrewAI enables you to create AI teams where each agent has specific roles, tools, and goals, working together to accomplish complex tasks.

Think of it as assembling your dream team - each member (agent) brings unique skills and expertise, collaborating seamlessly to achieve your objectives.
## How CrewAI Works
<Note>
Just like a company has departments (Sales, Engineering, Marketing) working together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles collaborating to accomplish complex tasks.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="crewAI-mindmap.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
|:----------|:-----------:|:------------|
| **Crew** | The top-level organization | • Manages AI agent teams<br/>• Oversees workflows<br/>• Ensures collaboration<br/>• Delivers outcomes |
| **AI Agents** | Specialized team members | • Have specific roles (researcher, writer)<br/>• Use designated tools<br/>• Can delegate tasks<br/>• Make autonomous decisions |
| **Process** | Workflow management system | • Defines collaboration patterns<br/>• Controls task assignments<br/>• Manages interactions<br/>• Ensures efficient execution |
| **Tasks** | Individual assignments | • Have clear objectives<br/>• Use specific tools<br/>• Feed into larger process<br/>• Produce actionable results |
### How It All Works Together
1. The **Crew** organizes the overall operation
2. **AI Agents** work on their specialized tasks
3. The **Process** ensures smooth collaboration
4. **Tasks** get completed to achieve the goal (see the sketch below)
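In code, that composition looks roughly like this minimal sketch; the roles, goals, and task descriptions are placeholders:

```python Code
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Find recent AI news", backstory="A thorough analyst")
writer = Agent(role="Writer", goal="Summarize findings clearly", backstory="A concise technical writer")

research = Task(description="Research the latest AI news", expected_output="A bullet list of findings", agent=researcher)
summary = Task(description="Write a short report from the research", expected_output="A one-page report", agent=writer, context=[research])

crew = Crew(agents=[researcher, writer], tasks=[research, summary], process=Process.sequential)
result = crew.kickoff()
```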
## Key Features
<CardGroup cols={2}>
<Card title="Role-Based Agents" icon="users">
Create specialized agents with defined roles, expertise, and goals - from researchers to analysts to writers
</Card>
<Card title="Flexible Tools" icon="screwdriver-wrench">
Equip agents with custom tools and APIs to interact with external services and data sources
</Card>
<Card title="Intelligent Collaboration" icon="people-arrows">
Agents work together, sharing insights and coordinating tasks to achieve complex objectives
</Card>
<Card title="Task Management" icon="list-check">
Define sequential or parallel workflows, with agents automatically handling task dependencies
</Card>
</CardGroup>
## Why Choose CrewAI?
- 🧠 **Autonomous Operation**: Agents make intelligent decisions based on their roles and available tools
- 📝 **Natural Interaction**: Agents communicate and collaborate like human team members
- 🛠️ **Extensible Design**: Easy to add new tools, roles, and capabilities
- 🚀 **Production Ready**: Built for reliability and scalability in real-world applications
<CardGroup cols={3}>
<Card
  title="Install CrewAI"
  icon="wrench"
  href="/installation"
>
Get started with CrewAI in your development environment.
</Card>
<Card
  title="Quick Start"
  icon="bolt"
  href="/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
<Card
  title="Join the Community"
  icon="comments"
  href="https://community.crewai.com"
>
Connect with other developers, get help, and share your CrewAI experiences.
</Card>
</CardGroup>


@@ -356,4 +356,21 @@ This will clear the crew's memory, allowing for a fresh start.
## Deploying Your Project
The easiest way to deploy your crew is through CrewAI Enterprise, where you can deploy your crew in a few clicks.
<CardGroup cols={2}>
<Card
title="Deploy on Enterprise"
icon="rocket"
href="http://app.crewai.com"
>
Get started with CrewAI Enterprise and deploy your crew in a production environment with just a few clicks.
</Card>
<Card
title="Join the Community"
icon="comments"
href="https://community.crewai.com"
>
Join our open source community to discuss ideas, share your projects, and connect with other CrewAI developers.
</Card>
</CardGroup>