Compare commits

..

7 Commits

Author SHA1 Message Date
Devin AI
196c86dbb2 Add API cross-references and finalize documentation improvements
Co-Authored-By: João <joao@crewai.com>
2025-06-21 21:14:31 +00:00
Devin AI
1df9b1b166 Add negative test cases and improve test coverage
- Add test for malformed template handling
- Add test for missing required parameters with proper error handling
- Improve test documentation and edge case coverage

Addresses GitHub review feedback from joaomdmoura and mplachta

Co-Authored-By: João <joao@crewai.com>
2025-06-21 21:01:12 +00:00
Devin AI
80e2c34b7f Address comprehensive GitHub review feedback
- Add explicit security warnings about prompt injection and stop sequence pitfalls
- Enhance troubleshooting section with additional actionable guidance
- Improve default parameter behavior documentation
- Add cross-references for better navigation
- Clean up duplicate warnings from previous commits

Addresses feedback from joaomdmoura and mplachta reviews

Co-Authored-By: João <joao@crewai.com>
2025-06-21 20:54:54 +00:00
Devin AI
16110623b5 Fix failing CI test by correcting Task API usage
- Replace non-existent 'output_format' attribute with 'output_json'
- Update test_custom_format_instructions to use correct Pydantic model approach
- Enhance test_stop_words_configuration to properly test agent executor creation
- Update documentation example to use correct API (output_json instead of output_format)
- Validated API corrections work with local test script

Co-Authored-By: João <joao@crewai.com>
2025-06-21 20:45:55 +00:00
Devin AI
a6d9741d18 Fix failing tests and address comprehensive GitHub review feedback
- Fix undefined i18n variable error in test_i18n_slice_access method
- Replace Mock tools with proper BaseTool instances to fix validation errors
- Add comprehensive docstrings to all test methods explaining validation purpose
- Add pytest fixtures for test isolation with @pytest.fixture(autouse=True)
- Add parametrized tests for agent initialization patterns using @pytest.mark.parametrize
- Add negative test cases for default template behavior and incomplete templates
- Remove unused Mock and patch imports to fix lint errors
- Improve test organization by moving Pydantic models to top of file
- Add metadata (title, description, categoryId, priority) to documentation frontmatter
- Add showLineNumbers to all Python code blocks for better readability
- Add explicit security warnings about stop sequence pitfalls and template injection
- Improve header hierarchy consistency using #### for subsections
- Add cross-references between troubleshooting sections
- Document default parameter behaviors explicitly
- Add additional troubleshooting steps for debugging prompts

Addresses all actionable feedback from GitHub reviews by joaomdmoura and mplachta.
Fixes failing CI tests by using proper CrewAI API patterns and BaseTool instances.

Co-Authored-By: João <joao@crewai.com>
2025-06-21 20:37:04 +00:00
Devin AI
f341d25fe6 Fix lint and import issues in test file
- Remove unused imports (pytest, Crew) to fix lint errors
- Fix LiteAgent import path from crewai.lite_agent
- Resolves CI test collection error for Python 3.10

Co-Authored-By: João <joao@crewai.com>
2025-06-21 20:24:09 +00:00
Devin AI
ba052dc7f3 Add comprehensive prompt customization documentation
- Create detailed guide explaining CrewAI's prompt generation system
- Document template system stored in translations/en.json
- Explain prompt assembly process using Prompts class
- Document LiteAgent prompt generation methods
- Show how to customize system/user prompts with templates
- Explain format parameter and structured output control
- Document stop words configuration through response_template
- Add practical examples for common customization scenarios
- Include test file validating all documentation examples

Addresses issue #3045: How system and user prompts are generated

Co-Authored-By: João <joao@crewai.com>
2025-06-21 20:20:16 +00:00
363 changed files with 1602 additions and 35087 deletions

View File

@@ -71,7 +71,7 @@ There are two ways to create agents in CrewAI: using **YAML configuration (recom
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
@@ -312,7 +312,7 @@ multimodal_agent = Agent(
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
@@ -425,7 +425,7 @@ strict_agent = Agent(
```python Code
# Perfect for document processing
document_processor = Agent(
role="Document Analyst",
role="Document Analyst",
goal="Extract insights from large research papers",
backstory="Expert at analyzing extensive documentation",
respect_context_window=True, # Handle large documents gracefully

View File

@@ -45,7 +45,7 @@ There are two ways to create crews in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators
@@ -66,8 +66,8 @@ class YourCrewName:
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff
def prepare_inputs(self, inputs):
@@ -111,7 +111,7 @@ class YourCrewName:
def crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected by the @agent decorator
tasks=self.tasks, # Automatically collected by the @task decorator.
tasks=self.tasks, # Automatically collected by the @task decorator.
process=Process.sequential,
verbose=True,
)

View File

@@ -66,7 +66,7 @@ There are two ways to create tasks in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
@@ -277,7 +277,7 @@ formatted_task = Task(
When `markdown=True`, the agent will receive additional instructions to format the output using:
- `#` for headers
- `**text**` for bold text
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code

File diff suppressed because it is too large Load Diff

View File

@@ -1,231 +0,0 @@
---
title: "Maxim Integration"
description: "Start Agent monitoring, evaluation, and observability"
icon: "infinity"
---
# Maxim Overview
Maxim AI provides comprehensive agent monitoring, evaluation, and observability for your CrewAI applications. With Maxim's one-line integration, you can easily trace and analyse agent interactions, performance metrics, and more.
## Features
### Prompt Management
Maxim's Prompt Management capabilities enable you to create, organize, and optimize prompts for your CrewAI agents. Rather than hardcoding instructions, leverage Maxim's SDK to dynamically retrieve and apply version-controlled prompts.
<Tabs>
<Tab title="Prompt Playground">
Create, refine, experiment with, and deploy your prompts via the playground. Organize your prompts using folders and versions, experiment with real-world cases by linking tools and context, and deploy based on custom logic.
Easily experiment across models by [**configuring models**](https://www.getmaxim.ai/docs/introduction/quickstart/setting-up-workspace#add-model-api-keys) and selecting the relevant model from the dropdown at the top of the prompt playground.
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_playground.png'> </img>
</Tab>
<Tab title="Prompt Versions">
As teams build their AI applications, a big part of experimentation is iterating on the prompt structure. In order to collaborate effectively and organize your changes clearly, Maxim allows prompt versioning and comparison runs across versions.
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_versions.png'> </img>
</Tab>
<Tab title="Prompt Comparisons">
Iterating on prompts as you evolve your AI application requires experiments across models, prompt structures, and more. To compare versions and make informed decisions about changes, the comparison playground provides a side-by-side view of results.
## **Why use Prompt comparison?**
Prompt comparison combines multiple single Prompts into one view, enabling a streamlined approach for various workflows:
1. **Model comparison**: Evaluate the performance of different models on the same Prompt.
2. **Prompt optimization**: Compare different versions of a Prompt to identify the most effective formulation.
3. **Cross-Model consistency**: Ensure consistent outputs across various models for the same Prompt.
4. **Performance benchmarking**: Analyze metrics like latency, cost, and token count across different models and Prompts.
</Tab>
</Tabs>
### Observability & Evals
Maxim AI provides comprehensive observability & evaluation for your CrewAI agents, helping you understand exactly what's happening during each execution.
<Tabs>
<Tab title="Agent Tracing">
Effortlessly track your agent's complete lifecycle, including tool calls, agent trajectories, and decision flows.
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_agent_tracking.png'> </img>
</Tab>
<Tab title="Analytics + Evals">
Run detailed evaluations on full traces or individual nodes with support for:
- Multi-step interactions and granular trace analysis
- Session Level Evaluations
- Simulations for real-world testing
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_trace_eval.png'> </img>
<CardGroup cols={3}>
<Card title="Auto Evals on Logs" icon="e" href="https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/auto-evaluation">
<p>
Evaluate captured logs automatically from the UI based on filters and sampling
</p>
</Card>
<Card title="Human Evals on Logs" icon="hand" href="https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/human-evaluation">
<p>
Use human evaluation or rating to assess the quality of your logs and evaluate them.
</p>
</Card>
<Card title="Node Level Evals" icon="road" href="https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/node-level-evaluation">
<p>
Evaluate any component of your trace or log to gain insights into your agent's behavior.
</p>
</Card>
</CardGroup>
---
</Tab>
<Tab title="Alerting">
Set thresholds on **error**, **cost**, **token usage**, **user feedback**, and **latency**, and get real-time alerts via Slack or PagerDuty.
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_alerts_1.png'> </img>
</Tab>
<Tab title="Dashboards">
Visualize Traces over time, usage metrics, latency & error rates with ease.
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_dashboard_1.png'> </img>
</Tab>
</Tabs>
## Getting Started
### Prerequisites
- Python version >= 3.10
- A Maxim account ([sign up here](https://getmaxim.ai/))
- A Maxim API key (generated from your Maxim account)
- A CrewAI project
### Installation
Install the Maxim SDK via pip:
```shell
pip install maxim-py
```
Or add it to your `requirements.txt`:
```
maxim-py
```
### Basic Setup
### 1. Set up environment variables
```shell
# Create a `.env` file in your project root:

# Maxim API Configuration
MAXIM_API_KEY=your_api_key_here
MAXIM_LOG_REPO_ID=your_repo_id_here
```
### 2. Import the required packages
```python
from crewai import Agent, Task, Crew, Process
from maxim import Maxim
from maxim.logger.crewai import instrument_crewai
```
### 3. Initialise Maxim with your API key
```python
# Instrument CrewAI with just one line (keep a reference for cleanup later)
maxim = Maxim()
instrument_crewai(maxim.logger())
```
### 4. Create and run your CrewAI application as usual
```python
# Create your agent (assumes `llm` is an LLM instance or model string configured earlier)
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI',
    backstory="You are an expert researcher at a tech think tank...",
    verbose=True,
    llm=llm
)

# Define the task
research_task = Task(
    description="Research the latest AI advancements...",
    expected_output="",
    agent=researcher
)

# Configure and run the crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    verbose=True
)

try:
    result = crew.kickoff()
finally:
    maxim.cleanup()  # Ensure cleanup happens even if errors occur
```
That's it! All your CrewAI agent interactions will now be logged and available in your Maxim dashboard.
Check this Google Colab Notebook for a quick reference - [Notebook](https://colab.research.google.com/drive/1ZKIZWsmgQQ46n8TH9zLsT1negKkJA6K8?usp=sharing)
## Viewing Your Traces
After running your CrewAI application:
1. Log in to your [Maxim Dashboard](https://app.getmaxim.ai/login)
2. Navigate to your repository
3. View detailed agent traces, including:
- Agent conversations
- Tool usage patterns
- Performance metrics
- Cost analytics
<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/crewai_traces.gif'> </img>
## Troubleshooting
### Common Issues
- **No traces appearing**: Ensure your API key and repository ID are correct
- Ensure you've called **`instrument_crewai()`** _before_ running your crew. This initializes the logging hooks correctly.
- Set `debug=True` in your `instrument_crewai()` call to surface any internal errors:
```python
instrument_crewai(logger, debug=True)
```
- Configure your agents with `verbose=True` to capture detailed logs:
```python
agent = CrewAgent(..., verbose=True)
```
- Double-check that `instrument_crewai()` is called **before** creating or executing agents. This might be obvious, but it's a common oversight.
## Resources
<CardGroup cols="3">
<Card title="CrewAI Docs" icon="book" href="https://docs.crewai.com/">
Official CrewAI documentation
</Card>
<Card title="Maxim Docs" icon="book" href="https://getmaxim.ai/docs">
Official Maxim documentation
</Card>
<Card title="Maxim Github" icon="github" href="https://github.com/maximhq">
Maxim Github
</Card>
</CardGroup>

View File

@@ -1,236 +0,0 @@
---
title: Oxylabs Scrapers
description: >
Oxylabs Scrapers allow you to easily access information from the respective sources. Please see the list of available sources below:
- `Amazon Product`
- `Amazon Search`
- `Google Search`
- `Universal`
icon: globe
---
## Installation
Get the credentials by creating an Oxylabs Account [here](https://oxylabs.io).
```shell
pip install 'crewai[tools]' oxylabs
```
Check [Oxylabs Documentation](https://developers.oxylabs.io/scraping-solutions/web-scraper-api/targets) to get more information about API parameters.
# `OxylabsAmazonProductScraperTool`
### Example
```python
from crewai_tools import OxylabsAmazonProductScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonProductScraperTool()
result = tool.run(query="AAAAABBBBCC")
print(result)
```
### Parameters
- `query` - 10-character ASIN code.
- `domain` - domain localization for Amazon.
- `geo_location` - the _Deliver to_ location.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to true.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsAmazonProductScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonProductScraperTool(
config={
"domain": "com",
"parse": True,
"context": [
{
"key": "autoselect_variant",
"value": True
}
]
}
)
result = tool.run(query="AAAAABBBBCC")
print(result)
```
# `OxylabsAmazonSearchScraperTool`
### Example
```python
from crewai_tools import OxylabsAmazonSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonSearchScraperTool()
result = tool.run(query="headsets")
print(result)
```
### Parameters
- `query` - Amazon search term.
- `domain` - domain localization for Amazon.
- `start_page` - starting page number.
- `pages` - number of pages to retrieve.
- `geo_location` - the _Deliver to_ location.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to true.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsAmazonSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonSearchScraperTool(
config={
"domain": 'nl',
"start_page": 2,
"pages": 2,
"parse": True,
"context": [
{'key': 'category_id', 'value': 16391693031}
],
}
)
result = tool.run(query='nirvana tshirt')
print(result)
```
# `OxylabsGoogleSearchScraperTool`
### Example
```python
from crewai_tools import OxylabsGoogleSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsGoogleSearchScraperTool()
result = tool.run(query="iPhone 16")
print(result)
```
### Parameters
- `query` - search keyword.
- `domain` - domain localization for Google.
- `start_page` - starting page number.
- `pages` - number of pages to retrieve.
- `limit` - number of results to retrieve on each page.
- `locale` - `Accept-Language` header value which changes your Google search page web interface language.
- `geo_location` - the geographical location that the result should be adapted for. Using this parameter correctly is extremely important to get the right data.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to true.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsGoogleSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsGoogleSearchScraperTool(
config={
"parse": True,
"geo_location": "Paris, France",
"user_agent_type": "tablet",
}
)
result = tool.run(query="iPhone 16")
print(result)
```
# `OxylabsUniversalScraperTool`
### Example
```python
from crewai_tools import OxylabsUniversalScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsUniversalScraperTool()
result = tool.run(url="https://ip.oxylabs.io")
print(result)
```
### Parameters
- `url` - website url to scrape.
- `user_agent_type` - device type and browser.
- `geo_location` - sets the proxy's geolocation to retrieve data.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to `true`, as long as a dedicated parser exists for the submitted URL's page type.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsUniversalScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsUniversalScraperTool(
config={
"render": "html",
"user_agent_type": "mobile",
"context": [
{"key": "force_headers", "value": True},
{"key": "force_cookies", "value": True},
{
"key": "headers",
"value": {
"Custom-Header-Name": "custom header content",
},
},
{
"key": "cookies",
"value": [
{"key": "NID", "value": "1234567890"},
{"key": "1P JAR", "value": "0987654321"},
],
},
{"key": "http_method", "value": "get"},
{"key": "follow_redirects", "value": True},
{"key": "successful_status_codes", "value": [808, 909]},
],
}
)
result = tool.run(url="https://ip.oxylabs.io")
print(result)
```

View File

@@ -10,11 +10,11 @@ icon: "people-arrows"
## Getting Started
<iframe
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/-kSOTtYzgEw"
title="Building Crews with CrewAI CLI"
title="Building Crews with CrewAI CLI"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
@@ -23,13 +23,13 @@ icon: "people-arrows"
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/en/installation">
<Card title="Follow Standard Installation" icon="wrench" href="/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
<Card title="Quickstart Tutorial" icon="rocket" href="/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
@@ -40,4 +40,4 @@ For Enterprise-specific support or questions, contact our dedicated support team
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>
</Card>

View File

@@ -122,7 +122,7 @@ The CrewAI CLI offers several commands to manage your deployments:
# Remove a deployment
crewai deploy remove <deployment_id>
```
```
## Option 2: Deploy Directly via Web Interface
@@ -132,14 +132,14 @@ You can also deploy your crews directly through the CrewAI Enterprise web interf
<Step title="Pushing to GitHub">
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/quickstart).
</Step>
<Step title="Connecting GitHub to CrewAI Enterprise">
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the button "Connect GitHub"
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
@@ -201,20 +201,20 @@ For security reasons, the following environment variable naming patterns are **a
**Blocked Patterns:**
- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts
**Specific Blocked Variables:**
- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables
### Allowed Exceptions
Some variables are explicitly allowed despite matching blocked patterns:
- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
@@ -228,7 +228,7 @@ OPENAI_TOKEN=sk-...
DATABASE_PASSWORD=mypassword
API_SECRET=secret123
# ✅ Use these naming patterns instead
# ✅ Use these naming patterns instead
OPENAI_API_KEY=sk-...
DATABASE_CREDENTIALS=mypassword
API_CONFIG=secret123

View File

@@ -56,8 +56,8 @@ CrewAI Enterprise extends the power of the open-source framework with features d
<Steps>
<Step title="Sign up for an account">
Create your account at [app.crewai.com](https://app.crewai.com)
<Card
title="Sign Up"
<Card
title="Sign Up"
icon="user"
href="https://app.crewai.com/signup"
>
@@ -66,34 +66,34 @@ CrewAI Enterprise extends the power of the open-source framework with features d
</Step>
<Step title="Build your first crew">
Use code or Crew Studio to build your crew
<Card
title="Build Crew"
<Card
title="Build Crew"
icon="paintbrush"
href="/en/enterprise/guides/build-crew"
href="/enterprise/guides/build-crew"
>
Build Crew
</Card>
</Step>
<Step title="Deploy your crew">
Deploy your crew to the Enterprise platform
<Card
title="Deploy Crew"
<Card
title="Deploy Crew"
icon="rocket"
href="/en/enterprise/guides/deploy-crew"
href="/enterprise/guides/deploy-crew"
>
Deploy Crew
</Card>
</Step>
<Step title="Access your crew">
Integrate with your crew via the generated API endpoints
<Card
title="API Access"
<Card
title="API Access"
icon="code"
href="/en/enterprise/guides/kickoff-crew"
href="/enterprise/guides/use-crew-api"
>
Use the Crew API
</Card>
</Step>
</Steps>
For detailed instructions, check out our [deployment guide](/en/enterprise/guides/deploy-crew) or click the button below to get started.
For detailed instructions, check out our [deployment guide](/enterprise/guides/deploy-crew) or click the button below to get started.

View File

@@ -48,12 +48,12 @@ icon: "circle-question"
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer. This input can provide extra context, clarify ambiguities, or validate the agent's output.
For detailed implementation guidance, see our [Human-in-the-Loop guide](/en/how-to/human-in-the-loop).
For detailed implementation guidance, see our [Human-in-the-Loop guide](/how-to/human-in-the-loop).
</Accordion>
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
CrewAI provides a range of advanced customization options:
- **Language Model Customization**: Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`)
- **Performance and Debugging Settings**: Adjust an agent's performance and monitor its operations
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization
@@ -129,12 +129,12 @@ icon: "circle-question"
Here's a tutorial on how to consistently get structured outputs from your agents:
<Frame>
<iframe
<iframe
height="400"
width="100%"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen></iframe>
</Frame>
</Accordion>
@@ -148,4 +148,4 @@ icon: "circle-question"
<Accordion title="How can you control the maximum number of requests per minute that the entire crew can perform?">
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Accordion>
</AccordionGroup>
</AccordionGroup>

View File

@@ -44,7 +44,7 @@ Based on your agent configuration, CrewAI adds different default instructions:
"I MUST use these formats, my job depends on it!"
```
#### For Agents With Tools
#### For Agents With Tools
```text
"IMPORTANT: Use the following format in your response:
@@ -127,7 +127,7 @@ custom_prompt_template = """Task: {input}
Please complete this task thoughtfully."""
agent = Agent(
role="Research Assistant",
role="Research Assistant",
goal="Help users find accurate information",
backstory="You are a helpful research assistant.",
system_template=custom_system_template,
@@ -164,7 +164,7 @@ crew = Crew(
```python
agent = Agent(
role="Analyst",
goal="Analyze data",
goal="Analyze data",
backstory="Expert analyst",
use_system_prompt=False # Disables system prompt separation
)
@@ -174,13 +174,13 @@ agent = Agent(
For production transparency, integrate with observability platforms to monitor all prompts and LLM interactions. This allows you to see exactly what prompts (including default instructions) are being sent to your LLMs.
See our [Observability documentation](/en/observability/overview) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
See our [Observability documentation](/how-to/observability) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
### Best Practices for Production
1. **Always inspect generated prompts** before deploying to production
2. **Use custom templates** when you need full control over prompt content
3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/en/observability/overview))
3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/how-to/observability))
4. **Test with different LLMs** as default instructions may work differently across models
5. **Document your prompt customizations** for team transparency
@@ -313,4 +313,4 @@ Low-level prompt customization in CrewAI opens the door to super custom, complex
<Check>
You now have the foundation for advanced prompt customizations in CrewAI. Whether you're adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.
</Check>
</Check>

View File

@@ -448,5 +448,5 @@ Congratulations! You now understand the principles and practices of effective ag
## Next Steps
- Experiment with different agent configurations for your specific use case
- Learn about [building your first crew](/en/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/en/guides/flows/first-flow) for more advanced orchestration
- Learn about [building your first crew](/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced orchestration

View File

@@ -11,7 +11,7 @@ When building AI applications with CrewAI, one of the most important decisions y
At the heart of this decision is understanding the relationship between **complexity** and **precision** in your application:
<Frame caption="Complexity vs. Precision Matrix for CrewAI Applications">
<img src="/images/complexity_precision.png" alt="Complexity vs. Precision Matrix" />
<img src="../../images/complexity_precision.png" alt="Complexity vs. Precision Matrix" />
</Frame>
This matrix helps visualize how different approaches align with varying requirements for complexity and precision. Let's explore what each quadrant means and how it guides your architectural choices.
@@ -497,7 +497,7 @@ You now have a framework for evaluating CrewAI use cases and choosing the right
## Next Steps
- Learn more about [crafting effective agents](/en/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/en/guides/crews/first-crew)
- Dive into [mastering flow state management](/en/guides/flows/mastering-flow-state)
- Check out the [core concepts](/en/concepts/agents) for deeper understanding
- Learn more about [crafting effective agents](/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/guides/crews/first-crew)
- Dive into [mastering flow state management](/guides/flows/mastering-flow-state)
- Check out the [core concepts](/concepts/agents) for deeper understanding

View File

@@ -32,9 +32,9 @@ Let's get started building your first crew!
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/en/installation)
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/en/concepts/llms#setting-up-your-llm)
guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python
## Step 1: Create a New CrewAI Project
@@ -54,7 +54,7 @@ This will generate a project with the basic structure needed for your crew. The
- A main script to run the crew
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
<img src="../../images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
@@ -287,7 +287,7 @@ SERPER_API_KEY=your_serper_api_key
# Add your provider's API key here too.
```
See the [LLM Setup guide](/en/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
## Step 8: Install Dependencies
@@ -388,7 +388,7 @@ Now that you've built your first crew, you can:
2. Try more complex task structures and workflows
3. Implement custom tools to give your agents new capabilities
4. Apply your crew to different topics or problem domains
5. Explore [CrewAI Flows](/en/guides/flows/first-flow) for more advanced workflows with procedural programming
5. Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced workflows with procedural programming
<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.

View File

@@ -42,9 +42,9 @@ Let's dive in and build your first flow!
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/en/installation)
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/en/concepts/llms#setting-up-your-llm)
guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python
## Step 1: Create a New CrewAI Flow Project
@@ -59,7 +59,7 @@ cd guide_creator_flow
This will generate a project with the basic structure needed for your flow.
<Frame caption="CrewAI Framework Overview">
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
<img src="../../images/flows.png" alt="CrewAI Framework Overview" />
</Frame>
## Step 2: Understanding the Project Structure
@@ -443,7 +443,7 @@ This is the power of flows - combining different types of processing (user inter
## Step 6: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys. See the [LLM setup
guide](/en/concepts/llms#setting-up-your-llm) for details on configuring a provider.
guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.
```sh .env
OPENAI_API_KEY=your_openai_api_key

View File

@@ -767,5 +767,5 @@ You've now mastered the concepts and practices of state management in CrewAI Flo
- Experiment with both structured and unstructured state in your flows
- Try implementing state persistence for long-running workflows
- Explore [building your first crew](/en/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/en/concepts/flows) for more advanced features
- Explore [building your first crew](/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/concepts/flows) for more advanced features

View File

@@ -0,0 +1,343 @@
---
title: "Customize Agent Prompts"
description: "Learn how to customize system and user prompts in CrewAI agents for precise control over agent behavior and output formatting."
categoryId: "how-to-guides"
priority: 1
---
# Customize Agent Prompts
CrewAI provides fine-grained control over how agents generate and format their responses through a sophisticated prompt generation system. This guide explains how system and user prompts are constructed and how you can customize them for your specific use cases.
## Understanding Prompt Generation
CrewAI uses a template-based system to generate prompts, combining different components based on agent configuration:
### Core Prompt Components
All prompt templates are stored in the internationalization system and include:
- **Role Playing**: `"You are {role}. {backstory}\nYour personal goal is: {goal}"`
- **Tools**: Instructions for agents with access to tools
- **No Tools**: Instructions for agents without tools
- **Task**: The specific task execution prompt
- **Format Instructions**: Output formatting requirements
### Prompt Assembly Process
CrewAI assembles prompts differently based on agent type:
1. **Regular Agents**: Use the `Prompts` class to combine template slices
2. **LiteAgents**: Use dedicated system prompt methods with specific templates
3. **System/User Split**: When `use_system_prompt=True`, prompts are split into system and user components
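To see exactly what these slices contain, you can print the default templates directly. The sketch below is illustrative: it assumes the `I18N` utility (backed by `translations/en.json`) is importable from `crewai.utilities` and exposes a `slice()` accessor, and that the slice keys match the component names above; verify both against your installed CrewAI version.
```python showLineNumbers
# Minimal sketch: inspect the default prompt slices CrewAI assembles into prompts.
# Assumes `I18N` is importable from crewai.utilities and that these slice keys exist.
from crewai.utilities import I18N

i18n = I18N()

for slice_name in ["role_playing", "tools", "no_tools", "task"]:
    print(f"--- {slice_name} ---")
    print(i18n.slice(slice_name))
```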
## Basic Prompt Customization
### Custom System and Prompt Templates
You can override the default prompt structure using custom templates:
```python showLineNumbers
from crewai import Agent, Task, Crew

# Define custom templates
system_template = """{{ .System }}
Additional context: You are working in a production environment.
Always prioritize accuracy and provide detailed explanations."""

prompt_template = """{{ .Prompt }}
Remember to validate your approach before proceeding."""

response_template = """Please format your response as follows:
{{ .Response }}
End of response."""

# Create agent with custom templates
agent = Agent(
    role="Data Analyst",
    goal="Analyze data with precision and accuracy",
    backstory="You are an experienced data analyst with expertise in statistical analysis.",
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    use_system_prompt=True
)
```
### Template Placeholders
Custom templates support these placeholders:
- `{{ .System }}`: Replaced with the assembled system prompt components
- `{{ .Prompt }}`: Replaced with the task-specific prompt
- `{{ .Response }}`: Placeholder for the agent's response (used in response_template)
## System/User Prompt Split
Enable system/user prompt separation for better LLM compatibility:
```python showLineNumbers
agent = Agent(
    role="Research Assistant",
    goal="Conduct thorough research on given topics",
    backstory="You are a meticulous researcher with access to various information sources.",
    use_system_prompt=True  # Enables system/user split
)
```
When `use_system_prompt=True`:
- **System Prompt**: Contains role, backstory, goal, and tool instructions
- **User Prompt**: Contains the specific task and expected output format
## Output Format Customization
### Structured Output with Pydantic Models
Control output formatting using Pydantic models:
```python showLineNumbers
from pydantic import BaseModel
from typing import List

class ResearchOutput(BaseModel):
    summary: str
    key_findings: List[str]
    confidence_score: float

task = Task(
    description="Research the latest trends in AI development",
    expected_output="A structured research report",
    output_pydantic=ResearchOutput,
    agent=agent
)
```
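After the crew runs, the validated model can typically be read back from the task output. The following sketch assumes the `agent` and `task` defined above, and that the task output exposes a `pydantic` attribute holding the `ResearchOutput` instance; adjust the attribute access if your CrewAI version differs.
```python showLineNumbers
# Sketch: run the crew and read the structured result back (uses `agent` and `task` from above).
from crewai import Crew

crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()

report = task.output.pydantic  # assumed to be a ResearchOutput instance
print(report.summary)
print(report.key_findings)
print(report.confidence_score)
```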
### Custom Format Instructions
Add specific formatting requirements using Pydantic models:
```python showLineNumbers
from pydantic import BaseModel
from typing import List

class SalesAnalysisOutput(BaseModel):
    total_sales: float
    growth_rate: str
    top_products: List[str]
    recommendations: str

task = Task(
    description="Analyze the quarterly sales data",
    expected_output="Analysis in JSON format with specific fields",
    output_json=SalesAnalysisOutput
)
```
## Stop Words Configuration
### Default Stop Words
CrewAI automatically configures stop words based on agent setup:
```python showLineNumbers
# Default stop word is "\nObservation:" for tool-enabled agents
agent = Agent(
    role="Analyst",
    goal="Perform analysis tasks",
    backstory="You are a skilled analyst.",
    tools=[some_tool]  # Stop words include "\nObservation:"
)
```
> **Note:** If `system_template`, `prompt_template`, or `response_template` are not provided, the default templates from `translations/en.json` are used. The default system template includes role-playing instructions, tool descriptions (if applicable), and task formatting guidelines.
### Custom Stop Words via Response Template
Modify stop words by customizing the response template:
```python showLineNumbers
response_template = """Provide your analysis:
{{ .Response }}
---END---"""

agent = Agent(
    role="Analyst",
    goal="Perform detailed analysis",
    backstory="You are an expert analyst.",
    response_template=response_template  # Stop words will include "---END---"
)
```
> ⚠️ **Warning:** If your stop sequence (e.g., `---END---`) can appear naturally within the model's response, this may cause premature output truncation. Always select distinctive, unlikely-to-occur sequences for stopping generation.
## LiteAgent Prompt Customization
LiteAgents use a simplified prompt system with direct customization:
```python showLineNumbers
from typing import List

from crewai import LiteAgent
from pydantic import BaseModel

# Example Pydantic model for structured output (fields are illustrative)
class CodeReviewOutput(BaseModel):
    issues: List[str]
    overall_assessment: str

# LiteAgent with tools (assumes `code_analysis_tool` is a tool you have defined)
lite_agent = LiteAgent(
    role="Code Reviewer",
    goal="Review code for quality and security",
    backstory="You are an experienced software engineer specializing in code review.",
    tools=[code_analysis_tool],
    response_format=CodeReviewOutput  # Pydantic model for structured output
)

# The system prompt will automatically include tool instructions and format requirements
```
## Advanced Customization Examples
### Example 1: Multi-Language Support
```python showLineNumbers
# Custom templates for different languages
spanish_system_template = """{{ .System }}
Instrucciones adicionales: Responde siempre en español y proporciona explicaciones detalladas."""

agent = Agent(
    role="Asistente de Investigación",
    goal="Realizar investigación exhaustiva en español",
    backstory="Eres un investigador experimentado que trabaja en español.",
    system_template=spanish_system_template,
    use_system_prompt=True
)
```
### Example 2: Domain-Specific Formatting
```python showLineNumbers
# Medical report formatting
medical_response_template = """MEDICAL ANALYSIS REPORT
{{ .Response }}
DISCLAIMER: This analysis is for informational purposes only."""

medical_agent = Agent(
    role="Medical Data Analyst",
    goal="Analyze medical data with clinical precision",
    backstory="You are a certified medical data analyst with 10 years of experience.",
    response_template=medical_response_template,
    use_system_prompt=True
)
```
### Example 3: Complex Workflow Integration
```python showLineNumbers
from crewai import Agent, Flow, Task
from crewai.flow.flow import listen, start  # decorator import path may vary by CrewAI version

class CustomPromptFlow(Flow):
    @start()
    def research_phase(self):
        # Agent with research-specific prompts
        researcher = Agent(
            role="Senior Researcher",
            goal="Gather comprehensive information",
            backstory="You are a senior researcher with expertise in data collection.",
            system_template="""{{ .System }}
Research Guidelines:
- Verify all sources
- Provide confidence ratings
- Include methodology notes""",
            use_system_prompt=True
        )
        task = Task(
            description="Research the given topic thoroughly",
            expected_output="Detailed research report with sources",
            agent=researcher
        )
        return task.execute()

    @listen(research_phase)
    def analysis_phase(self, research_result):
        # Agent with analysis-specific prompts
        analyst = Agent(
            role="Data Analyst",
            goal="Provide actionable insights",
            backstory="You are an expert data analyst specializing in trend analysis.",
            response_template="""ANALYSIS RESULTS:
{{ .Response }}
CONFIDENCE LEVEL: [Specify confidence level]
NEXT STEPS: [Recommend next actions]""",
            use_system_prompt=True
        )
        return f"Analysis based on: {research_result}"
```
## Best Practices
#### Precision and Accuracy
- Use specific role definitions and detailed backstories for consistent behavior
- Include validation requirements in custom templates
- Test prompt variations to ensure predictable outputs
#### Security Considerations
- Validate all user inputs before including them in prompts (see the sketch after the warning below)
- Use structured output formats to prevent prompt injection
- Implement guardrails for sensitive operations
> 🛡️ **Security Warning:** Never inject raw or untrusted user inputs directly into prompt templates without proper validation and sanitization. This can lead to prompt injection attacks where malicious users manipulate agent behavior. Always validate inputs, use parameterized templates, and consider implementing input filtering for production systems. Additionally, be cautious with custom stop sequences - if they can appear naturally in model responses, they may cause premature truncation of legitimate outputs.
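As a concrete illustration of the input-validation point above, here is one way to screen untrusted input before interpolating it into a task description. The `sanitize_topic` helper and its filtering rules are hypothetical examples, not part of the CrewAI API.
```python showLineNumbers
import re

# Hypothetical helper: reject or clean untrusted input before it reaches a prompt.
def sanitize_topic(raw: str, max_length: int = 200) -> str:
    # Strip characters commonly used to break out of templates or inject markup
    cleaned = re.sub(r"[{}<>`]", "", raw)
    # Collapse whitespace and enforce a length budget
    cleaned = " ".join(cleaned.split())[:max_length]
    if not cleaned:
        raise ValueError("Input is empty after sanitization")
    return cleaned

# `user_supplied_topic` comes from your application (untrusted)
topic = sanitize_topic(user_supplied_topic)
task = Task(
    description=f"Research the following topic: {topic}",
    expected_output="A concise research summary",
    agent=agent,  # an Agent defined elsewhere
)
```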
#### Performance Optimization
- Keep system prompts concise while maintaining necessary context
- Use appropriate stop words to prevent unnecessary token generation
- Test prompt efficiency with your target LLM models
#### Complexity Handling
- Break complex requirements into multiple template components
- Use conditional prompt assembly for different scenarios
- Implement fallback templates for error handling
## Troubleshooting
### Common Issues
**Prompt Not Applied**: Ensure you're using the correct template parameter names and that `use_system_prompt` is set appropriately. See the [Basic Prompt Customization](#basic-prompt-customization) section for examples.
**Format Not Working**: Verify that your `output_json` or `output_pydantic` model matches the expected structure. Refer to [Output Format Customization](#output-format-customization) for details.
**Stop Words Not Effective**: Check that your `response_template` includes the desired stop sequence after the `{{ .Response }}` placeholder. See [Stop Words Configuration](#stop-words-configuration) for guidance.
**Template Injection Concerns**: Review the [Security Considerations](#security-considerations) section for guidance on preventing prompt injection attacks.
For complete API details, see the [Agent API Reference](../reference/agent) and [Task API Reference](../reference/task) documentation.
### Debugging Prompts
Enable verbose mode to see the actual prompts being sent to the LLM:
```python showLineNumbers
agent = Agent(
    role="Debug Agent",
    goal="Help debug prompt issues",
    backstory="You are a debugging specialist.",
    verbose=True  # Shows detailed prompt information
)
```
### Additional Troubleshooting Steps
- **Verify prompt payloads**: Use verbose mode to inspect the actual prompts sent to the LLM
- **Test stop word effects**: Carefully verify that stop sequences don't cause premature truncation
- **Check template syntax**: Ensure placeholders like `{{ .System }}` are correctly formatted
- **Validate security**: Review custom templates for potential injection vulnerabilities as described in [Security Considerations](#security-considerations)
- **Revert to defaults**: If custom templates aren't working, temporarily remove them to isolate the issue
- **Test incrementally**: Add one custom template at a time to identify which component is causing problems
- **Validate template parameters**: Ensure all required parameters (role, goal, backstory) are provided when using custom templates
For more troubleshooting guidance, see the sections above on [Best Practices](#best-practices) and [Security Considerations](#security-considerations).
This comprehensive prompt customization system gives you precise control over agent behavior while maintaining the reliability and consistency that CrewAI is known for in production environments.

Binary files not shown: 7 images removed (previous sizes: 10 MiB, 1.3 MiB, 1.1 MiB, 617 KiB, 1.2 MiB, 845 KiB, 1.3 MiB).
View File

@@ -186,7 +186,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
<Card
title="Build Your First Agent"
icon="code"
href="/en/quickstart"
href="/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>

View File

@@ -10,8 +10,8 @@ icon: handshake
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario:
- **[CrewAI Crews](/en/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
- **[CrewAI Flows](/en/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
- **[CrewAI Crews](/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
- **[CrewAI Flows](/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
@@ -23,7 +23,7 @@ With over 100,000 developers certified through our community courses, CrewAI is
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
<img src="images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
@@ -64,7 +64,7 @@ With over 100,000 developers certified through our community courses, CrewAI is
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
<img src="images/flows.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
@@ -94,21 +94,21 @@ With over 100,000 developers certified through our community courses, CrewAI is
## When to Use Crews vs. Flows
<Note>
Understanding when to use [Crews](/en/guides/crews/first-crew) versus [Flows](/en/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
Understanding when to use [Crews](/guides/crews/first-crew) versus [Flows](/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
</Note>
| Use Case | Recommended Approach | Why? |
|:---------|:---------------------|:-----|
| **Open-ended research** | [Crews](/en/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/en/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
| **Decision workflows** | [Flows](/en/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
| **API orchestration** | [Flows](/en/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
| **Hybrid applications** | Combined approach | Use [Flows](/en/guides/flows/first-flow) to orchestrate overall process with [Crews](/en/guides/crews/first-crew) handling complex subtasks |
| **Open-ended research** | [Crews](/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
| **Decision workflows** | [Flows](/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
| **API orchestration** | [Flows](/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
| **Hybrid applications** | Combined approach | Use [Flows](/guides/flows/first-flow) to orchestrate overall process with [Crews](/guides/crews/first-crew) handling complex subtasks |
### Decision Framework
- **Choose [Crews](/en/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
- **Choose [Flows](/en/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Choose [Crews](/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
- **Choose [Flows](/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Combine both when:** Your application needs both structured processes and pockets of autonomous intelligence
## Why Choose CrewAI?
@@ -126,14 +126,14 @@ With over 100,000 developers certified through our community courses, CrewAI is
<Card
title="Build Your First Crew"
icon="users-gear"
href="/en/guides/crews/first-crew"
href="/guides/crews/first-crew"
>
Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
</Card>
<Card
title="Build Your First Flow"
icon="diagram-project"
href="/en/guides/flows/first-flow"
href="/guides/flows/first-flow"
>
Learn how to create structured, event-driven workflows with precise control over execution.
</Card>
@@ -143,14 +143,14 @@ With over 100,000 developers certified through our community courses, CrewAI is
<Card
title="Install CrewAI"
icon="wrench"
href="/en/installation"
href="/installation"
>
Get started with CrewAI in your development environment.
</Card>
<Card
title="Quick Start"
icon="bolt"
href="en/quickstart"
href="/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
@@ -161,4 +161,4 @@ With over 100,000 developers certified through our community courses, CrewAI is
>
Connect with other developers, get help, and share your CrewAI experiences.
</Card>
</CardGroup>
</CardGroup>

View File

@@ -12,38 +12,38 @@ This section provides comprehensive guides and tutorials to help you master Crew
### Core Concepts
<CardGroup cols={2}>
<Card title="Sequential Process" icon="list-ol" href="/en/learn/sequential-process">
<Card title="Sequential Process" icon="list-ol" href="/learn/sequential-process">
Learn how to execute tasks in a sequential order for structured workflows.
</Card>
<Card title="Hierarchical Process" icon="sitemap" href="/en/learn/hierarchical-process">
<Card title="Hierarchical Process" icon="sitemap" href="/learn/hierarchical-process">
Implement hierarchical task execution with manager agents overseeing workflows.
</Card>
<Card title="Conditional Tasks" icon="code-branch" href="/en/learn/conditional-tasks">
<Card title="Conditional Tasks" icon="code-branch" href="/learn/conditional-tasks">
Create dynamic workflows with conditional task execution based on outcomes.
</Card>
<Card title="Async Kickoff" icon="bolt" href="/en/learn/kickoff-async">
<Card title="Async Kickoff" icon="bolt" href="/learn/kickoff-async">
Execute crews asynchronously for improved performance and concurrency.
</Card>
</CardGroup>
### Agent Development
<CardGroup cols={2}>
<Card title="Customizing Agents" icon="user-gear" href="/en/learn/customizing-agents">
<Card title="Customizing Agents" icon="user-gear" href="/learn/customizing-agents">
Learn how to customize agent behavior, roles, and capabilities.
</Card>
<Card title="Coding Agents" icon="code" href="/en/learn/coding-agents">
<Card title="Coding Agents" icon="code" href="/learn/coding-agents">
Build agents that can write, execute, and debug code automatically.
</Card>
<Card title="Multimodal Agents" icon="images" href="/en/learn/multimodal-agents">
<Card title="Multimodal Agents" icon="images" href="/learn/multimodal-agents">
Create agents that can process text, images, and other media types.
</Card>
<Card title="Custom Manager Agent" icon="user-tie" href="/en/learn/custom-manager-agent">
<Card title="Custom Manager Agent" icon="user-tie" href="/learn/custom-manager-agent">
Implement custom manager agents for complex hierarchical workflows.
</Card>
</CardGroup>
@@ -52,38 +52,38 @@ This section provides comprehensive guides and tutorials to help you master Crew
### Workflow Control
<CardGroup cols={2}>
<Card title="Human in the Loop" icon="user-check" href="/en/learn/human-in-the-loop">
<Card title="Human in the Loop" icon="user-check" href="/learn/human-in-the-loop">
Integrate human oversight and intervention into agent workflows.
</Card>
<Card title="Human Input on Execution" icon="hand-paper" href="/en/learn/human-input-on-execution">
<Card title="Human Input on Execution" icon="hand-paper" href="/learn/human-input-on-execution">
Allow human input during task execution for dynamic decision making.
</Card>
<Card title="Replay Tasks" icon="rotate-left" href="/en/learn/replay-tasks-from-latest-crew-kickoff">
<Card title="Replay Tasks" icon="rotate-left" href="/learn/replay-tasks-from-latest-crew-kickoff">
Replay and resume tasks from previous crew executions.
</Card>
<Card title="Kickoff for Each" icon="repeat" href="/en/learn/kickoff-for-each">
<Card title="Kickoff for Each" icon="repeat" href="/learn/kickoff-for-each">
Execute crews multiple times with different inputs efficiently.
</Card>
</CardGroup>
### Customization & Integration
<CardGroup cols={2}>
<Card title="Custom LLM" icon="brain" href="/en/learn/custom-llm">
<Card title="Custom LLM" icon="brain" href="/learn/custom-llm">
Integrate custom language models and providers with CrewAI.
</Card>
<Card title="LLM Connections" icon="link" href="/en/learn/llm-connections">
<Card title="LLM Connections" icon="link" href="/learn/llm-connections">
Configure and manage connections to various LLM providers.
</Card>
<Card title="Create Custom Tools" icon="wrench" href="/en/learn/create-custom-tools">
<Card title="Create Custom Tools" icon="wrench" href="/learn/create-custom-tools">
Build custom tools to extend agent capabilities.
</Card>
<Card title="Using Annotations" icon="at" href="/en/learn/using-annotations">
<Card title="Using Annotations" icon="at" href="/learn/using-annotations">
Use Python annotations for cleaner, more maintainable code.
</Card>
</CardGroup>
@@ -92,18 +92,18 @@ This section provides comprehensive guides and tutorials to help you master Crew
### Content & Media
<CardGroup cols={2}>
<Card title="DALL-E Image Generation" icon="image" href="/en/learn/dalle-image-generation">
<Card title="DALL-E Image Generation" icon="image" href="/learn/dalle-image-generation">
Generate images using DALL-E integration with your agents.
</Card>
<Card title="Bring Your Own Agent" icon="user-plus" href="/en/learn/bring-your-own-agent">
<Card title="Bring Your Own Agent" icon="user-plus" href="/learn/bring-your-own-agent">
Integrate existing agents and models into CrewAI workflows.
</Card>
</CardGroup>
### Tool Management
<CardGroup cols={2}>
<Card title="Force Tool Output as Result" icon="hammer" href="/en/learn/force-tool-output-as-result">
<Card title="Force Tool Output as Result" icon="hammer" href="/learn/force-tool-output-as-result">
Configure tools to return their output directly as task results.
</Card>
</CardGroup>
@@ -155,4 +155,4 @@ This section provides comprehensive guides and tutorials to help you master Crew
- **Examples**: Check the Examples section for complete working implementations
- **Support**: Contact [support@crewai.com](mailto:support@crewai.com) for technical assistance
Start with the guides that match your current needs and gradually explore more advanced topics as you become comfortable with the fundamentals.

View File

@@ -6,11 +6,11 @@ icon: plug
## Overview
The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) offers a standardized way for AI agents to provide context to LLMs by communicating with external services, known as MCP Servers.
The `crewai-tools` library extends CrewAI's capabilities by allowing you to seamlessly integrate tools from these MCP servers into your agents.
This gives your crews access to a vast ecosystem of functionalities.
We currently support the following transport mechanisms:
- **Stdio**: for local servers (communication via standard input/output between processes on the same machine)
- **Server-Sent Events (SSE)**: for remote servers (unidirectional, real-time data streaming from server to client over HTTP)
@@ -52,27 +52,27 @@ from mcp import StdioServerParameters # For Stdio Server
# Example server_params (choose one based on your server type):

# 1. Stdio Server:
server_params = StdioServerParameters(
    command="python3",
    args=["servers/your_server.py"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

# 2. SSE Server:
server_params = {
    "url": "http://localhost:8000/sse",
    "transport": "sse"
}

# 3. Streamable HTTP Server:
server_params = {
    "url": "http://localhost:8001/mcp",
    "transport": "streamable-http"
}

# Example usage (uncomment and adapt once server_params is set):
with MCPServerAdapter(server_params) as mcp_tools:
    print(f"Available tools: {[tool.name for tool in mcp_tools]}")

    my_agent = Agent(
        role="MCP Tool User",
        goal="Utilize tools from an MCP server.",
@@ -87,13 +87,6 @@ This general pattern shows how to integrate tools. For specific examples tailore
## Filtering Tools
There are two ways to filter tools:
1. Accessing a specific tool using dictionary-style indexing.
2. Passing a list of tool names to the `MCPServerAdapter` constructor.
### Accessing a specific tool using dictionary-style indexing
```python
with MCPServerAdapter(server_params) as mcp_tools:
    print(f"Available tools: {[tool.name for tool in mcp_tools]}")
@@ -102,115 +95,51 @@ with MCPServerAdapter(server_params) as mcp_tools:
role="MCP Tool User",
goal="Utilize tools from an MCP server.",
backstory="I can connect to MCP servers and use their tools.",
tools=[mcp_tools["tool_name"]], # Pass the loaded tools to your agent
tools=mcp_tools["tool_name"], # Pass the loaded tools to your agent
reasoning=True,
verbose=True
)
# ... rest of your crew setup ...
```
### Passing a list of tool names to the `MCPServerAdapter` constructor
```python
with MCPServerAdapter(server_params, "tool_name") as mcp_tools:
    print(f"Available tools: {[tool.name for tool in mcp_tools]}")

    my_agent = Agent(
        role="MCP Tool User",
        goal="Utilize tools from an MCP server.",
        backstory="I can connect to MCP servers and use their tools.",
        tools=mcp_tools, # Pass the loaded tools to your agent
        reasoning=True,
        verbose=True
    )
    # ... rest of your crew setup ...
```
## Using with CrewBase
To use MCP server tools within a `CrewBase` class, use the `get_mcp_tools` method. Server configurations should be provided via the `mcp_server_params` attribute; you can pass either a single configuration or a list of multiple server configurations.
```python
@CrewBase
class CrewWithMCP:
    # ... define your agents and tasks config file ...

    mcp_server_params = [
        # Streamable HTTP Server
        {
            "url": "http://localhost:8001/mcp",
            "transport": "streamable-http"
        },
        # SSE Server
        {
            "url": "http://localhost:8000/sse",
            "transport": "sse"
        },
        # StdIO Server
        StdioServerParameters(
            command="python3",
            args=["servers/your_stdio_server.py"],
            env={"UV_PYTHON": "3.12", **os.environ},
        )
    ]

    @agent
    def your_agent(self):
        return Agent(config=self.agents_config["your_agent"], tools=self.get_mcp_tools()) # get all available tools

    # ... rest of your crew setup ...
```
You can filter which tools are available to your agent by passing a list of tool names to the `get_mcp_tools` method.
```python
@agent
def another_agent(self):
    return Agent(
        config=self.agents_config["your_agent"],
        tools=self.get_mcp_tools("tool_1", "tool_2") # get specific tools
    )
```
## Explore MCP Integrations
<CardGroup cols={2}>
<Card
title="Stdio Transport"
icon="server"
href="/en/mcp/stdio"
<Card
title="Stdio Transport"
icon="server"
href="/mcp/stdio"
color="#3B82F6"
>
Connect to local MCP servers via standard input/output. Ideal for scripts and local executables.
</Card>
<Card
title="SSE Transport"
icon="wifi"
href="/en/mcp/sse"
<Card
title="SSE Transport"
icon="wifi"
href="/mcp/sse"
color="#10B981"
>
Integrate with remote MCP servers using Server-Sent Events for real-time data streaming.
</Card>
<Card
title="Streamable HTTP Transport"
icon="globe"
href="/en/mcp/streamable-http"
<Card
title="Streamable HTTP Transport"
icon="globe"
href="/mcp/streamable-http"
color="#F59E0B"
>
Utilize flexible Streamable HTTP for robust communication with remote MCP servers.
</Card>
<Card
title="Connecting to Multiple Servers"
icon="layer-group"
href="/en/mcp/multiple-servers"
<Card
title="Connecting to Multiple Servers"
icon="layer-group"
href="/mcp/multiple-servers"
color="#8B5CF6"
>
Aggregate tools from several MCP servers simultaneously using a single adapter.
</Card>
<Card
title="Security Considerations"
icon="lock"
href="/en/mcp/security"
<Card
title="Security Considerations"
icon="lock"
href="/mcp/security"
color="#EF4444"
>
Review important security best practices for MCP integration to keep your agents safe.
@@ -219,7 +148,7 @@ def another_agent(self):
Check out this repository for full demos and examples of MCP integration with CrewAI! 👇
<Card
title="GitHub Repository"
icon="github"
href="https://github.com/tonykipkemboi/crewai-mcp-demo"
@@ -234,7 +163,7 @@ Always ensure that you trust an MCP Server before using it.
</Warning>
#### Security Warning: DNS Rebinding Attacks
SSE transports can be vulnerable to DNS rebinding attacks if not properly secured.
To prevent this:
1. **Always validate Origin headers** on incoming SSE connections to ensure they come from expected sources (a sketch of such a check follows below)
@@ -246,6 +175,6 @@ Without these protections, attackers could use DNS rebinding to interact with lo
For more details, see [Anthropic's MCP Transport Security docs](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).
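As a rough sketch of the first point, assuming the SSE endpoint is hosted behind a FastAPI/Starlette application (the framework choice and the `ALLOWED_ORIGINS` allow-list are illustrative, not part of `crewai-tools`), an Origin check might look like:

```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

ALLOWED_ORIGINS = {"http://localhost:8000"}  # hypothetical allow-list of trusted origins

app = FastAPI()

@app.middleware("http")
async def reject_untrusted_origins(request: Request, call_next):
    origin = request.headers.get("origin")
    if origin and origin not in ALLOWED_ORIGINS:
        # Browsers always send the page's real Origin, so rejecting unknown
        # values blocks DNS-rebinding requests issued from malicious pages.
        return JSONResponse({"detail": "Origin not allowed"}, status_code=403)
    return await call_next(request)
```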
### Limitations
* **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`.
Other MCP primitives like `prompts` or `resources` are not directly integrated as CrewAI components through this adapter at this time.
* **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs may require custom handling if they do not fit this pattern (see the sketch below).
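As a rough sketch of such custom handling (the `flatten_mcp_result` helper is hypothetical and assumes a `CallToolResult`-like object from the `mcp` client library, not an `MCPServerAdapter` API), you could collapse all text parts of a result into a single string before passing it onward:

```python
def flatten_mcp_result(result) -> str:
    """Join every text part of an MCP tool result into a single string."""
    parts = []
    for item in getattr(result, "content", None) or []:
        text = getattr(item, "text", None)
        if text:  # keep text blocks; skip images and embedded resources
            parts.append(text)
    return "\n".join(parts)
```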

Some files were not shown because too many files have changed in this diff.