Mirror of https://github.com/crewAIInc/crewAI.git (synced 2025-12-31 19:58:30 +00:00)

Compare commits: 9 commits, lg-agent-i...0.134.0
| SHA1 |
|---|
| 0f861338ef |
| 4d1aabf620 |
| a50fae3a4b |
| f6dfec61d6 |
| 060c486948 |
| 8b176d0598 |
| c96d4a6823 |
| 59032817c7 |
| e9d8a853ea |
docs/docs.json: 942 changes (file diff suppressed because it is too large)
@@ -71,7 +71,7 @@ There are two ways to create agents in CrewAI: using **YAML configuration (recom
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
-After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
+After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
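A minimal sketch of how a `{topic}` value is supplied at kickoff; the module path and crew class name are assumptions based on the scaffolded project name mentioned above:

```python
from latest_ai_development.crew import LatestAiDevelopment  # assumed scaffolded module and class names

# Each key in `inputs` replaces the matching {placeholder} in agents.yaml and tasks.yaml.
result = LatestAiDevelopment().crew().kickoff(inputs={"topic": "AI Agents"})
print(result.raw)
```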
@@ -312,7 +312,7 @@ multimodal_agent = Agent(
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
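A minimal sketch of an agent wired with all three templates; the template strings are illustrative, and `{role}`, `{goal}`, `{backstory}`, and `{input}` are filled in automatically at execution time:

```python
from crewai import Agent

custom_system_template = """You are {role}. {backstory}
Your personal goal is: {goal}"""

custom_prompt_template = """Current task: {input}

Respond with your best complete answer."""

custom_response_template = """Format your reply as:
Thought: <your reasoning>
Final Answer: <your answer>"""

agent = Agent(
    role="Research Analyst",
    goal="Summarize new findings clearly",
    backstory="You are a careful, detail-oriented analyst.",
    system_template=custom_system_template,
    prompt_template=custom_prompt_template,
    response_template=custom_response_template,
)
```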
@@ -425,7 +425,7 @@ strict_agent = Agent(
```python Code
# Perfect for document processing
document_processor = Agent(
    role="Document Analyst",
    goal="Extract insights from large research papers",
    backstory="Expert at analyzing extensive documentation",
    respect_context_window=True,  # Handle large documents gracefully
@@ -45,7 +45,7 @@ There are two ways to create crews in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
-After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
+After creating your CrewAI project as outlined in the [Installation](/en/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.

#### Example Crew Class with Decorators
@@ -66,8 +66,8 @@ class YourCrewName:
    # To see an example agent and task defined in YAML, checkout the following:
    # - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
    # - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @before_kickoff
    def prepare_inputs(self, inputs):
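A minimal sketch of how these pieces fit together, including a complete `prepare_inputs` hook; the agent and task names are illustrative and assume matching entries exist in the YAML config files:

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, before_kickoff, crew, task

@CrewBase
class YourCrewName:
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @before_kickoff
    def prepare_inputs(self, inputs):
        # Runs once before kickoff; the returned dict is what gets interpolated
        # into the YAML-defined agents and tasks.
        inputs["extra_context"] = "fetched from an external source"  # illustrative
        return inputs

    @agent
    def researcher(self) -> Agent:
        # Assumes agents.yaml defines a `researcher` entry.
        return Agent(config=self.agents_config["researcher"])

    @task
    def research_task(self) -> Task:
        # Assumes tasks.yaml defines a `research_task` entry.
        return Task(config=self.tasks_config["research_task"])

    @crew
    def crew(self) -> Crew:
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)
```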
@@ -111,7 +111,7 @@ class YourCrewName:
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # Automatically collected by the @agent decorator
            tasks=self.tasks,  # Automatically collected by the @task decorator.
            process=Process.sequential,
            verbose=True,
        )
@@ -66,7 +66,7 @@ There are two ways to create tasks in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
-After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
+After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
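For comparison, a minimal sketch of an equivalent task defined directly in code; the role, description, and expected output are illustrative, and `{topic}` is interpolated from the kickoff inputs exactly as in the YAML version:

```python
from crewai import Agent, Task

researcher = Agent(
    role="Researcher",
    goal="Find reliable information about {topic}",
    backstory="A careful analyst.",  # illustrative
)

research_task = Task(
    description="Research the latest developments in {topic}.",
    expected_output="A bullet-point summary of the most relevant findings.",
    agent=researcher,
)
```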
@@ -277,7 +277,7 @@ formatted_task = Task(
When `markdown=True`, the agent will receive additional instructions to format the output using:
- `#` for headers
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code
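A minimal sketch of a task that opts into this behavior; the agent and task text are illustrative:

```python
from crewai import Agent, Task

writer = Agent(role="Writer", goal="Summarize findings", backstory="A concise technical writer.")

formatted_task = Task(
    description="Summarize the research findings on {topic}.",
    expected_output="A well-structured summary with headers and bullet points.",
    agent=writer,
    markdown=True,  # the agent is instructed to format its final answer as Markdown
)
```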
@@ -124,7 +124,7 @@ from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
    actions_list=["gmail_find_email"]  # only gmail_find_email tool will be available
)
-gmail_tool = enterprise_tools[0]
+gmail_tool = enterprise_tools["gmail_find_email"]

gmail_agent = Agent(
    role="Gmail Manager",
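A hedged sketch of how the excerpt above might be completed; the backstory, task, and crew wiring are illustrative, and running it requires valid Enterprise credentials:

```python
from crewai import Agent, Crew, Task
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    actions_list=["gmail_find_email"]  # only this action is exposed as a tool
)
gmail_tool = enterprise_tools["gmail_find_email"]

gmail_agent = Agent(
    role="Gmail Manager",
    goal="Find relevant emails quickly",
    backstory="An assistant that searches a Gmail inbox.",  # illustrative
    tools=[gmail_tool],
)

find_email_task = Task(
    description="Find the most recent email from {sender}.",
    expected_output="The subject and date of the matching email.",
    agent=gmail_agent,
)

crew = Crew(agents=[gmail_agent], tasks=[find_email_task])
# crew.kickoff(inputs={"sender": "alice@example.com"})  # needs valid Enterprise credentials
```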
@@ -10,11 +10,11 @@ icon: "people-arrows"
## Getting Started

<iframe
  width="100%"
  height="400"
  src="https://www.youtube.com/embed/-kSOTtYzgEw"
  title="Building Crews with CrewAI CLI"
  frameborder="0"
  style={{ borderRadius: '10px' }}
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
@@ -23,13 +23,13 @@ icon: "people-arrows"
### Installation and Setup

-<Card title="Follow Standard Installation" icon="wrench" href="/installation">
+<Card title="Follow Standard Installation" icon="wrench" href="/en/installation">
  Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>

### Building Your Crew

-<Card title="Quickstart Tutorial" icon="rocket" href="/quickstart">
+<Card title="Quickstart Tutorial" icon="rocket" href="/en/quickstart">
  Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
@@ -40,4 +40,4 @@ For Enterprise-specific support or questions, contact our dedicated support team

<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
  Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>
@@ -122,7 +122,7 @@ The CrewAI CLI offers several commands to manage your deployments:

# Remove a deployment
crewai deploy remove <deployment_id>
```

## Option 2: Deploy Directly via Web Interface
@@ -132,14 +132,14 @@ You can also deploy your crews directly through the CrewAI Enterprise web interf

<Step title="Pushing to GitHub">
-You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/quickstart).
+You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/en/quickstart).
</Step>

<Step title="Connecting GitHub to CrewAI Enterprise">
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the button "Connect GitHub"

<Frame>

@@ -201,20 +201,20 @@ For security reasons, the following environment variable naming patterns are **a
**Blocked Patterns:**
- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts

**Specific Blocked Variables:**
- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables

### Allowed Exceptions

Some variables are explicitly allowed despite matching blocked patterns:
- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
@@ -228,7 +228,7 @@ OPENAI_TOKEN=sk-...
DATABASE_PASSWORD=mypassword
API_SECRET=secret123

# ✅ Use these naming patterns instead
OPENAI_API_KEY=sk-...
DATABASE_CREDENTIALS=mypassword
API_CONFIG=secret123
@@ -56,8 +56,8 @@ CrewAI Enterprise extends the power of the open-source framework with features d
<Steps>
  <Step title="Sign up for an account">
    Create your account at [app.crewai.com](https://app.crewai.com)
    <Card
      title="Sign Up"
      icon="user"
      href="https://app.crewai.com/signup"
    >
@@ -66,34 +66,34 @@ CrewAI Enterprise extends the power of the open-source framework with features d
  </Step>
  <Step title="Build your first crew">
    Use code or Crew Studio to build your crew
    <Card
      title="Build Crew"
      icon="paintbrush"
-     href="/enterprise/guides/build-crew"
+     href="/en/enterprise/guides/build-crew"
    >
      Build Crew
    </Card>
  </Step>
  <Step title="Deploy your crew">
    Deploy your crew to the Enterprise platform
    <Card
      title="Deploy Crew"
      icon="rocket"
-     href="/enterprise/guides/deploy-crew"
+     href="/en/enterprise/guides/deploy-crew"
    >
      Deploy Crew
    </Card>
  </Step>
  <Step title="Access your crew">
    Integrate with your crew via the generated API endpoints
    <Card
      title="API Access"
      icon="code"
-     href="/enterprise/guides/use-crew-api"
+     href="/en/enterprise/guides/kickoff-crew"
    >
      Use the Crew API
    </Card>
  </Step>
</Steps>

-For detailed instructions, check out our [deployment guide](/enterprise/guides/deploy-crew) or click the button below to get started.
+For detailed instructions, check out our [deployment guide](/en/enterprise/guides/deploy-crew) or click the button below to get started.
@@ -48,12 +48,12 @@ icon: "circle-question"
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer. This input can provide extra context, clarify ambiguities, or validate the agent's output.

-For detailed implementation guidance, see our [Human-in-the-Loop guide](/how-to/human-in-the-loop).
+For detailed implementation guidance, see our [Human-in-the-Loop guide](/en/how-to/human-in-the-loop).
</Accordion>

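A minimal sketch of a task with the flag enabled; the agent and task text are illustrative:

```python
from crewai import Agent, Task

support_agent = Agent(
    role="Support Assistant",
    goal="Draft helpful replies",
    backstory="A friendly support assistant.",  # illustrative
)

review_task = Task(
    description="Draft a short reply to the customer's question about {topic}.",
    expected_output="A reply approved or corrected by the human reviewer.",
    agent=support_agent,
    human_input=True,  # pause before the final answer and ask the user for feedback
)
```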
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
CrewAI provides a range of advanced customization options:

- **Language Model Customization**: Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`)
- **Performance and Debugging Settings**: Adjust an agent's performance and monitor its operations
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization
@@ -129,12 +129,12 @@ icon: "circle-question"
Here's a tutorial on how to consistently get structured outputs from your agents:
<Frame>
  <iframe
    height="400"
    width="100%"
    src="https://www.youtube.com/embed/dNpKQk5uxHw"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
    allowfullscreen></iframe>
</Frame>
</Accordion>
@@ -148,4 +148,4 @@ icon: "circle-question"
<Accordion title="How can you control the maximum number of requests per minute that the entire crew can perform?">
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Accordion>
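A minimal sketch of the crew-level setting; the agent and task are illustrative:

```python
from crewai import Agent, Crew, Process, Task

analyst = Agent(role="Analyst", goal="Analyze data", backstory="Expert analyst.")
report = Task(description="Summarize the data.", expected_output="A short report.", agent=analyst)

crew = Crew(
    agents=[analyst],
    tasks=[report],
    process=Process.sequential,
    max_rpm=10,  # crew-wide cap on requests per minute; overrides per-agent max_rpm
)
```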
</AccordionGroup>
@@ -44,7 +44,7 @@ Based on your agent configuration, CrewAI adds different default instructions:
"I MUST use these formats, my job depends on it!"
```

#### For Agents With Tools
```text
"IMPORTANT: Use the following format in your response:
@@ -127,7 +127,7 @@ custom_prompt_template = """Task: {input}
Please complete this task thoughtfully."""

agent = Agent(
    role="Research Assistant",
    goal="Help users find accurate information",
    backstory="You are a helpful research assistant.",
    system_template=custom_system_template,
@@ -164,7 +164,7 @@ crew = Crew(
```python
agent = Agent(
    role="Analyst",
    goal="Analyze data",
    backstory="Expert analyst",
    use_system_prompt=False  # Disables system prompt separation
)
@@ -174,13 +174,13 @@ agent = Agent(
For production transparency, integrate with observability platforms to monitor all prompts and LLM interactions. This allows you to see exactly what prompts (including default instructions) are being sent to your LLMs.

-See our [Observability documentation](/how-to/observability) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
+See our [Observability documentation](/en/observability/overview) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.

### Best Practices for Production

1. **Always inspect generated prompts** before deploying to production
2. **Use custom templates** when you need full control over prompt content
-3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/how-to/observability))
+3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/en/observability/overview))
4. **Test with different LLMs** as default instructions may work differently across models
5. **Document your prompt customizations** for team transparency
@@ -313,4 +313,4 @@ Low-level prompt customization in CrewAI opens the door to super custom, complex

<Check>
You now have the foundation for advanced prompt customizations in CrewAI. Whether you're adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.
</Check>
@@ -448,5 +448,5 @@ Congratulations! You now understand the principles and practices of effective ag
## Next Steps

- Experiment with different agent configurations for your specific use case
-- Learn about [building your first crew](/guides/crews/first-crew) to see how agents work together
-- Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced orchestration
+- Learn about [building your first crew](/en/guides/crews/first-crew) to see how agents work together
+- Explore [CrewAI Flows](/en/guides/flows/first-flow) for more advanced orchestration
@@ -11,7 +11,7 @@ When building AI applications with CrewAI, one of the most important decisions y
At the heart of this decision is understanding the relationship between **complexity** and **precision** in your application:

<Frame caption="Complexity vs. Precision Matrix for CrewAI Applications">
-  <img src="../../images/complexity_precision.png" alt="Complexity vs. Precision Matrix" />
+  <img src="/images/complexity_precision.png" alt="Complexity vs. Precision Matrix" />
</Frame>

This matrix helps visualize how different approaches align with varying requirements for complexity and precision. Let's explore what each quadrant means and how it guides your architectural choices.
@@ -497,7 +497,7 @@ You now have a framework for evaluating CrewAI use cases and choosing the right
## Next Steps

-- Learn more about [crafting effective agents](/guides/agents/crafting-effective-agents)
-- Explore [building your first crew](/guides/crews/first-crew)
-- Dive into [mastering flow state management](/guides/flows/mastering-flow-state)
-- Check out the [core concepts](/concepts/agents) for deeper understanding
+- Learn more about [crafting effective agents](/en/guides/agents/crafting-effective-agents)
+- Explore [building your first crew](/en/guides/crews/first-crew)
+- Dive into [mastering flow state management](/en/guides/flows/mastering-flow-state)
+- Check out the [core concepts](/en/concepts/agents) for deeper understanding
@@ -32,9 +32,9 @@ Let's get started building your first crew!
Before starting, make sure you have:

-1. Installed CrewAI following the [installation guide](/installation)
+1. Installed CrewAI following the [installation guide](/en/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
-   guide](/concepts/llms#setting-up-your-llm)
+   guide](/en/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python

## Step 1: Create a New CrewAI Project
@@ -54,7 +54,7 @@ This will generate a project with the basic structure needed for your crew. The
- A main script to run the crew

<Frame caption="CrewAI Framework Overview">
-  <img src="../../images/crews.png" alt="CrewAI Framework Overview" />
+  <img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>

@@ -287,7 +287,7 @@ SERPER_API_KEY=your_serper_api_key
# Add your provider's API key here too.
```

-See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
+See the [LLM Setup guide](/en/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).

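A minimal sketch, assuming `python-dotenv` is installed and the keys above are present in `.env`, of loading those variables and picking a model in code; the model name is illustrative:

```python
import os
from dotenv import load_dotenv
from crewai import LLM

load_dotenv()  # reads OPENAI_API_KEY, SERPER_API_KEY, etc. from .env

llm = LLM(model="gpt-4o-mini")  # illustrative model name; use your configured provider
print("Serper key loaded:", bool(os.getenv("SERPER_API_KEY")))
```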
## Step 8: Install Dependencies
@@ -388,7 +388,7 @@ Now that you've built your first crew, you can:
2. Try more complex task structures and workflows
3. Implement custom tools to give your agents new capabilities
4. Apply your crew to different topics or problem domains
-5. Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced workflows with procedural programming
+5. Explore [CrewAI Flows](/en/guides/flows/first-flow) for more advanced workflows with procedural programming

<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.
@@ -42,9 +42,9 @@ Let's dive in and build your first flow!
Before starting, make sure you have:

-1. Installed CrewAI following the [installation guide](/installation)
+1. Installed CrewAI following the [installation guide](/en/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
-   guide](/concepts/llms#setting-up-your-llm)
+   guide](/en/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python

## Step 1: Create a New CrewAI Flow Project
@@ -59,7 +59,7 @@ cd guide_creator_flow
This will generate a project with the basic structure needed for your flow.

<Frame caption="CrewAI Framework Overview">
-  <img src="../../images/flows.png" alt="CrewAI Framework Overview" />
+  <img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame>

## Step 2: Understanding the Project Structure
@@ -443,7 +443,7 @@ This is the power of flows - combining different types of processing (user inter
## Step 6: Set Up Your Environment Variables

Create a `.env` file in your project root with your API keys. See the [LLM setup
-guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.
+guide](/en/concepts/llms#setting-up-your-llm) for details on configuring a provider.

```sh .env
OPENAI_API_KEY=your_openai_api_key
@@ -767,5 +767,5 @@ You've now mastered the concepts and practices of state management in CrewAI Flo
- Experiment with both structured and unstructured state in your flows
- Try implementing state persistence for long-running workflows
-- Explore [building your first crew](/guides/crews/first-crew) to see how crews and flows can work together
-- Check out the [Flow reference documentation](/concepts/flows) for more advanced features
+- Explore [building your first crew](/en/guides/crews/first-crew) to see how crews and flows can work together
+- Check out the [Flow reference documentation](/en/concepts/flows) for more advanced features
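A minimal sketch of the structured-state pattern those bullets refer to, assuming the current `crewai.flow.flow` API; the class and field names are illustrative:

```python
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start

class GuideState(BaseModel):
    topic: str = ""
    outline: str = ""

class GuideFlow(Flow[GuideState]):
    @start()
    def set_topic(self):
        # Structured state: fields are typed and validated by Pydantic.
        self.state.topic = "CrewAI Flows"

    @listen(set_topic)
    def draft_outline(self):
        self.state.outline = f"A short outline about {self.state.topic}"
        return self.state.outline

if __name__ == "__main__":
    print(GuideFlow().kickoff())
```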
@@ -186,7 +186,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
<Card
  title="Build Your First Agent"
  icon="code"
-  href="/quickstart"
+  href="/en/quickstart"
>
  Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
@@ -10,8 +10,8 @@ icon: handshake
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario:

-- **[CrewAI Crews](/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
-- **[CrewAI Flows](/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
+- **[CrewAI Crews](/en/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
+- **[CrewAI Flows](/en/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.

With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
@@ -23,7 +23,7 @@ With over 100,000 developers certified through our community courses, CrewAI is
</Note>

<Frame caption="CrewAI Framework Overview">
-  <img src="images/crews.png" alt="CrewAI Framework Overview" />
+  <img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>

| Component | Description | Key Features |
@@ -64,7 +64,7 @@ With over 100,000 developers certified through our community courses, CrewAI is
</Note>

<Frame caption="CrewAI Framework Overview">
-  <img src="images/flows.png" alt="CrewAI Framework Overview" />
+  <img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame>

| Component | Description | Key Features |
@@ -94,21 +94,21 @@ With over 100,000 developers certified through our community courses, CrewAI is
## When to Use Crews vs. Flows

<Note>
-Understanding when to use [Crews](/guides/crews/first-crew) versus [Flows](/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
+Understanding when to use [Crews](/en/guides/crews/first-crew) versus [Flows](/en/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
</Note>

| Use Case | Recommended Approach | Why? |
|:---------|:---------------------|:-----|
-| **Open-ended research** | [Crews](/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
-| **Content generation** | [Crews](/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
-| **Decision workflows** | [Flows](/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
-| **API orchestration** | [Flows](/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
-| **Hybrid applications** | Combined approach | Use [Flows](/guides/flows/first-flow) to orchestrate overall process with [Crews](/guides/crews/first-crew) handling complex subtasks |
+| **Open-ended research** | [Crews](/en/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
+| **Content generation** | [Crews](/en/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
+| **Decision workflows** | [Flows](/en/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
+| **API orchestration** | [Flows](/en/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
+| **Hybrid applications** | Combined approach | Use [Flows](/en/guides/flows/first-flow) to orchestrate overall process with [Crews](/en/guides/crews/first-crew) handling complex subtasks |

### Decision Framework

-- **Choose [Crews](/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
-- **Choose [Flows](/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
+- **Choose [Crews](/en/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
+- **Choose [Flows](/en/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Combine both when:** Your application needs both structured processes and pockets of autonomous intelligence

## Why Choose CrewAI?
@@ -126,14 +126,14 @@ With over 100,000 developers certified through our community courses, CrewAI is
<Card
  title="Build Your First Crew"
  icon="users-gear"
-  href="/guides/crews/first-crew"
+  href="/en/guides/crews/first-crew"
>
  Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
</Card>
<Card
  title="Build Your First Flow"
  icon="diagram-project"
-  href="/guides/flows/first-flow"
+  href="/en/guides/flows/first-flow"
>
  Learn how to create structured, event-driven workflows with precise control over execution.
</Card>
@@ -143,14 +143,14 @@ With over 100,000 developers certified through our community courses, CrewAI is
<Card
  title="Install CrewAI"
  icon="wrench"
-  href="/installation"
+  href="/en/installation"
>
  Get started with CrewAI in your development environment.
</Card>
<Card
  title="Quick Start"
  icon="bolt"
-  href="/quickstart"
+  href="en/quickstart"
>
  Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
</Card>
@@ -161,4 +161,4 @@ With over 100,000 developers certified through our community courses, CrewAI is
>
  Connect with other developers, get help, and share your CrewAI experiences.
</Card>
</CardGroup>
@@ -12,38 +12,38 @@ This section provides comprehensive guides and tutorials to help you master Crew
### Core Concepts
<CardGroup cols={2}>
-  <Card title="Sequential Process" icon="list-ol" href="/learn/sequential-process">
+  <Card title="Sequential Process" icon="list-ol" href="/en/learn/sequential-process">
    Learn how to execute tasks in a sequential order for structured workflows.
  </Card>

-  <Card title="Hierarchical Process" icon="sitemap" href="/learn/hierarchical-process">
+  <Card title="Hierarchical Process" icon="sitemap" href="/en/learn/hierarchical-process">
    Implement hierarchical task execution with manager agents overseeing workflows.
  </Card>

-  <Card title="Conditional Tasks" icon="code-branch" href="/learn/conditional-tasks">
+  <Card title="Conditional Tasks" icon="code-branch" href="/en/learn/conditional-tasks">
    Create dynamic workflows with conditional task execution based on outcomes.
  </Card>

-  <Card title="Async Kickoff" icon="bolt" href="/learn/kickoff-async">
+  <Card title="Async Kickoff" icon="bolt" href="/en/learn/kickoff-async">
    Execute crews asynchronously for improved performance and concurrency.
  </Card>
</CardGroup>

### Agent Development
<CardGroup cols={2}>
-  <Card title="Customizing Agents" icon="user-gear" href="/learn/customizing-agents">
+  <Card title="Customizing Agents" icon="user-gear" href="/en/learn/customizing-agents">
    Learn how to customize agent behavior, roles, and capabilities.
  </Card>

-  <Card title="Coding Agents" icon="code" href="/learn/coding-agents">
+  <Card title="Coding Agents" icon="code" href="/en/learn/coding-agents">
    Build agents that can write, execute, and debug code automatically.
  </Card>

-  <Card title="Multimodal Agents" icon="images" href="/learn/multimodal-agents">
+  <Card title="Multimodal Agents" icon="images" href="/en/learn/multimodal-agents">
    Create agents that can process text, images, and other media types.
  </Card>

-  <Card title="Custom Manager Agent" icon="user-tie" href="/learn/custom-manager-agent">
+  <Card title="Custom Manager Agent" icon="user-tie" href="/en/learn/custom-manager-agent">
    Implement custom manager agents for complex hierarchical workflows.
  </Card>
</CardGroup>
@@ -52,38 +52,38 @@ This section provides comprehensive guides and tutorials to help you master Crew
### Workflow Control
<CardGroup cols={2}>
-  <Card title="Human in the Loop" icon="user-check" href="/learn/human-in-the-loop">
+  <Card title="Human in the Loop" icon="user-check" href="/en/learn/human-in-the-loop">
    Integrate human oversight and intervention into agent workflows.
  </Card>

-  <Card title="Human Input on Execution" icon="hand-paper" href="/learn/human-input-on-execution">
+  <Card title="Human Input on Execution" icon="hand-paper" href="/en/learn/human-input-on-execution">
    Allow human input during task execution for dynamic decision making.
  </Card>

-  <Card title="Replay Tasks" icon="rotate-left" href="/learn/replay-tasks-from-latest-crew-kickoff">
+  <Card title="Replay Tasks" icon="rotate-left" href="/en/learn/replay-tasks-from-latest-crew-kickoff">
    Replay and resume tasks from previous crew executions.
  </Card>

-  <Card title="Kickoff for Each" icon="repeat" href="/learn/kickoff-for-each">
+  <Card title="Kickoff for Each" icon="repeat" href="/en/learn/kickoff-for-each">
    Execute crews multiple times with different inputs efficiently.
  </Card>
</CardGroup>

### Customization & Integration
<CardGroup cols={2}>
-  <Card title="Custom LLM" icon="brain" href="/learn/custom-llm">
+  <Card title="Custom LLM" icon="brain" href="/en/learn/custom-llm">
    Integrate custom language models and providers with CrewAI.
  </Card>

-  <Card title="LLM Connections" icon="link" href="/learn/llm-connections">
+  <Card title="LLM Connections" icon="link" href="/en/learn/llm-connections">
    Configure and manage connections to various LLM providers.
  </Card>

-  <Card title="Create Custom Tools" icon="wrench" href="/learn/create-custom-tools">
+  <Card title="Create Custom Tools" icon="wrench" href="/en/learn/create-custom-tools">
    Build custom tools to extend agent capabilities.
  </Card>

-  <Card title="Using Annotations" icon="at" href="/learn/using-annotations">
+  <Card title="Using Annotations" icon="at" href="/en/learn/using-annotations">
    Use Python annotations for cleaner, more maintainable code.
  </Card>
</CardGroup>
@@ -92,18 +92,18 @@ This section provides comprehensive guides and tutorials to help you master Crew
### Content & Media
<CardGroup cols={2}>
-  <Card title="DALL-E Image Generation" icon="image" href="/learn/dalle-image-generation">
+  <Card title="DALL-E Image Generation" icon="image" href="/en/learn/dalle-image-generation">
    Generate images using DALL-E integration with your agents.
  </Card>

-  <Card title="Bring Your Own Agent" icon="user-plus" href="/learn/bring-your-own-agent">
+  <Card title="Bring Your Own Agent" icon="user-plus" href="/en/learn/bring-your-own-agent">
    Integrate existing agents and models into CrewAI workflows.
  </Card>
</CardGroup>

### Tool Management
<CardGroup cols={2}>
-  <Card title="Force Tool Output as Result" icon="hammer" href="/learn/force-tool-output-as-result">
+  <Card title="Force Tool Output as Result" icon="hammer" href="/en/learn/force-tool-output-as-result">
    Configure tools to return their output directly as task results.
  </Card>
</CardGroup>
@@ -155,4 +155,4 @@ This section provides comprehensive guides and tutorials to help you master Crew
- **Examples**: Check the Examples section for complete working implementations
- **Support**: Contact [support@crewai.com](mailto:support@crewai.com) for technical assistance

Start with the guides that match your current needs and gradually explore more advanced topics as you become comfortable with the fundamentals.
@@ -6,11 +6,11 @@ icon: plug
## Overview

The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) provides a standardized way for AI agents to provide context to LLMs by communicating with external services, known as MCP Servers.
The `crewai-tools` library extends CrewAI's capabilities by allowing you to seamlessly integrate tools from these MCP servers into your agents.
This gives your crews access to a vast ecosystem of functionalities.

We currently support the following transport mechanisms:

- **Stdio**: for local servers (communication via standard input/output between processes on the same machine)
- **Server-Sent Events (SSE)**: for remote servers (unidirectional, real-time data streaming from server to client over HTTP)
@@ -52,27 +52,27 @@ from mcp import StdioServerParameters # For Stdio Server
# Example server_params (choose one based on your server type):
# 1. Stdio Server:
server_params=StdioServerParameters(
    command="python3",
    args=["servers/your_server.py"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

# 2. SSE Server:
server_params = {
    "url": "http://localhost:8000/sse",
    "transport": "sse"
}

# 3. Streamable HTTP Server:
server_params = {
    "url": "http://localhost:8001/mcp",
    "transport": "streamable-http"
}

# Example usage (uncomment and adapt once server_params is set):
with MCPServerAdapter(server_params) as mcp_tools:
    print(f"Available tools: {[tool.name for tool in mcp_tools]}")

    my_agent = Agent(
        role="MCP Tool User",
        goal="Utilize tools from an MCP server.",
@@ -85,45 +85,132 @@ with MCPServerAdapter(server_params) as mcp_tools:
```
This general pattern shows how to integrate tools. For specific examples tailored to each transport, refer to the detailed guides below.

## Filtering Tools

There are two ways to filter tools:

1. Accessing a specific tool using dictionary-style indexing.
2. Pass a list of tool names to the `MCPServerAdapter` constructor.

### Accessing a specific tool using dictionary-style indexing.

```python
with MCPServerAdapter(server_params) as mcp_tools:
    print(f"Available tools: {[tool.name for tool in mcp_tools]}")

    my_agent = Agent(
        role="MCP Tool User",
        goal="Utilize tools from an MCP server.",
        backstory="I can connect to MCP servers and use their tools.",
        tools=[mcp_tools["tool_name"]],  # Pass the loaded tools to your agent
        reasoning=True,
        verbose=True
    )
    # ... rest of your crew setup ...
```

### Pass a list of tool names to the `MCPServerAdapter` constructor.

```python
with MCPServerAdapter(server_params, "tool_name") as mcp_tools:
    print(f"Available tools: {[tool.name for tool in mcp_tools]}")

    my_agent = Agent(
        role="MCP Tool User",
        goal="Utilize tools from an MCP server.",
        backstory="I can connect to MCP servers and use their tools.",
        tools=mcp_tools,  # Pass the loaded tools to your agent
        reasoning=True,
        verbose=True
    )
    # ... rest of your crew setup ...
```
## Using with CrewBase

To use MCP server tools within a `CrewBase` class, use the `get_mcp_tools()` method. Server configurations should be provided via the `mcp_server_params` attribute. You can pass either a single configuration or a list of multiple server configurations.
```python
@CrewBase
class CrewWithMCP:
    # ... define your agents and tasks config file ...

    mcp_server_params = [
        # Streamable HTTP Server
        {
            "url": "http://localhost:8001/mcp",
            "transport": "streamable-http"
        },
        # SSE Server
        {
            "url": "http://localhost:8000/sse",
            "transport": "sse"
        },
        # StdIO Server
        StdioServerParameters(
            command="python3",
            args=["servers/your_stdio_server.py"],
            env={"UV_PYTHON": "3.12", **os.environ},
        )
    ]

    @agent
    def your_agent(self):
        return Agent(config=self.agents_config["your_agent"], tools=self.get_mcp_tools())  # get all available tools

    # ... rest of your crew setup ...
```

You can filter which tools are available to your agent by passing a list of tool names to the `get_mcp_tools` method.

```python
@agent
def another_agent(self):
    return Agent(
        config=self.agents_config["your_agent"],
        tools=self.get_mcp_tools("tool_1", "tool_2")  # get specific tools
    )
```

## Explore MCP Integrations

<CardGroup cols={2}>
  <Card
    title="Stdio Transport"
    icon="server"
-   href="/mcp/stdio"
+   href="/en/mcp/stdio"
    color="#3B82F6"
  >
    Connect to local MCP servers via standard input/output. Ideal for scripts and local executables.
  </Card>
  <Card
    title="SSE Transport"
    icon="wifi"
-   href="/mcp/sse"
+   href="/en/mcp/sse"
    color="#10B981"
  >
    Integrate with remote MCP servers using Server-Sent Events for real-time data streaming.
  </Card>
  <Card
    title="Streamable HTTP Transport"
    icon="globe"
-   href="/mcp/streamable-http"
+   href="/en/mcp/streamable-http"
    color="#F59E0B"
  >
    Utilize flexible Streamable HTTP for robust communication with remote MCP servers.
  </Card>
  <Card
    title="Connecting to Multiple Servers"
    icon="layer-group"
-   href="/mcp/multiple-servers"
+   href="/en/mcp/multiple-servers"
    color="#8B5CF6"
  >
    Aggregate tools from several MCP servers simultaneously using a single adapter.
  </Card>
  <Card
    title="Security Considerations"
    icon="lock"
-   href="/mcp/security"
+   href="/en/mcp/security"
    color="#EF4444"
  >
    Review important security best practices for MCP integration to keep your agents safe.
@@ -132,7 +132,7 @@ This general pattern shows how to integrate tools. For specific examples tailore

Checkout this repository for full demos and examples of MCP integration with CrewAI! 👇

<Card
  title="GitHub Repository"
  icon="github"
  href="https://github.com/tonykipkemboi/crewai-mcp-demo"
@@ -147,7 +147,7 @@ Always ensure that you trust an MCP Server before using it.
</Warning>

#### Security Warning: DNS Rebinding Attacks
SSE transports can be vulnerable to DNS rebinding attacks if not properly secured.
To prevent this:

1. **Always validate Origin headers** on incoming SSE connections to ensure they come from expected sources
@@ -159,6 +159,6 @@ Without these protections, attackers could use DNS rebinding to interact with lo
For more details, see the [Anthropic's MCP Transport Security docs](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).

### Limitations
* **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`.
  Other MCP primitives like `prompts` or `resources` are not directly integrated as CrewAI components through this adapter at this time.
* **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs might require custom handling if not fitting this pattern.
docs/en/observability/maxim.mdx (new file, 231 lines)
@@ -0,0 +1,231 @@
---
title: "Maxim Integration"
description: "Start Agent monitoring, evaluation, and observability"
icon: "infinity"
---

# Maxim Overview

Maxim AI provides comprehensive agent monitoring, evaluation, and observability for your CrewAI applications. With Maxim's one-line integration, you can easily trace and analyse agent interactions, performance metrics, and more.

## Features

### Prompt Management

Maxim's Prompt Management capabilities enable you to create, organize, and optimize prompts for your CrewAI agents. Rather than hardcoding instructions, leverage Maxim's SDK to dynamically retrieve and apply version-controlled prompts.

<Tabs>
  <Tab title="Prompt Playground">
    Create, refine, experiment with, and deploy your prompts via the playground. Organize your prompts using folders and versions, experiment with real-world cases by linking tools and context, and deploy based on custom logic.

    Easily experiment across models by [**configuring models**](https://www.getmaxim.ai/docs/introduction/quickstart/setting-up-workspace#add-model-api-keys) and selecting the relevant model from the dropdown at the top of the prompt playground.

    <img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_playground.png'> </img>
  </Tab>
  <Tab title="Prompt Versions">
    As teams build their AI applications, a big part of experimentation is iterating on the prompt structure. In order to collaborate effectively and organize your changes clearly, Maxim allows prompt versioning and comparison runs across versions.

    <img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_versions.png'> </img>
  </Tab>
  <Tab title="Prompt Comparisons">
    Iterating on prompts as you evolve your AI application requires experimenting across models, prompt structures, and more. To compare versions and make informed decisions about changes, the comparison playground provides a side-by-side view of results.

    ## **Why use Prompt comparison?**

    Prompt comparison combines multiple single Prompts into one view, enabling a streamlined approach for various workflows:

    1. **Model comparison**: Evaluate the performance of different models on the same Prompt.
    2. **Prompt optimization**: Compare different versions of a Prompt to identify the most effective formulation.
    3. **Cross-Model consistency**: Ensure consistent outputs across various models for the same Prompt.
    4. **Performance benchmarking**: Analyze metrics like latency, cost, and token count across different models and Prompts.
  </Tab>
</Tabs>
### Observability & Evals

Maxim AI provides comprehensive observability & evaluation for your CrewAI agents, helping you understand exactly what's happening during each execution.

<Tabs>
  <Tab title="Agent Tracing">
    Track your agent's complete lifecycle, including tool calls, agent trajectories, and decision flows effortlessly.

    <img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_agent_tracking.png'> </img>
  </Tab>
  <Tab title="Analytics + Evals">
    Run detailed evaluations on full traces or individual nodes with support for:

    - Multi-step interactions and granular trace analysis
    - Session Level Evaluations
    - Simulations for real-world testing

    <img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_trace_eval.png'> </img>

    <CardGroup cols={3}>
      <Card title="Auto Evals on Logs" icon="e" href="https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/auto-evaluation">
        <p>
          Evaluate captured logs automatically from the UI based on filters and sampling
        </p>
      </Card>
      <Card title="Human Evals on Logs" icon="hand" href="https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/human-evaluation">
        <p>
          Use human evaluation or rating to assess the quality of your logs and evaluate them.
        </p>
      </Card>
      <Card title="Node Level Evals" icon="road" href="https://www.getmaxim.ai/docs/observe/how-to/evaluate-logs/node-level-evaluation">
        <p>
          Evaluate any component of your trace or log to gain insights into your agent's behavior.
        </p>
      </Card>
    </CardGroup>
    ---
  </Tab>
  <Tab title="Alerting">
    Set thresholds on **error**, **cost**, **token usage**, **user feedback**, and **latency**, and get real-time alerts via Slack or PagerDuty.

    <img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_alerts_1.png'> </img>
  </Tab>
  <Tab title="Dashboards">
    Visualize Traces over time, usage metrics, latency & error rates with ease.

    <img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/maxim_dashboard_1.png'> </img>
  </Tab>
</Tabs>

## Getting Started
### Prerequisites

- Python version \>=3.10
- A Maxim account ([sign up here](https://getmaxim.ai/))
- A generated Maxim API key
- A CrewAI project

### Installation

Install the Maxim SDK via pip:

```bash
pip install maxim-py
```

Or add it to your `requirements.txt`:

```
maxim-py
```

### Basic Setup
### 1. Set up environment variables

Create a `.env` file in your project root:

```sh
# Maxim API Configuration
MAXIM_API_KEY=your_api_key_here
MAXIM_LOG_REPO_ID=your_repo_id_here
```

### 2. Import the required packages

```python
from crewai import Agent, Task, Crew, Process
from maxim import Maxim
from maxim.logger.crewai import instrument_crewai
```

### 3. Initialise Maxim with your API key

```python
# Instrument CrewAI with just one line
maxim = Maxim()
instrument_crewai(maxim.logger())
```
### 4. Create and run your CrewAI application as usual

```python
# Create your agent
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI',
    backstory="You are an expert researcher at a tech think tank...",
    verbose=True,
    llm=llm  # assumes an LLM instance configured earlier (not shown here)
)

# Define the task
research_task = Task(
    description="Research the latest AI advancements...",
    expected_output="",
    agent=researcher
)

# Configure and run the crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    verbose=True
)

try:
    result = crew.kickoff()
finally:
    maxim.cleanup()  # Ensure cleanup happens even if errors occur
```

That's it\! All your CrewAI agent interactions will now be logged and available in your Maxim dashboard.
Check this Google Colab Notebook for a quick reference - [Notebook](https://colab.research.google.com/drive/1ZKIZWsmgQQ46n8TH9zLsT1negKkJA6K8?usp=sharing)

## Viewing Your Traces

After running your CrewAI application:

1. Log in to your [Maxim Dashboard](https://app.getmaxim.ai/login)
2. Navigate to your repository
3. View detailed agent traces, including:
   - Agent conversations
   - Tool usage patterns
   - Performance metrics
   - Cost analytics

<img src='https://raw.githubusercontent.com/akmadan/crewAI/docs_maxim_observability/docs/images/crewai_traces.gif'> </img>

## Troubleshooting

### Common Issues

- **No traces appearing**: Ensure your API key and repository ID are correct
- Ensure you've called **`instrument_crewai()`** _before_ running your crew. This initializes logging hooks correctly.
- Set `debug=True` in your `instrument_crewai()` call to surface any internal errors:

  ```python
  instrument_crewai(logger, debug=True)
  ```

- Configure your agents with `verbose=True` to capture detailed logs:

  ```python
  agent = Agent(..., verbose=True)
  ```

- Double-check that `instrument_crewai()` is called **before** creating or executing agents. This might be obvious, but it's a common oversight.

## Resources

<CardGroup cols="3">
  <Card title="CrewAI Docs" icon="book" href="https://docs.crewai.com/">
    Official CrewAI documentation
  </Card>
  <Card title="Maxim Docs" icon="book" href="https://getmaxim.ai/docs">
    Official Maxim documentation
  </Card>
  <Card title="Maxim Github" icon="github" href="https://github.com/maximhq">
    Maxim Github
  </Card>
</CardGroup>
Some files were not shown because too many files have changed in this diff.