docs: Add transparency features for prompts and memory systems (#2902)

* docs: Fix major memory system documentation issues - Remove misleading deprecation warnings, fix confusing comments, clearly separate three memory approaches, provide accurate examples that match implementation

* fix: Correct broken image paths in README - Update crewai_logo.png and asset.png paths to point to docs/images/ directory instead of docs/ directly

* docs: Add system prompt transparency and customization guide - Add 'Understanding Default System Instructions' section to address black-box concerns - Document what CrewAI automatically injects into prompts - Provide code examples to inspect complete system prompts - Show 3 methods to override default instructions - Include observability integration examples with Langfuse - Add best practices for production prompt management

* docs: Fix implementation accuracy issues in memory documentation - Fix Ollama embedding URL parameter and remove unsupported Cohere input_type parameter

* docs: Reference observability docs instead of showing specific tool examples

* docs: Reorganize knowledge documentation for better developer experience - Move quickstart examples right after overview for immediate hands-on experience - Create logical learning progression: basics → configuration → advanced → troubleshooting - Add comprehensive agent vs crew knowledge guide with working examples - Consolidate debugging and troubleshooting in dedicated section - Organize best practices by topic in accordion format - Improve content flow from simple concepts to advanced features - Ensure all examples are grounded in actual codebase implementation

* docs: enhance custom LLM documentation with comprehensive examples and accurate imports

* docs: reorganize observability tools into dedicated section with comprehensive overview and improved navigation

* docs: rename how-to section to learn and add comprehensive overview page

* docs: finalize documentation reorganization and update navigation labels

* docs: enhance README with comprehensive badges, navigation links, and getting started video
Tony Kipkemboi
2025-05-27 13:08:40 -04:00
committed by GitHub
parent 4e0ce9adfe
commit dfc4255f2f
39 changed files with 2241 additions and 1172 deletions


@@ -1,27 +1,70 @@
<div align="center">
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="docs/images/crewai_logo.png" width="600px" alt="Open source Multi-AI Agent orchestration framework">
</a>
</p>
<p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
<a href="https://trendshift.io/repositories/11239" target="_blank">
<img src="https://trendshift.io/api/badge/repositories/11239" alt="crewAIInc%2FcrewAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
</p>
<p align="center">
<a href="https://crewai.com">Homepage</a>
·
<a href="https://docs.crewai.com">Docs</a>
·
<a href="https://app.crewai.com">Start Cloud Trial</a>
·
<a href="https://blog.crewai.com">Blog</a>
·
<a href="https://community.crewai.com">Forum</a>
</p>
</div>
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="https://img.shields.io/github/stars/crewAIInc/crewAI" alt="GitHub Repo stars">
</a>
<a href="https://github.com/crewAIInc/crewAI/network/members">
<img src="https://img.shields.io/github/forks/crewAIInc/crewAI" alt="GitHub forks">
</a>
<a href="https://github.com/crewAIInc/crewAI/issues">
<img src="https://img.shields.io/github/issues/crewAIInc/crewAI" alt="GitHub issues">
</a>
<a href="https://github.com/crewAIInc/crewAI/pulls">
<img src="https://img.shields.io/github/issues-pr/crewAIInc/crewAI" alt="GitHub pull requests">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/v/crewai" alt="PyPI version">
</a>
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/dm/crewai" alt="PyPI downloads">
</a>
<a href="https://twitter.com/crewAIInc">
<img src="https://img.shields.io/twitter/follow/crewAIInc?style=social" alt="Twitter Follow">
</a>
</p>
### Fast and Flexible Multi-Agent Automation Framework
> CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely **independent of LangChain or other agent frameworks**.
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI Enterprise Suite
CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free
@@ -35,21 +78,9 @@ You can try one part of the suite the [Crew Control Plane for free](https://app.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI Enterprise is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
- [Why CrewAI?](#why-crewai)
@@ -88,7 +119,12 @@ CrewAI empowers developers and enterprises to confidently build intelligent auto
## Getting Started
### Learning Resources
Set up and run your first CrewAI agents by following this tutorial.
[![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
Learn CrewAI through our comprehensive courses:

File diff suppressed because it is too large.


@@ -46,22 +46,96 @@ crew = Crew(
- **Storage Location**: Platform-specific location via `appdirs` package
- **Custom Storage Directory**: Set `CREWAI_STORAGE_DIR` environment variable
## Storage Location Transparency
<Info>
**Understanding Storage Locations**: CrewAI uses platform-specific directories to store memory and knowledge files following OS conventions. Understanding these locations helps with production deployments, backups, and debugging.
</Info>
### Where CrewAI Stores Files
By default, CrewAI uses the `appdirs` library to determine storage locations following platform conventions. Here's exactly where your files are stored:
#### Default Storage Locations by Platform
**macOS:**
```
~/Library/Application Support/CrewAI/{project_name}/
├── knowledge/ # Knowledge base ChromaDB files
├── short_term_memory/ # Short-term memory ChromaDB files
├── long_term_memory/ # Long-term memory ChromaDB files
├── entities/ # Entity memory ChromaDB files
└── long_term_memory_storage.db # SQLite database
```
**Linux:**
```
~/.local/share/CrewAI/{project_name}/
├── knowledge/
├── short_term_memory/
├── long_term_memory/
├── entities/
└── long_term_memory_storage.db
```
**Windows:**
```
C:\Users\{username}\AppData\Local\CrewAI\{project_name}\
├── knowledge\
├── short_term_memory\
├── long_term_memory\
├── entities\
└── long_term_memory_storage.db
```
### Finding Your Storage Location
To see exactly where CrewAI is storing files on your system:
```python
from crewai.utilities.paths import db_storage_path
import os

# Get the base storage path
storage_path = db_storage_path()
print(f"CrewAI storage location: {storage_path}")

# List all CrewAI storage directories
if os.path.exists(storage_path):
    print("\nStored files and directories:")
    for item in os.listdir(storage_path):
        item_path = os.path.join(storage_path, item)
        if os.path.isdir(item_path):
            print(f"📁 {item}/")
            # Show ChromaDB collections
            if os.path.exists(item_path):
                for subitem in os.listdir(item_path):
                    print(f"   └── {subitem}")
        else:
            print(f"📄 {item}")
else:
    print("No CrewAI storage directory found yet.")
```
### Controlling Storage Locations
#### Option 1: Environment Variable (Recommended)
```python
import os
from crewai import Crew

# Set custom storage location
os.environ["CREWAI_STORAGE_DIR"] = "./my_project_storage"

# All memory and knowledge will now be stored in ./my_project_storage/
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True
)
```
#### Option 2: Custom Storage Paths
@@ -69,16 +143,547 @@
```python
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure custom storage location
custom_storage_path = "./storage"
os.makedirs(custom_storage_path, exist_ok=True)

crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path=f"{custom_storage_path}/memory.db"
        )
    )
)
```
#### Option 3: Project-Specific Storage
```python
import os
from pathlib import Path
# Store in project directory
project_root = Path(__file__).parent
storage_dir = project_root / "crewai_storage"
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
# Now all storage will be in your project directory
```
### Embedding Provider Defaults
<Info>
**Default Embedding Provider**: CrewAI defaults to OpenAI embeddings for consistency and reliability. You can easily customize this to match your LLM provider or use local embeddings.
</Info>
#### Understanding Default Behavior
```python
# When using Claude as your LLM...
from crewai import Agent, LLM

agent = Agent(
    role="Analyst",
    goal="Analyze data",
    backstory="Expert analyst",
    llm=LLM(provider="anthropic", model="claude-3-sonnet")  # Using Claude
)

# CrewAI will use OpenAI embeddings by default for consistency
# You can easily customize this to match your preferred provider
```
#### Customizing Embedding Providers
```python
from crewai import Crew

# Option 1: Match your LLM provider
crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,
    embedder={
        "provider": "anthropic",  # Match your LLM provider
        "config": {
            "api_key": "your-anthropic-key",
            "model": "text-embedding-3-small"
        }
    }
)

# Option 2: Use local embeddings (no external API calls)
crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {"model": "mxbai-embed-large"}
    }
)
```
### Debugging Storage Issues
#### Check Storage Permissions
```python
import os
from crewai.utilities.paths import db_storage_path

storage_path = db_storage_path()
print(f"Storage path: {storage_path}")
print(f"Path exists: {os.path.exists(storage_path)}")
print(f"Is writable: {os.access(storage_path, os.W_OK) if os.path.exists(storage_path) else 'Path does not exist'}")

# Create with proper permissions
if not os.path.exists(storage_path):
    os.makedirs(storage_path, mode=0o755, exist_ok=True)
    print(f"Created storage directory: {storage_path}")
```
#### Inspect ChromaDB Collections
```python
import os
import chromadb
from crewai.utilities.paths import db_storage_path

# Connect to CrewAI's ChromaDB
storage_path = db_storage_path()
chroma_path = os.path.join(storage_path, "knowledge")

if os.path.exists(chroma_path):
    client = chromadb.PersistentClient(path=chroma_path)
    collections = client.list_collections()
    print("ChromaDB Collections:")
    for collection in collections:
        print(f"  - {collection.name}: {collection.count()} documents")
else:
    print("No ChromaDB storage found")
```
#### Reset Storage (Debugging)
```python
from crewai import Crew

# Reset all memory storage
crew = Crew(agents=[...], tasks=[...], memory=True)

# Reset specific memory types
crew.reset_memories(command_type='short')      # Short-term memory
crew.reset_memories(command_type='long')       # Long-term memory
crew.reset_memories(command_type='entity')     # Entity memory
crew.reset_memories(command_type='knowledge')  # Knowledge storage
```
### Production Best Practices
1. **Set `CREWAI_STORAGE_DIR`** to a known location in production for better control
2. **Choose explicit embedding providers** to match your LLM setup
3. **Monitor storage directory size** for large-scale deployments
4. **Include storage directories** in your backup strategy
5. **Set appropriate file permissions** (0o755 for directories, 0o644 for files)
6. **Use project-relative paths** for containerized deployments (a combined sketch follows this list)
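As an illustration of how several of these practices combine, here is a minimal sketch of a container-friendly configuration; the `/data/crewai` mount point and the embedder choice are assumptions for the example, not requirements:

```python
import os
from crewai import Crew

# Assumed container mount point for the example; adjust to your deployment
storage_dir = os.getenv("CREWAI_STORAGE_DIR", "/data/crewai")
os.makedirs(storage_dir, mode=0o755, exist_ok=True)
os.environ["CREWAI_STORAGE_DIR"] = storage_dir

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    # Explicit embedder choice so behavior doesn't depend on library defaults
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"}
    }
)
```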
### Common Storage Issues
**"ChromaDB permission denied" errors:**
```bash
# Fix permissions
chmod -R 755 ~/.local/share/CrewAI/
```
**"Database is locked" errors:**
```python
# Ensure only one CrewAI instance accesses storage
import fcntl
import os
from crewai.utilities.paths import db_storage_path

storage_path = db_storage_path()
lock_file = os.path.join(storage_path, ".crewai.lock")

with open(lock_file, 'w') as f:
    fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    # Your CrewAI code here
```
**Storage not persisting between runs:**
```python
# Verify storage location is consistent
import os
from crewai.utilities.paths import db_storage_path

print("CREWAI_STORAGE_DIR:", os.getenv("CREWAI_STORAGE_DIR"))
print("Current working directory:", os.getcwd())
print("Computed storage path:", db_storage_path())
```
## Custom Embedder Configuration
CrewAI supports multiple embedding providers to give you flexibility in choosing the best option for your use case. Here's a comprehensive guide to configuring different embedding providers for your memory system.
### Why Choose Different Embedding Providers?
- **Cost Optimization**: Local embeddings (Ollama) are free after initial setup
- **Privacy**: Keep your data local with Ollama or use your preferred cloud provider
- **Performance**: Some models work better for specific domains or languages
- **Consistency**: Match your embedding provider with your LLM provider
- **Compliance**: Meet specific regulatory or organizational requirements
### OpenAI Embeddings (Default)
OpenAI provides reliable, high-quality embeddings that work well for most use cases.
```python
from crewai import Crew

# Basic OpenAI configuration (uses environment OPENAI_API_KEY)
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"  # or "text-embedding-3-large"
        }
    }
)

# Advanced OpenAI configuration
crew = Crew(
    memory=True,
    embedder={
        "provider": "openai",
        "config": {
            "api_key": "your-openai-api-key",  # Optional: override env var
            "model": "text-embedding-3-large",
            "dimensions": 1536,                # Optional: reduce dimensions for smaller storage
            "organization_id": "your-org-id"   # Optional: for organization accounts
        }
    }
)
```
### Azure OpenAI Embeddings
For enterprise users with Azure OpenAI deployments.
```python
crew = Crew(
memory=True,
embedder={
"provider": "openai", # Use openai provider for Azure
"config": {
"api_key": "your-azure-api-key",
"api_base": "https://your-resource.openai.azure.com/",
"api_type": "azure",
"api_version": "2023-05-15",
"model": "text-embedding-3-small",
"deployment_id": "your-deployment-name" # Azure deployment name
}
}
)
```
### Google AI Embeddings
Use Google's text embedding models for integration with Google Cloud services.
```python
crew = Crew(
memory=True,
embedder={
"provider": "google",
"config": {
"api_key": "your-google-api-key",
"model": "text-embedding-004" # or "text-embedding-preview-0409"
}
}
)
```
### Vertex AI Embeddings
For Google Cloud users with Vertex AI access.
```python
crew = Crew(
memory=True,
embedder={
"provider": "vertexai",
"config": {
"project_id": "your-gcp-project-id",
"region": "us-central1", # or your preferred region
"api_key": "your-service-account-key",
"model_name": "textembedding-gecko"
}
}
)
```
### Ollama Embeddings (Local)
Run embeddings locally for privacy and cost savings.
```python
# First, install and run Ollama locally, then pull an embedding model:
# ollama pull mxbai-embed-large
crew = Crew(
memory=True,
embedder={
"provider": "ollama",
"config": {
"model": "mxbai-embed-large", # or "nomic-embed-text"
"url": "http://localhost:11434/api/embeddings" # Default Ollama URL
}
}
)
# For custom Ollama installations
crew = Crew(
memory=True,
embedder={
"provider": "ollama",
"config": {
"model": "mxbai-embed-large",
"url": "http://your-ollama-server:11434/api/embeddings"
}
}
)
```
### Cohere Embeddings
Use Cohere's embedding models for multilingual support.
```python
crew = Crew(
memory=True,
embedder={
"provider": "cohere",
"config": {
"api_key": "your-cohere-api-key",
"model": "embed-english-v3.0" # or "embed-multilingual-v3.0"
}
}
)
```
### VoyageAI Embeddings
High-performance embeddings optimized for retrieval tasks.
```python
crew = Crew(
memory=True,
embedder={
"provider": "voyageai",
"config": {
"api_key": "your-voyage-api-key",
"model": "voyage-large-2", # or "voyage-code-2" for code
"input_type": "document" # or "query"
}
}
)
```
### AWS Bedrock Embeddings
For AWS users with Bedrock access.
```python
crew = Crew(
memory=True,
embedder={
"provider": "bedrock",
"config": {
"aws_access_key_id": "your-access-key",
"aws_secret_access_key": "your-secret-key",
"region_name": "us-east-1",
"model": "amazon.titan-embed-text-v1"
}
}
)
```
### Hugging Face Embeddings
Use open-source models from Hugging Face.
```python
crew = Crew(
memory=True,
embedder={
"provider": "huggingface",
"config": {
"api_key": "your-hf-token", # Optional for public models
"model": "sentence-transformers/all-MiniLM-L6-v2",
"api_url": "https://api-inference.huggingface.co" # or your custom endpoint
}
}
)
```
### IBM Watson Embeddings
For IBM Cloud users.
```python
crew = Crew(
memory=True,
embedder={
"provider": "watson",
"config": {
"api_key": "your-watson-api-key",
"url": "your-watson-instance-url",
"model": "ibm/slate-125m-english-rtrvr"
}
}
)
```
### Choosing the Right Embedding Provider
| Provider | Best For | Pros | Cons |
|:---------|:----------|:------|:------|
| **OpenAI** | General use, reliability | High quality, well-tested | Cost, requires API key |
| **Ollama** | Privacy, cost savings | Free, local, private | Requires local setup |
| **Google AI** | Google ecosystem | Good performance | Requires Google account |
| **Azure OpenAI** | Enterprise, compliance | Enterprise features | Complex setup |
| **Cohere** | Multilingual content | Great language support | Specialized use case |
| **VoyageAI** | Retrieval tasks | Optimized for search | Newer provider |
### Environment Variable Configuration
For security, store API keys in environment variables:
```python
import os
from crewai import Crew

# Set environment variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["GOOGLE_API_KEY"] = "your-google-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

# Use without exposing keys in code
crew = Crew(
    memory=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
            # API key automatically loaded from environment
        }
    }
)
```
### Testing Different Embedding Providers
Compare embedding providers for your specific use case:
```python
from crewai import Crew
from crewai.utilities.paths import db_storage_path

# Test different providers with the same data
providers_to_test = [
    {
        "name": "OpenAI",
        "config": {
            "provider": "openai",
            "config": {"model": "text-embedding-3-small"}
        }
    },
    {
        "name": "Ollama",
        "config": {
            "provider": "ollama",
            "config": {"model": "mxbai-embed-large"}
        }
    }
]

for provider in providers_to_test:
    print(f"\nTesting {provider['name']} embeddings...")

    # Create crew with specific embedder
    crew = Crew(
        agents=[...],
        tasks=[...],
        memory=True,
        embedder=provider['config']
    )

    # Run your test and measure performance
    result = crew.kickoff()
    print(f"{provider['name']} completed successfully")
```
### Troubleshooting Embedding Issues
**Model not found errors:**
```python
# Verify model availability
from crewai.utilities.embedding_configurator import EmbeddingConfigurator

configurator = EmbeddingConfigurator()
try:
    embedder = configurator.configure_embedder({
        "provider": "ollama",
        "config": {"model": "mxbai-embed-large"}
    })
    print("Embedder configured successfully")
except Exception as e:
    print(f"Configuration error: {e}")
```
**API key issues:**
```python
import os

# Check if API keys are set
required_keys = ["OPENAI_API_KEY", "GOOGLE_API_KEY", "COHERE_API_KEY"]
for key in required_keys:
    if os.getenv(key):
        print(f"✅ {key} is set")
    else:
        print(f"❌ {key} is not set")
```
**Performance comparison:**
```python
import time
from crewai import Crew

def test_embedding_performance(embedder_config, test_text="This is a test document"):
    start_time = time.time()

    crew = Crew(
        agents=[...],
        tasks=[...],
        memory=True,
        embedder=embedder_config
    )

    # Simulate memory operation
    crew.kickoff()

    end_time = time.time()
    return end_time - start_time

# Compare performance
openai_time = test_embedding_performance({
    "provider": "openai",
    "config": {"model": "text-embedding-3-small"}
})

ollama_time = test_embedding_performance({
    "provider": "ollama",
    "config": {"model": "mxbai-embed-large"}
})

print(f"OpenAI: {openai_time:.2f}s")
print(f"Ollama: {ollama_time:.2f}s")
```
## 2. User Memory with Mem0 (Legacy)
<Warning>


@@ -164,8 +164,7 @@
"tools/ai-ml/llamaindextool",
"tools/ai-ml/langchaintool",
"tools/ai-ml/ragtool",
"tools/ai-ml/codeinterpretertool",
"tools/ai-ml/patronustools"
"tools/ai-ml/codeinterpretertool"
]
},
{
@@ -190,40 +189,43 @@
]
},
{
"group": "Agent Monitoring & Observability",
"group": "Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
"observability/overview",
"observability/agentops",
"observability/arize-phoenix",
"observability/langfuse",
"observability/langtrace",
"observability/mlflow",
"observability/openlit",
"observability/opik",
"observability/patronus-evaluation",
"observability/portkey",
"observability/weave"
]
},
{
"group": "Learn",
"pages": [
"how-to/conditional-tasks",
"how-to/coding-agents",
"how-to/create-custom-tools",
"how-to/custom-llm",
"how-to/custom-manager-agent",
"how-to/customizing-agents",
"how-to/dalle-image-generation",
"how-to/force-tool-output-as-result",
"how-to/hierarchical-process",
"how-to/human-in-the-loop",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/llm-connections",
"how-to/multimodal-agents",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/sequential-process",
"how-to/using-annotations"
"learn/overview",
"learn/conditional-tasks",
"learn/coding-agents",
"learn/create-custom-tools",
"learn/custom-llm",
"learn/custom-manager-agent",
"learn/customizing-agents",
"learn/dalle-image-generation",
"learn/force-tool-output-as-result",
"learn/hierarchical-process",
"learn/human-in-the-loop",
"learn/human-input-on-execution",
"learn/kickoff-async",
"learn/kickoff-for-each",
"learn/llm-connections",
"learn/multimodal-agents",
"learn/replay-tasks-from-latest-crew-kickoff",
"learn/sequential-process",
"learn/using-annotations"
]
},
{
@@ -352,7 +354,7 @@
"navbar": {
"links": [
{
"label": "Start Free Trial",
"label": "Start Cloud Trial",
"href": "https://app.crewai.com"
}
],


@@ -6,7 +6,7 @@ icon: message-pen
## Why Customize Prompts?
Although CrewAI's default prompts work well for many scenarios, low-level customization opens the door to significantly more flexible and powerful agent behavior. Here's why you might want to take advantage of this deeper control:
1. **Optimize for specific LLMs** – Different models (such as GPT-4, Claude, or Llama) thrive with prompt formats tailored to their unique architectures.
2. **Change the language** – Build agents that operate exclusively in languages beyond English, handling nuances with precision.
@@ -20,13 +20,174 @@ This guide explores how to tap into CrewAI's prompts at a lower level, giving yo
Under the hood, CrewAI employs a modular prompt system that you can customize extensively:
- **Agent templates** – Govern each agent's approach to their assigned role.
- **Prompt slices** – Control specialized behaviors such as tasks, tool usage, and output structure.
- **Error handling** – Direct how agents respond to failures, exceptions, or timeouts.
- **Tool-specific prompts** – Define detailed instructions for how tools are invoked or utilized.
Check out the [original prompt templates in CrewAI's repository](https://github.com/crewAIInc/crewAI/blob/main/src/crewai/translations/en.json) to see how these elements are organized. From there, you can override or adapt them as needed to unlock advanced behaviors.
## Understanding Default System Instructions
<Warning>
**Production Transparency Issue**: CrewAI automatically injects default instructions into your prompts that you might not be aware of. This section explains what's happening under the hood and how to gain full control.
</Warning>
When you define an agent with `role`, `goal`, and `backstory`, CrewAI automatically adds additional system instructions that control formatting and behavior. Understanding these default injections is crucial for production systems where you need full prompt transparency.
### What CrewAI Automatically Injects
Based on your agent configuration, CrewAI adds different default instructions:
#### For Agents Without Tools
```text
"I MUST use these formats, my job depends on it!"
```
#### For Agents With Tools
```text
"IMPORTANT: Use the following format in your response:
Thought: you should always think about what to do
Action: the action to take, only one name of [tool_names]
Action Input: the input to the action, just a simple JSON object...
```
#### For Structured Outputs (JSON/Pydantic)
````text
"Ensure your final answer contains only the content in the following format: {output_format}
Ensure the final output does not include any code block markers like ```json or ```python."
````
### Viewing the Complete System Prompt
To see exactly what prompt is being sent to your LLM, you can inspect the generated prompt:
```python
from crewai import Agent, Crew, Task
from crewai.utilities.prompts import Prompts

# Create your agent
agent = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="You are an expert data analyst with 10 years of experience.",
    verbose=True
)

# Create a sample task
task = Task(
    description="Analyze the sales data and identify trends",
    expected_output="A detailed analysis with key insights and trends",
    agent=agent
)

# Create the prompt generator
prompt_generator = Prompts(
    agent=agent,
    has_tools=len(agent.tools) > 0,
    use_system_prompt=agent.use_system_prompt
)

# Generate and inspect the actual prompt
generated_prompt = prompt_generator.task_execution()

# Print the complete system prompt that will be sent to the LLM
if "system" in generated_prompt:
    print("=== SYSTEM PROMPT ===")
    print(generated_prompt["system"])
    print("\n=== USER PROMPT ===")
    print(generated_prompt["user"])
else:
    print("=== COMPLETE PROMPT ===")
    print(generated_prompt["prompt"])

# You can also see how the task description gets formatted
print("\n=== TASK CONTEXT ===")
print(f"Task Description: {task.description}")
print(f"Expected Output: {task.expected_output}")
```
### Overriding Default Instructions
You have several options to gain full control over the prompts:
#### Option 1: Custom Templates (Recommended)
```python
from crewai import Agent

# Define your own system template without default instructions
custom_system_template = """You are {role}. {backstory}
Your goal is: {goal}

Respond naturally and conversationally. Focus on providing helpful, accurate information."""

custom_prompt_template = """Task: {input}

Please complete this task thoughtfully."""

agent = Agent(
    role="Research Assistant",
    goal="Help users find accurate information",
    backstory="You are a helpful research assistant.",
    system_template=custom_system_template,
    prompt_template=custom_prompt_template,
    use_system_prompt=True  # Use separate system/user messages
)
```
#### Option 2: Custom Prompt File
Create a `custom_prompts.json` file to override specific prompt slices:
```json
{
  "slices": {
    "no_tools": "\nProvide your best answer in a natural, conversational way.",
    "tools": "\nYou have access to these tools: {tools}\n\nUse them when helpful, but respond naturally.",
    "formatted_task_instructions": "Format your response as: {output_format}"
  }
}
```
Then use it in your crew:
```python
crew = Crew(
    agents=[agent],
    tasks=[task],
    prompt_file="custom_prompts.json",
    verbose=True
)
```
#### Option 3: Disable System Prompts for o1 Models
```python
from crewai import Agent

agent = Agent(
    role="Analyst",
    goal="Analyze data",
    backstory="Expert analyst",
    use_system_prompt=False  # Disables system prompt separation
)
```
### Debugging with Observability Tools
For production transparency, integrate with observability platforms to monitor all prompts and LLM interactions. This allows you to see exactly what prompts (including default instructions) are being sent to your LLMs.
See our [Observability documentation](/how-to/observability) for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
### Best Practices for Production
1. **Always inspect generated prompts** before deploying to production
2. **Use custom templates** when you need full control over prompt content
3. **Integrate observability tools** for ongoing prompt monitoring (see [Observability docs](/how-to/observability))
4. **Test with different LLMs** as default instructions may work differently across models
5. **Document your prompt customizations** for team transparency
<Tip>
The default instructions exist to ensure consistent agent behavior, but they can interfere with domain-specific requirements. Use the customization options above to maintain full control over your agent's behavior in production systems.
</Tip>
## Best Practices for Managing Prompt Files
When engaging in low-level prompt customization, follow these guidelines to keep things organized and maintainable:
@@ -44,7 +205,7 @@ One straightforward approach is to create a JSON file for the prompts you want t
1. Craft a JSON file with your updated prompt slices.
2. Reference that file via the `prompt_file` parameter in your Crew.
CrewAI then merges your customizations with the defaults, so you don't have to redefine every prompt. Here's how:
### Example: Basic Prompt Customization
@@ -93,14 +254,14 @@ With these few edits, you gain low-level control over how your agents communicat
## Optimizing for Specific Models
Different models thrive on differently structured prompts. Making deeper adjustments can significantly boost performance by aligning your prompts with a model's nuances.
### Example: Llama 3.3 Prompting Template
For instance, when dealing with Meta's Llama 3.3, deeper-level customization may reflect the recommended structure described at:
https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#prompt-template
Here's an example to highlight how you might fine-tune an Agent to leverage Llama 3.3 in code:
```python
from crewai import Agent, Crew, Task, Process
```
@@ -148,8 +309,8 @@ Through this deeper configuration, you can exercise comprehensive, low-level con
## Conclusion
Low-level prompt customization in CrewAI opens the door to super custom, complex use cases. By establishing well-organized prompt files (or direct inline templates), you can accommodate various models, languages, and specialized domains. This level of flexibility ensures you can craft precisely the AI behavior you need, all while knowing CrewAI still provides reliable defaults when you don't override them.
<Check>
You now have the foundation for advanced prompt customizations in CrewAI. Whether you're adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.
</Check>


@@ -1,646 +0,0 @@
---
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
---
## Custom LLM Implementations
CrewAI now supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to create your own LLM implementations that don't rely on litellm's authentication mechanism.
To create a custom LLM implementation, you need to:
1. Inherit from the `BaseLLM` abstract base class
2. Implement the required methods:
- `call()`: The main method to call the LLM with messages
- `supports_function_calling()`: Whether the LLM supports function calling
- `supports_stop_words()`: Whether the LLM supports stop words
- `get_context_window_size()`: The context window size of the LLM
## Example: Basic Custom LLM
```python
from crewai import BaseLLM
from typing import Any, Dict, List, Optional, Union

class CustomLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()  # Initialize the base class to set default attributes
        if not api_key or not isinstance(api_key, str):
            raise ValueError("Invalid API key: must be a non-empty string")
        if not endpoint or not isinstance(endpoint, str):
            raise ValueError("Invalid endpoint URL: must be a non-empty string")
        self.api_key = api_key
        self.endpoint = endpoint
        self.stop = []  # You can customize stop words if needed

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with the given messages.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.

        Returns:
            Either a text response from the LLM or the result of a tool function call.

        Raises:
            TimeoutError: If the LLM request times out.
            RuntimeError: If the LLM request fails for other reasons.
            ValueError: If the response format is invalid.
        """
        # Implement your own logic to call the LLM
        # For example, using requests:
        import requests

        try:
            headers = {
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            }

            # Convert string message to proper format if needed
            if isinstance(messages, str):
                messages = [{"role": "user", "content": messages}]

            data = {
                "messages": messages,
                "tools": tools
            }

            response = requests.post(
                self.endpoint,
                headers=headers,
                json=data,
                timeout=30  # Set a reasonable timeout
            )
            response.raise_for_status()  # Raise an exception for HTTP errors
            return response.json()["choices"][0]["message"]["content"]
        except requests.Timeout:
            raise TimeoutError("LLM request timed out")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
        except (KeyError, IndexError, ValueError) as e:
            raise ValueError(f"Invalid response format: {str(e)}")

    def supports_function_calling(self) -> bool:
        """Check if the LLM supports function calling.

        Returns:
            True if the LLM supports function calling, False otherwise.
        """
        # Return True if your LLM supports function calling
        return True

    def supports_stop_words(self) -> bool:
        """Check if the LLM supports stop words.

        Returns:
            True if the LLM supports stop words, False otherwise.
        """
        # Return True if your LLM supports stop words
        return True

    def get_context_window_size(self) -> int:
        """Get the context window size of the LLM.

        Returns:
            The context window size as an integer.
        """
        # Return the context window size of your LLM
        return 8192
```
## Error Handling Best Practices
When implementing custom LLMs, it's important to handle errors properly to ensure robustness and reliability. Here are some best practices:
### 1. Implement Try-Except Blocks for API Calls
Always wrap API calls in try-except blocks to handle different types of errors:
```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    try:
        # API call implementation
        response = requests.post(
            self.endpoint,
            headers=self.headers,
            json=self.prepare_payload(messages),
            timeout=30  # Set a reasonable timeout
        )
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError, ValueError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```
### 2. Implement Retry Logic for Transient Failures
For transient failures like network issues or rate limiting, implement retry logic with exponential backoff:
```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    import time

    max_retries = 3
    retry_delay = 1  # seconds

    for attempt in range(max_retries):
        try:
            response = requests.post(
                self.endpoint,
                headers=self.headers,
                json=self.prepare_payload(messages),
                timeout=30
            )
            response.raise_for_status()
            return response.json()["choices"][0]["message"]["content"]
        except (requests.Timeout, requests.ConnectionError) as e:
            if attempt < max_retries - 1:
                time.sleep(retry_delay * (2 ** attempt))  # Exponential backoff
                continue
            raise TimeoutError(f"LLM request failed after {max_retries} attempts: {str(e)}")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
```
### 3. Validate Input Parameters
Always validate input parameters to prevent runtime errors:
```python
def __init__(self, api_key: str, endpoint: str):
    super().__init__()
    if not api_key or not isinstance(api_key, str):
        raise ValueError("Invalid API key: must be a non-empty string")
    if not endpoint or not isinstance(endpoint, str):
        raise ValueError("Invalid endpoint URL: must be a non-empty string")
    self.api_key = api_key
    self.endpoint = endpoint
```
### 4. Handle Authentication Errors Gracefully
Provide clear error messages for authentication failures:
```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    try:
        response = requests.post(self.endpoint, headers=self.headers, json=data)
        if response.status_code == 401:
            raise ValueError("Authentication failed: Invalid API key or token")
        elif response.status_code == 403:
            raise ValueError("Authorization failed: Insufficient permissions")
        response.raise_for_status()
        # Process response
    except Exception as e:
        # Handle error
        raise
```
## Example: JWT-based Authentication
For services that use JWT-based authentication instead of API keys, you can implement a custom LLM like this:
```python
from crewai import BaseLLM, Agent, Task
from typing import Any, Dict, List, Optional, Union

class JWTAuthLLM(BaseLLM):
    def __init__(self, jwt_token: str, endpoint: str):
        super().__init__()  # Initialize the base class to set default attributes
        if not jwt_token or not isinstance(jwt_token, str):
            raise ValueError("Invalid JWT token: must be a non-empty string")
        if not endpoint or not isinstance(endpoint, str):
            raise ValueError("Invalid endpoint URL: must be a non-empty string")
        self.jwt_token = jwt_token
        self.endpoint = endpoint
        self.stop = []  # You can customize stop words if needed

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with JWT authentication.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.

        Returns:
            Either a text response from the LLM or the result of a tool function call.

        Raises:
            TimeoutError: If the LLM request times out.
            RuntimeError: If the LLM request fails for other reasons.
            ValueError: If the response format is invalid.
        """
        # Implement your own logic to call the LLM with JWT authentication
        import requests

        try:
            headers = {
                "Authorization": f"Bearer {self.jwt_token}",
                "Content-Type": "application/json"
            }

            # Convert string message to proper format if needed
            if isinstance(messages, str):
                messages = [{"role": "user", "content": messages}]

            data = {
                "messages": messages,
                "tools": tools
            }

            response = requests.post(
                self.endpoint,
                headers=headers,
                json=data,
                timeout=30  # Set a reasonable timeout
            )
            if response.status_code == 401:
                raise ValueError("Authentication failed: Invalid JWT token")
            elif response.status_code == 403:
                raise ValueError("Authorization failed: Insufficient permissions")
            response.raise_for_status()  # Raise an exception for HTTP errors
            return response.json()["choices"][0]["message"]["content"]
        except requests.Timeout:
            raise TimeoutError("LLM request timed out")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
        except (KeyError, IndexError, ValueError) as e:
            raise ValueError(f"Invalid response format: {str(e)}")

    def supports_function_calling(self) -> bool:
        """Check if the LLM supports function calling.

        Returns:
            True if the LLM supports function calling, False otherwise.
        """
        return True

    def supports_stop_words(self) -> bool:
        """Check if the LLM supports stop words.

        Returns:
            True if the LLM supports stop words, False otherwise.
        """
        return True

    def get_context_window_size(self) -> int:
        """Get the context window size of the LLM.

        Returns:
            The context window size as an integer.
        """
        return 8192
```
## Troubleshooting
Here are some common issues you might encounter when implementing custom LLMs and how to resolve them:
### 1. Authentication Failures
**Symptoms**: 401 Unauthorized or 403 Forbidden errors
**Solutions**:
- Verify that your API key or JWT token is valid and not expired
- Check that you're using the correct authentication header format
- Ensure that your token has the necessary permissions
### 2. Timeout Issues
**Symptoms**: Requests taking too long or timing out
**Solutions**:
- Implement timeout handling as shown in the examples
- Use retry logic with exponential backoff
- Consider using a more reliable network connection
### 3. Response Parsing Errors
**Symptoms**: KeyError, IndexError, or ValueError when processing responses
**Solutions**:
- Validate the response format before accessing nested fields
- Implement proper error handling for malformed responses
- Check the API documentation for the expected response format
### 4. Rate Limiting
**Symptoms**: 429 Too Many Requests errors
**Solutions**:
- Implement rate limiting in your custom LLM (a backoff sketch follows this list)
- Add exponential backoff for retries
- Consider using a token bucket algorithm for more precise rate control
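As a rough sketch of how 429 handling might look inside `call()` (assuming an OpenAI-style HTTP API as in the earlier examples; the helper name is illustrative, not part of `BaseLLM`):

```python
import time
import requests

def _post_with_backoff(self, payload, max_retries: int = 3):
    """Hypothetical helper: retry on HTTP 429 with exponential backoff."""
    for attempt in range(max_retries):
        response = requests.post(self.endpoint, headers=self.headers, json=payload, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor Retry-After when the server provides it, else back off exponentially
        wait_time = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait_time)
    raise RuntimeError(f"Rate limited after {max_retries} attempts")
```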
## Advanced Features
### Logging
Adding logging to your custom LLM can help with debugging and monitoring:
```python
import logging
from typing import Any, Dict, List, Optional, Union

class LoggingLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.logger = logging.getLogger("crewai.llm.custom")

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        self.logger.info(f"Calling LLM with {len(messages) if isinstance(messages, list) else 1} messages")
        try:
            # API call implementation
            response = self._make_api_call(messages, tools)
            self.logger.debug(f"LLM response received: {response[:100]}...")
            return response
        except Exception as e:
            self.logger.error(f"LLM call failed: {str(e)}")
            raise
```
### Rate Limiting
Implementing rate limiting can help avoid overwhelming the LLM API:
```python
import time
from typing import Any, Dict, List, Optional, Union

class RateLimitedLLM(BaseLLM):
    def __init__(
        self,
        api_key: str,
        endpoint: str,
        requests_per_minute: int = 60
    ):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.requests_per_minute = requests_per_minute
        self.request_times: List[float] = []

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        self._enforce_rate_limit()
        # Record this request time
        self.request_times.append(time.time())
        # Make the actual API call
        return self._make_api_call(messages, tools)

    def _enforce_rate_limit(self) -> None:
        """Enforce the rate limit by waiting if necessary."""
        now = time.time()
        # Remove request times older than 1 minute
        self.request_times = [t for t in self.request_times if now - t < 60]

        if len(self.request_times) >= self.requests_per_minute:
            # Calculate how long to wait
            oldest_request = min(self.request_times)
            wait_time = 60 - (now - oldest_request)
            if wait_time > 0:
                time.sleep(wait_time)
```
### Metrics Collection
Collecting metrics can help you monitor your LLM usage:
```python
import time
from typing import Any, Dict, List, Optional, Union

class MetricsCollectingLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.metrics: Dict[str, Any] = {
            "total_calls": 0,
            "total_tokens": 0,
            "errors": 0,
            "latency": []
        }

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        start_time = time.time()
        self.metrics["total_calls"] += 1

        try:
            response = self._make_api_call(messages, tools)
            # Estimate tokens (simplified)
            if isinstance(messages, str):
                token_estimate = len(messages) // 4
            else:
                token_estimate = sum(len(m.get("content", "")) // 4 for m in messages)
            self.metrics["total_tokens"] += token_estimate
            return response
        except Exception as e:
            self.metrics["errors"] += 1
            raise
        finally:
            latency = time.time() - start_time
            self.metrics["latency"].append(latency)

    def get_metrics(self) -> Dict[str, Any]:
        """Return the collected metrics."""
        avg_latency = sum(self.metrics["latency"]) / len(self.metrics["latency"]) if self.metrics["latency"] else 0
        return {
            **self.metrics,
            "avg_latency": avg_latency
        }
```
## Advanced Usage: Function Calling
If your LLM supports function calling, you can implement the function calling logic in your custom LLM:
```python
import json
from typing import Any, Dict, List, Optional, Union

def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    import requests

    try:
        headers = {
            "Authorization": f"Bearer {self.jwt_token}",
            "Content-Type": "application/json"
        }

        # Convert string message to proper format if needed
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]

        data = {
            "messages": messages,
            "tools": tools
        }

        response = requests.post(
            self.endpoint,
            headers=headers,
            json=data,
            timeout=30
        )
        response.raise_for_status()
        response_data = response.json()

        # Check if the LLM wants to call a function
        if response_data["choices"][0]["message"].get("tool_calls"):
            tool_calls = response_data["choices"][0]["message"]["tool_calls"]

            # Process each tool call
            for tool_call in tool_calls:
                function_name = tool_call["function"]["name"]
                function_args = json.loads(tool_call["function"]["arguments"])

                if available_functions and function_name in available_functions:
                    function_to_call = available_functions[function_name]
                    function_response = function_to_call(**function_args)

                    # Add the function response to the messages
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call["id"],
                        "name": function_name,
                        "content": str(function_response)
                    })

            # Call the LLM again with the updated messages
            return self.call(messages, tools, callbacks, available_functions)

        # Return the text response if no function call
        return response_data["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError, ValueError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```
## Using Your Custom LLM with CrewAI
Once you've implemented your custom LLM, you can use it with CrewAI agents and crews:
```python
from crewai import Agent, Task, Crew
from typing import Dict, Any

# Create your custom LLM instance
jwt_llm = JWTAuthLLM(
    jwt_token="your.jwt.token",
    endpoint="https://your-llm-endpoint.com/v1/chat/completions"
)

# Use it with an agent
agent = Agent(
    role="Research Assistant",
    goal="Find information on a topic",
    backstory="You are a research assistant tasked with finding information.",
    llm=jwt_llm,
)

# Create a task for the agent
task = Task(
    description="Research the benefits of exercise",
    agent=agent,
    expected_output="A summary of the benefits of exercise",
)

# Execute the task
result = agent.execute_task(task)
print(result)

# Or use it with a crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    manager_llm=jwt_llm,  # Use your custom LLM for the manager
)

# Run the crew
result = crew.kickoff()
print(result)
```
## Implementing Your Own Authentication Mechanism
The `BaseLLM` class allows you to implement any authentication mechanism you need, not just JWT or API keys. You can use:
- OAuth tokens
- Client certificates
- Custom headers
- Session-based authentication
- Any other authentication method required by your LLM provider
Simply implement the appropriate authentication logic in your custom LLM class.
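For instance, a custom-header or session-token scheme can follow the same shape as the JWT example above. This is only a sketch under that pattern; the `X-Session-Token` header name is an assumption for illustration:

```python
from crewai import BaseLLM
from typing import Any, Dict, List, Optional, Union
import requests

class SessionAuthLLM(BaseLLM):
    """Sketch: authenticate with a custom session header instead of a bearer token."""

    def __init__(self, session_token: str, endpoint: str):
        super().__init__()
        self.session_token = session_token
        self.endpoint = endpoint
        self.stop = []

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
        headers = {
            "X-Session-Token": self.session_token,  # Hypothetical header name
            "Content-Type": "application/json",
        }
        response = requests.post(
            self.endpoint,
            headers=headers,
            json={"messages": messages, "tools": tools},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    def supports_function_calling(self) -> bool:
        return False

    def supports_stop_words(self) -> bool:
        return True

    def get_context_window_size(self) -> int:
        return 8192
```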

docs/learn/custom-llm.mdx (new file, +350 lines)

@@ -0,0 +1,350 @@
---
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
---
## Overview
CrewAI supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to integrate any LLM provider that doesn't have built-in support in LiteLLM, or implement custom authentication mechanisms.
## Quick Start
Here's a minimal custom LLM implementation:
```python
from crewai import BaseLLM
from typing import Any, Dict, List, Optional, Union
import requests

class CustomLLM(BaseLLM):
    def __init__(self, model: str, api_key: str, endpoint: str, temperature: Optional[float] = None):
        # IMPORTANT: Call super().__init__() with required parameters
        super().__init__(model=model, temperature=temperature)
        self.api_key = api_key
        self.endpoint = endpoint

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with the given messages."""
        # Convert string to message format if needed
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]

        # Prepare request
        payload = {
            "model": self.model,
            "messages": messages,
            "temperature": self.temperature,
        }

        # Add tools if provided and supported
        if tools and self.supports_function_calling():
            payload["tools"] = tools

        # Make API call
        response = requests.post(
            self.endpoint,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json=payload,
            timeout=30
        )
        response.raise_for_status()

        result = response.json()
        return result["choices"][0]["message"]["content"]

    def supports_function_calling(self) -> bool:
        """Override if your LLM supports function calling."""
        return True  # Change to False if your LLM doesn't support tools

    def get_context_window_size(self) -> int:
        """Return the context window size of your LLM."""
        return 8192  # Adjust based on your model's actual context window
```
## Using Your Custom LLM
```python
from crewai import Agent, Task, Crew

# Assuming you have the CustomLLM class defined above
# Create your custom LLM
custom_llm = CustomLLM(
    model="my-custom-model",
    api_key="your-api-key",
    endpoint="https://api.example.com/v1/chat/completions",
    temperature=0.7
)

# Use with an agent
agent = Agent(
    role="Research Assistant",
    goal="Find and analyze information",
    backstory="You are a research assistant.",
    llm=custom_llm
)

# Create and execute tasks
task = Task(
    description="Research the latest developments in AI",
    expected_output="A comprehensive summary",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```
## Required Methods
### Constructor: `__init__()`
**Critical**: You must call `super().__init__(model, temperature)` with the required parameters:
```python
def __init__(self, model: str, api_key: str, temperature: Optional[float] = None):
    # REQUIRED: Call parent constructor with model and temperature
    super().__init__(model=model, temperature=temperature)

    # Your custom initialization
    self.api_key = api_key
```
### Abstract Method: `call()`
The `call()` method is the heart of your LLM implementation; its full signature is sketched after this list. It must:
- Accept messages (string or list of dicts with 'role' and 'content')
- Return a string response
- Handle tools and function calling if supported
- Raise appropriate exceptions for errors
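For reference, this is the signature used throughout this guide, shown here as a bare skeleton matching the Quick Start example above:

```python
from typing import Any, Dict, List, Optional, Union

def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    """Return a text response, or a tool result when function calling is used."""
    ...
```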
### Optional Methods
```python
def supports_function_calling(self) -> bool:
    """Return True if your LLM supports function calling."""
    return True  # Default is True

def supports_stop_words(self) -> bool:
    """Return True if your LLM supports stop sequences."""
    return True  # Default is True

def get_context_window_size(self) -> int:
    """Return the context window size."""
    return 4096  # Default is 4096
```
## Common Patterns
### Error Handling
```python
import requests

def call(self, messages, tools=None, callbacks=None, available_functions=None):
    # Build the payload as in the Quick Start example
    payload = {"model": self.model, "messages": messages}

    try:
        response = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json=payload,
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```
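For transient failures such as rate limits, you may also want simple retries with exponential backoff. Here is one possible sketch using only the standard library; the retry count, backoff schedule, and the `_post_with_retries` helper name are our own choices, not CrewAI requirements:

```python
import time

import requests

def _post_with_retries(self, payload: dict, max_retries: int = 3) -> dict:
    """POST to the endpoint, retrying rate limits and timeouts with backoff."""
    for attempt in range(max_retries):
        try:
            response = requests.post(
                self.endpoint,
                headers={"Authorization": f"Bearer {self.api_key}"},
                json=payload,
                timeout=30,
            )
            # Back off and retry on rate limiting
            if response.status_code == 429 and attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            response.raise_for_status()
            return response.json()
        except requests.Timeout:
            if attempt == max_retries - 1:
                raise TimeoutError("LLM request timed out after retries")
            time.sleep(2 ** attempt)
    raise RuntimeError("Retries exhausted")  # Defensive; the loop returns or raises
```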
### Custom Authentication
```python
from typing import Optional

from crewai import BaseLLM

class CustomAuthLLM(BaseLLM):
    def __init__(self, model: str, auth_token: str, endpoint: str, temperature: Optional[float] = None):
        super().__init__(model=model, temperature=temperature)
        self.auth_token = auth_token
        self.endpoint = endpoint

    def call(self, messages, tools=None, callbacks=None, available_functions=None):
        headers = {
            "Authorization": f"Custom {self.auth_token}",  # Custom auth scheme
            "Content-Type": "application/json",
        }
        # Rest of the implementation...
```
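Another common variation is short-lived tokens that must be refreshed periodically. A sketch of lazy refresh follows; the `auth_url`, the credential field names, and the cached attributes are placeholders for whatever your auth server actually provides, and would be initialized in `__init__`:

```python
import time

import requests

def _get_token(self) -> str:
    """Return a cached token, refreshing it shortly before it expires."""
    # Assumes __init__ set self._token = None, self._token_expiry = 0.0,
    # plus self.auth_url, self.client_id, and self.client_secret
    if self._token is None or time.time() > self._token_expiry - 60:
        resp = requests.post(
            self.auth_url,
            data={"client_id": self.client_id, "client_secret": self.client_secret},
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        self._token = data["access_token"]
        self._token_expiry = time.time() + data["expires_in"]
    return self._token
```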
### Stop Words Support
CrewAI automatically adds `"\nObservation:"` as a stop word to control agent behavior. If your LLM supports stop words:
```python
def call(self, messages, tools=None, callbacks=None, available_functions=None):
    payload = {
        "model": self.model,
        "messages": messages,
        "stop": self.stop,  # Include stop words in the API call
    }
    # Make the API call...

def supports_stop_words(self) -> bool:
    return True  # Your LLM supports stop sequences natively
```
If your LLM doesn't support stop words natively:
```python
def call(self, messages, tools=None, callbacks=None, available_functions=None):
    response = self._make_api_call(messages, tools)
    content = response["choices"][0]["message"]["content"]

    # Manually truncate at the first stop word found
    if self.stop:
        for stop_word in self.stop:
            if stop_word in content:
                content = content.split(stop_word)[0]
                break
    return content

def supports_stop_words(self) -> bool:
    return False  # Tell CrewAI we handle stop words manually
```
## Function Calling
If your LLM supports function calling, implement the complete flow:
```python
import json

def call(self, messages, tools=None, callbacks=None, available_functions=None):
    # Convert a plain string to the chat message format
    if isinstance(messages, str):
        messages = [{"role": "user", "content": messages}]

    # Make the API call
    response = self._make_api_call(messages, tools)
    message = response["choices"][0]["message"]

    # Check for function calls
    if "tool_calls" in message and available_functions:
        return self._handle_function_calls(
            message["tool_calls"], messages, tools, available_functions
        )

    return message["content"]

def _handle_function_calls(self, tool_calls, messages, tools, available_functions):
    """Handle function calling with the proper message flow."""
    for tool_call in tool_calls:
        function_name = tool_call["function"]["name"]
        if function_name in available_functions:
            # Parse the arguments and execute the function
            function_args = json.loads(tool_call["function"]["arguments"])
            function_result = available_functions[function_name](**function_args)

            # Add the function call and its result to the message history
            messages.append({
                "role": "assistant",
                "content": None,
                "tool_calls": [tool_call],
            })
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call["id"],
                "name": function_name,
                "content": str(function_result),
            })

            # Call the LLM again with the updated context
            return self.call(messages, tools, None, available_functions)

    return "Function call failed"
```
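You can exercise this path directly, outside a crew, by supplying a tool schema and an `available_functions` map yourself, reusing the `custom_llm` instance from earlier. The tool definition below follows the OpenAI-style schema; your provider's format may differ, and `get_weather` is a made-up example:

```python
def get_weather(city: str) -> str:
    """A toy function the LLM may choose to call."""
    return f"It is sunny in {city}."

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

result = custom_llm.call(
    "What's the weather in Paris?",
    tools=tools,
    available_functions={"get_weather": get_weather},
)
print(result)
```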
## Troubleshooting
### Common Issues
**Constructor Errors**
```python
# ❌ Wrong - missing required parameters
def __init__(self, api_key: str):
    super().__init__()

# ✅ Correct
def __init__(self, model: str, api_key: str, temperature: Optional[float] = None):
    super().__init__(model=model, temperature=temperature)
```
**Function Calling Not Working**
- Ensure `supports_function_calling()` returns `True`
- Check that you handle `tool_calls` in the response
- Verify `available_functions` parameter is used correctly
**Authentication Failures**
- Verify API key format and permissions
- Check authentication header format
- Ensure endpoint URLs are correct
**Response Parsing Errors**
- Validate response structure before accessing nested fields
- Handle cases where content might be None
- Add proper error handling for malformed responses (see the sketch below)
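A defensive extraction helper along these lines can catch all three parsing issues at once. This sketch assumes an OpenAI-style response shape; adjust the keys to match your provider:

```python
def _extract_content(self, result: dict) -> str:
    """Pull the message content out of the response, defensively."""
    try:
        content = result["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError) as e:
        raise ValueError(f"Unexpected response shape: {result!r}") from e
    if content is None:
        # Some providers return None content alongside tool calls
        return ""
    return content
```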
## Testing Your Custom LLM
```python
from crewai import Agent, Task, Crew

def test_custom_llm():
    llm = CustomLLM(
        model="test-model",
        api_key="test-key",
        endpoint="https://api.test.com",
    )

    # Test a basic call
    result = llm.call("Hello, world!")
    assert isinstance(result, str)
    assert len(result) > 0

    # Test with a CrewAI agent
    agent = Agent(
        role="Test Agent",
        goal="Test custom LLM",
        backstory="A test agent.",
        llm=llm,
    )
    task = Task(
        description="Say hello",
        expected_output="A greeting",
        agent=agent,
    )
    crew = Crew(agents=[agent], tasks=[task])
    result = crew.kickoff()
    assert "hello" in result.raw.lower()
```
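The test above assumes a reachable endpoint. For offline unit tests you can stub the HTTP layer instead, for example with `unittest.mock` (a sketch, reusing the `CustomLLM` class from the Quick Start):

```python
from unittest.mock import patch

def test_custom_llm_offline():
    llm = CustomLLM(
        model="test-model",
        api_key="test-key",
        endpoint="https://api.test.com",
    )
    fake_response = {"choices": [{"message": {"content": "Hello from the mock!"}}]}

    with patch("requests.post") as mock_post:
        # raise_for_status() on the mock is a no-op; json() returns our payload
        mock_post.return_value.json.return_value = fake_response
        assert llm.call("Hello, world!") == "Hello from the mock!"
```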
This guide covers the essentials of implementing custom LLMs in CrewAI.

docs/learn/overview.mdx (new file)
@@ -0,0 +1,158 @@
---
title: "Overview"
description: "Learn how to build, customize, and optimize your CrewAI applications with comprehensive guides and tutorials"
icon: "face-smile"
---
## Learn CrewAI
This section provides comprehensive guides and tutorials to help you master CrewAI, from basic concepts to advanced techniques. Whether you're just getting started or looking to optimize your existing implementations, these resources will guide you through every aspect of building powerful AI agent workflows.
## Getting Started Guides
### Core Concepts
<CardGroup cols={2}>
<Card title="Sequential Process" icon="list-ol" href="/learn/sequential-process">
Learn how to execute tasks in a sequential order for structured workflows.
</Card>
<Card title="Hierarchical Process" icon="sitemap" href="/learn/hierarchical-process">
Implement hierarchical task execution with manager agents overseeing workflows.
</Card>
<Card title="Conditional Tasks" icon="code-branch" href="/learn/conditional-tasks">
Create dynamic workflows with conditional task execution based on outcomes.
</Card>
<Card title="Async Kickoff" icon="bolt" href="/learn/kickoff-async">
Execute crews asynchronously for improved performance and concurrency.
</Card>
</CardGroup>
### Agent Development
<CardGroup cols={2}>
<Card title="Customizing Agents" icon="user-gear" href="/learn/customizing-agents">
Learn how to customize agent behavior, roles, and capabilities.
</Card>
<Card title="Coding Agents" icon="code" href="/learn/coding-agents">
Build agents that can write, execute, and debug code automatically.
</Card>
<Card title="Multimodal Agents" icon="images" href="/learn/multimodal-agents">
Create agents that can process text, images, and other media types.
</Card>
<Card title="Custom Manager Agent" icon="user-tie" href="/learn/custom-manager-agent">
Implement custom manager agents for complex hierarchical workflows.
</Card>
</CardGroup>
## Advanced Features
### Workflow Control
<CardGroup cols={2}>
<Card title="Human in the Loop" icon="user-check" href="/learn/human-in-the-loop">
Integrate human oversight and intervention into agent workflows.
</Card>
<Card title="Human Input on Execution" icon="hand-paper" href="/learn/human-input-on-execution">
Allow human input during task execution for dynamic decision making.
</Card>
<Card title="Replay Tasks" icon="rotate-left" href="/learn/replay-tasks-from-latest-crew-kickoff">
Replay and resume tasks from previous crew executions.
</Card>
<Card title="Kickoff for Each" icon="repeat" href="/learn/kickoff-for-each">
Execute crews multiple times with different inputs efficiently.
</Card>
</CardGroup>
### Customization & Integration
<CardGroup cols={2}>
<Card title="Custom LLM" icon="brain" href="/learn/custom-llm">
Integrate custom language models and providers with CrewAI.
</Card>
<Card title="LLM Connections" icon="link" href="/learn/llm-connections">
Configure and manage connections to various LLM providers.
</Card>
<Card title="Create Custom Tools" icon="wrench" href="/learn/create-custom-tools">
Build custom tools to extend agent capabilities.
</Card>
<Card title="Using Annotations" icon="at" href="/learn/using-annotations">
Use Python annotations for cleaner, more maintainable code.
</Card>
</CardGroup>
## Specialized Applications
### Content & Media
<CardGroup cols={2}>
<Card title="DALL-E Image Generation" icon="image" href="/learn/dalle-image-generation">
Generate images using DALL-E integration with your agents.
</Card>
<Card title="Bring Your Own Agent" icon="user-plus" href="/learn/bring-your-own-agent">
Integrate existing agents and models into CrewAI workflows.
</Card>
</CardGroup>
### Tool Management
<CardGroup cols={2}>
<Card title="Force Tool Output as Result" icon="hammer" href="/learn/force-tool-output-as-result">
Configure tools to return their output directly as task results.
</Card>
</CardGroup>
## Learning Path Recommendations
### For Beginners
1. Start with **Sequential Process** to understand basic workflow execution
2. Learn **Customizing Agents** to create effective agent configurations
3. Explore **Create Custom Tools** to extend functionality
4. Try **Human in the Loop** for interactive workflows
### For Intermediate Users
1. Master **Hierarchical Process** for complex multi-agent systems
2. Implement **Conditional Tasks** for dynamic workflows
3. Use **Async Kickoff** for performance optimization
4. Integrate **Custom LLM** for specialized models
### For Advanced Users
1. Build **Multimodal Agents** for complex media processing
2. Create **Custom Manager Agents** for sophisticated orchestration
3. Implement **Bring Your Own Agent** for hybrid systems
4. Use **Replay Tasks** for robust error recovery
## Best Practices
### Development
- **Start Simple**: Begin with basic sequential workflows before adding complexity
- **Test Incrementally**: Test each component before integrating into larger systems
- **Use Annotations**: Leverage Python annotations for cleaner, more maintainable code
- **Custom Tools**: Build reusable tools that can be shared across different agents
### Production
- **Error Handling**: Implement robust error handling and recovery mechanisms
- **Performance**: Use async execution and optimize LLM calls for better performance
- **Monitoring**: Integrate observability tools to track agent performance
- **Human Oversight**: Include human checkpoints for critical decisions
### Optimization
- **Resource Management**: Monitor and optimize token usage and API costs
- **Workflow Design**: Design workflows that minimize unnecessary LLM calls
- **Tool Efficiency**: Create efficient tools that provide maximum value with minimal overhead
- **Iterative Improvement**: Use feedback and metrics to continuously improve agent performance
## Getting Help
- **Documentation**: Each guide includes detailed examples and explanations
- **Community**: Join the [CrewAI Forum](https://community.crewai.com) for discussions and support
- **Examples**: Check the Examples section for complete working implementations
- **Support**: Contact [support@crewai.com](mailto:support@crewai.com) for technical assistance
Start with the guides that match your current needs and gradually explore more advanced topics as you become comfortable with the fundamentals.

@@ -0,0 +1,118 @@
---
title: "Overview"
description: "Monitor, evaluate, and optimize your CrewAI agents with comprehensive observability tools"
icon: "face-smile"
---
## Observability for CrewAI
Observability is crucial for understanding how your CrewAI agents perform, identifying bottlenecks, and ensuring reliable operation in production environments. This section covers various tools and platforms that provide monitoring, evaluation, and optimization capabilities for your agent workflows.
## Why Observability Matters
- **Performance Monitoring**: Track agent execution times, token usage, and resource consumption
- **Quality Assurance**: Evaluate output quality and consistency across different scenarios
- **Debugging**: Identify and resolve issues in agent behavior and task execution
- **Cost Management**: Monitor LLM API usage and associated costs
- **Continuous Improvement**: Gather insights to optimize agent performance over time
## Available Observability Tools
### Monitoring & Tracing Platforms
<CardGroup cols={2}>
<Card title="AgentOps" icon="paperclip" href="/observability/agentops">
Session replays, metrics, and monitoring for agent development and production.
</Card>
<Card title="OpenLIT" icon="magnifying-glass-chart" href="/observability/openlit">
OpenTelemetry-native monitoring with cost tracking and performance analytics.
</Card>
<Card title="MLflow" icon="bars-staggered" href="/observability/mlflow">
Machine learning lifecycle management with tracing and evaluation capabilities.
</Card>
<Card title="Langfuse" icon="link" href="/observability/langfuse">
LLM engineering platform with detailed tracing and analytics.
</Card>
<Card title="Langtrace" icon="chart-line" href="/observability/langtrace">
Open-source observability for LLMs and agent frameworks.
</Card>
<Card title="Arize Phoenix" icon="meteor" href="/observability/arize-phoenix">
AI observability platform for monitoring and troubleshooting.
</Card>
<Card title="Portkey" icon="key" href="/observability/portkey">
AI gateway with comprehensive monitoring and reliability features.
</Card>
<Card title="Opik" icon="meteor" href="/observability/opik">
Debug, evaluate, and monitor LLM applications with comprehensive tracing.
</Card>
<Card title="Weave" icon="network-wired" href="/observability/weave">
Weights & Biases platform for tracking and evaluating AI applications.
</Card>
</CardGroup>
### Evaluation & Quality Assurance
<CardGroup cols={2}>
<Card title="Patronus AI" icon="shield-check" href="/observability/patronus-evaluation">
Comprehensive evaluation platform for LLM outputs and agent behaviors.
</Card>
</CardGroup>
## Key Observability Metrics
### Performance Metrics
- **Execution Time**: How long agents take to complete tasks
- **Token Usage**: Input/output tokens consumed by LLM calls
- **API Latency**: Response times from external services
- **Success Rate**: Percentage of successfully completed tasks
### Quality Metrics
- **Output Accuracy**: Correctness of agent responses
- **Consistency**: Reliability across similar inputs
- **Relevance**: How well outputs match expected results
- **Safety**: Compliance with content policies and guidelines
### Cost Metrics
- **API Costs**: Expenses from LLM provider usage
- **Resource Utilization**: Compute and memory consumption
- **Cost per Task**: Economic efficiency of agent operations
- **Budget Tracking**: Monitoring against spending limits
## Getting Started
1. **Choose Your Tools**: Select observability platforms that match your needs
2. **Instrument Your Code**: Add monitoring to your CrewAI applications
3. **Set Up Dashboards**: Configure visualizations for key metrics
4. **Define Alerts**: Create notifications for important events
5. **Establish Baselines**: Measure initial performance for comparison
6. **Iterate and Improve**: Use insights to optimize your agents
## Best Practices
### Development Phase
- Use detailed tracing to understand agent behavior
- Implement evaluation metrics early in development
- Monitor resource usage during testing
- Set up automated quality checks
### Production Phase
- Implement comprehensive monitoring and alerting
- Track performance trends over time
- Monitor for anomalies and degradation
- Maintain cost visibility and control
### Continuous Improvement
- Regular performance reviews and optimization
- A/B testing of different agent configurations
- Feedback loops for quality improvement
- Documentation of lessons learned
Choose the observability tools that best fit your use case, infrastructure, and monitoring requirements to ensure your CrewAI agents perform reliably and efficiently.


@@ -1,16 +1,26 @@
 ---
-title: Patronus Evaluation Tools
-description: The Patronus evaluation tools enable CrewAI agents to evaluate and score model inputs and outputs using the Patronus AI platform.
-icon: check
+title: Patronus AI Evaluation
+description: Monitor and evaluate CrewAI agent performance using Patronus AI's comprehensive evaluation platform for LLM outputs and agent behaviors.
+icon: shield-check
 ---
-# `Patronus Evaluation Tools`
+# Patronus AI Evaluation
-## Description
+## Overview
-The [Patronus evaluation tools](https://patronus.ai) are designed to enable CrewAI agents to evaluate and score model inputs and outputs using the Patronus AI platform. These tools provide different levels of control over the evaluation process, from allowing agents to select the most appropriate evaluator and criteria to using predefined criteria or custom local evaluators.
+[Patronus AI](https://patronus.ai) provides comprehensive evaluation and monitoring capabilities for CrewAI agents, enabling you to assess model outputs, agent behaviors, and overall system performance. This integration allows you to implement continuous evaluation workflows that help maintain quality and reliability in production environments.
-There are three main Patronus evaluation tools:
+## Key Features
+- **Automated Evaluation**: Real-time assessment of agent outputs and behaviors
+- **Custom Criteria**: Define specific evaluation criteria tailored to your use cases
+- **Performance Monitoring**: Track agent performance metrics over time
+- **Quality Assurance**: Ensure consistent output quality across different scenarios
+- **Safety & Compliance**: Monitor for potential issues and policy violations
+## Evaluation Tools
+Patronus provides three main evaluation tools for different use cases:
 1. **PatronusEvalTool**: Allows agents to select the most appropriate evaluator and criteria for the evaluation task.
 2. **PatronusPredefinedCriteriaEvalTool**: Uses predefined evaluator and criteria specified by the user.


@@ -37,9 +37,7 @@ These tools integrate with AI and machine learning services to enhance your agents
     Execute Python code and perform data analysis.
   </Card>
-  <Card title="Patronus Tools" icon="shield" href="/tools/ai-ml/patronustools">
-    AI safety and content moderation capabilities.
-  </Card>
 </CardGroup>
 ## **Common Use Cases**