---
title: Portkey Integration
description: How to use Portkey with CrewAI
icon: key
---

<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />

[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.

Portkey adds four core production capabilities to any CrewAI agent:

1. Routing to **250+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing plus cost and performance analytics
4. Real-time guardrails to enforce behavior

## Getting Started

<Steps>
<Step title="Install CrewAI and Portkey">
```bash
pip install -qU crewai portkey-ai
```
</Step>

<Step title="Configure the LLM Client">
To build CrewAI agents with Portkey, you'll need two keys:

- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM provider API keys in one place, stored in Portkey's vault

```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # Not used; the Virtual Key supplies the provider credentials
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",  # Your Virtual Key from Portkey
    )
)
```
</Step>

<Step title="Create and Run Your First Agent">
```python
from crewai import Agent, Task, Crew

# Define your agent with a role and goal
coder = Agent(
    role='Software developer',
    goal='Write clear, concise code on demand',
    backstory='An expert coder with a keen eye for software trends.',
    llm=gpt_llm
)

# Create a task for the agent
task1 = Task(
    description="Write the HTML for a simple website with the heading: Hello World! Portkey is working!",
    expected_output="Clear and concise HTML code",
    agent=coder
)

# Instantiate the crew
crew = Crew(
    agents=[coder],
    tasks=[task1],
)

result = crew.kickoff()
print(result)
```
</Step>
</Steps>

## Key Features

| Feature | Description |
|:--------|:------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |

## Production Features with Portkey Configs

All of the features below are enabled through Portkey's Config system, which lets you define routing strategies as simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard; each Config has a unique ID for easy reference.

<Frame>
  <img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
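As a minimal sketch of how a Config reaches the gateway: it travels in the `x-portkey-config` request header, either inline as JSON or as a saved Config's ID string. The values below are placeholders, and `createHeaders(config=...)` is assumed to build this header for you:

```python
import json

# Inline Config: retry each LLM call up to 3 times on failure.
config = {"retry": {"attempts": 3}}

# Headers read by the Portkey gateway; a saved Config's ID string
# can be sent in place of the inline JSON.
headers = {
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-config": json.dumps(config),
}
```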

### 1. Use 250+ LLMs

Access LLMs from Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes, and switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)

Easily switch between different LLM providers:

```python
# Anthropic configuration
anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # No provider needed when using Virtual Keys
        trace_id="anthropic_agent"
    )
)

# Azure OpenAI configuration
azure_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY",  # No provider needed when using Virtual Keys
        trace_id="azure_agent"
    )
)
```

### 2. Caching

Improve response times and reduce costs with two caching modes:

- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar

[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)

```py
config = {
    "cache": {
        "mode": "semantic"  # or "simple" for exact matching
    }
}
```
### 3. Production Reliability
|
|
Portkey provides comprehensive reliability features:
|
|
- **Automatic Retries**: Handle temporary failures gracefully
|
|
- **Request Timeouts**: Prevent hanging operations
|
|
- **Conditional Routing**: Route requests based on specific conditions
|
|
- **Fallbacks**: Set up automatic provider failovers
|
|
- **Load Balancing**: Distribute requests efficiently
|
|
|
|
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
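A fallback strategy, for instance, can be expressed as a Config object (a sketch following Portkey's Config schema; the virtual keys are placeholders for keys stored in your Portkey vault):

```python
# Try the primary provider first; fall back to the second on failure.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},
    ],
}
```

Like the caching Config above, this can be saved in the Dashboard and referenced by its ID, or passed inline with your LLM calls.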

### 4. Metrics

Agent runs are complex. Portkey automatically logs **40+ metrics** for your AI agents, including cost, tokens used, and latency. Whether you need a broad overview or granular insight into your agent runs, Portkey's customizable filters surface the metrics you need:

- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
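To slice these metrics per run, requests can be annotated with a trace ID and custom metadata (a sketch using Portkey's `x-portkey-*` headers; the values are placeholders, and `createHeaders(trace_id=..., metadata=...)` is assumed to produce the same headers):

```python
import json

# Per-request annotations for filtering agent runs in the Portkey dashboard.
headers = {
    "x-portkey-trace-id": "coder_agent_run",  # groups related LLM calls into one trace
    "x-portkey-metadata": json.dumps({"environment": "staging", "team": "docs-demo"}),
}
```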

<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
|
|
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
|
|
|
|
|
|
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
|
|
|
|
<details>
|
|
<summary><b>Traces</b></summary>
|
|
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
|
|
</details>
|
|
|
|
<details>
|
|
<summary><b>Logs</b></summary>
|
|
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
|
|
</details>
|
|
|
|
### 6. Enterprise Security Features
|
|
- Set budget limit and rate limts per Virtual Key (disposable API keys)
|
|
- Implement role-based access control
|
|
- Track system changes with audit logs
|
|
- Configure data retention policies
|
|
|
|
|
|
|
|
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).

## Resources

- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)