---
title: Langtrace Integration
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace, an external observability tool.
icon: chart-line
---

# Langtrace Overview

Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases.
While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents.
This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.

## Setup Instructions

<Steps>
<Step title="Sign up for Langtrace">
Sign up by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
</Step>
<Step title="Create a project">
Set the project type to `CrewAI` and generate an API key.
</Step>
<Step title="Install Langtrace in your CrewAI project">
Use the following command:

```bash
pip install langtrace-python-sdk
```
</Step>
<Step title="Import Langtrace">
Import and initialize Langtrace at the beginning of your script, before any CrewAI imports:

```python
from langtrace_python_sdk import langtrace

langtrace.init(api_key='<LANGTRACE_API_KEY>')

# Now import CrewAI modules
from crewai import Agent, Task, Crew
```
</Step>
</Steps>
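Putting the steps together, the following is a minimal sketch of a traced CrewAI run. It assumes the Langtrace API key is stored in a `LANGTRACE_API_KEY` environment variable and that an LLM provider key (for example `OPENAI_API_KEY`) is already configured for CrewAI; the agent, task, and topic below are placeholders.

```python
import os

# Initialize Langtrace first so the CrewAI and LLM calls below are instrumented.
from langtrace_python_sdk import langtrace

langtrace.init(api_key=os.environ["LANGTRACE_API_KEY"])

# Import CrewAI only after Langtrace has been initialized.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Summarize recent developments on a given topic",
    backstory="A concise analyst who always cites sources.",
)

research_task = Task(
    description="Write a short summary of recent developments in {topic}.",
    expected_output="A three-bullet summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff(inputs={"topic": "LLM observability"})
print(result)
```

With Langtrace initialized before the CrewAI imports, the LLM calls made during `kickoff()` should show up in your Langtrace project as a trace you can inspect for cost and latency.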
### Features and Their Application to CrewAI

1. **LLM Token and Cost Tracking**

   - Monitor the token usage and associated costs for each CrewAI agent interaction (a local way to cross-check these numbers is sketched after this list).

2. **Trace Graph for Execution Steps**

   - Visualize the execution flow of your CrewAI tasks, including latency and logs.
   - Useful for identifying bottlenecks in your agent workflows.

3. **Dataset Curation with Manual Annotation**

   - Create datasets from your CrewAI task outputs for future training or evaluation.

4. **Prompt Versioning and Management**

   - Keep track of different versions of prompts used in your CrewAI agents.
   - Useful for A/B testing and optimizing agent performance.

5. **Prompt Playground with Model Comparisons**

   - Test and compare different prompts and models for your CrewAI agents before deployment.

6. **Testing and Evaluations**

   - Set up automated tests for your CrewAI agents and tasks.
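For the token and cost tracking feature above, it can be useful to cross-check the figures shown in Langtrace against CrewAI's own aggregate counters. This is a minimal sketch that continues from the setup example earlier on this page and assumes a `crew` that has already completed `kickoff()`; treat the `usage_metrics` field names as an assumption to verify against your installed CrewAI version.

```python
# Continuing from the earlier example: after kickoff() completes, CrewAI exposes
# aggregate token counters on the crew object, which can be compared with the
# per-trace token and cost figures shown in Langtrace.
# Assumption: these field names match the UsageMetrics model of your CrewAI version.
metrics = crew.usage_metrics
print("Prompt tokens:    ", metrics.prompt_tokens)
print("Completion tokens:", metrics.completion_tokens)
print("Total tokens:     ", metrics.total_tokens)
```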