---
title: CrewAI Agent Monitoring with Langtrace
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace, an external observability tool.
---
## Langtrace Overview
Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
## Setup Instructions
- Sign up for Langtrace by visiting https://langtrace.ai/signup.
- Create a project and generate an API key.
- Install Langtrace in your CrewAI project using the following command:

  ```bash
  # Install the SDK
  pip install langtrace-python-sdk
  ```
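If you prefer not to hard-code the key, the SDK can also pick it up from the environment. This is an assumption based on common SDK conventions; confirm against the Langtrace documentation for your version:

```bash
# Assumption: langtrace.init() falls back to this variable when no
# api_key argument is passed. Keeps the key out of source control.
export LANGTRACE_API_KEY=<LANGTRACE_API_KEY>
```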
## Using Langtrace with CrewAI
To integrate Langtrace with your CrewAI project, follow these steps:
- Import and initialize Langtrace at the beginning of your script, before any CrewAI imports:
  ```python
  from langtrace_python_sdk import langtrace

  langtrace.init(api_key='<LANGTRACE_API_KEY>')

  # Now import CrewAI modules
  from crewai import Agent, Task, Crew
  ```
- Create your CrewAI agents and tasks as usual.
- Use Langtrace's tracking functions to monitor your CrewAI operations; for example, wrap the crew run in a trace (a complete sketch follows this list):

  ```python
  with langtrace.trace("CrewAI Task Execution"):
      result = crew.kickoff()
  ```
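For orientation, here is a minimal end-to-end sketch combining the steps above. The agent's role, goal, backstory, and the task description are illustrative placeholders rather than values from this guide; adapt them to your own crew.

```python
from langtrace_python_sdk import langtrace

# Initialize Langtrace before any CrewAI imports
langtrace.init(api_key='<LANGTRACE_API_KEY>')

from crewai import Agent, Task, Crew

# Illustrative agent: role, goal, and backstory are placeholders
researcher = Agent(
    role="Researcher",
    goal="Summarize recent developments in LLM observability",
    backstory="An analyst who writes concise research briefs.",
)

# Illustrative task tied to the agent above
research_task = Task(
    description="Write a short brief on LLM observability trends.",
    expected_output="A three-paragraph summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task])

# Wrap the run so cost, latency, and token usage land in one trace
with langtrace.trace("CrewAI Task Execution"):
    result = crew.kickoff()

print(result)
```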
## Features and Their Application to CrewAI
- LLM Token and Cost Tracking
  - Monitor the token usage and associated costs for each CrewAI agent interaction.
  - Example:

    ```python
    with langtrace.trace("Agent Interaction"):
        agent_response = agent.execute_task(task)
    ```
- Trace Graph for Execution Steps
  - Visualize the execution flow of your CrewAI tasks, including latency and logs.
  - Useful for identifying bottlenecks in your agent workflows.
- Dataset Curation with Manual Annotation
  - Create datasets from your CrewAI task outputs for future training or evaluation.
  - Example:

    ```python
    langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
    ```
- Prompt Versioning and Management
  - Keep track of different versions of the prompts used by your CrewAI agents.
  - Useful for A/B testing and optimizing agent performance.
- Prompt Playground with Model Comparisons
  - Test and compare different prompts and models for your CrewAI agents before deployment.
- Testing and Evaluations
  - Set up automated tests for your CrewAI agents and tasks.
  - Example:

    ```python
    langtrace.evaluate(agent_output, expected_output, "accuracy")
    ```
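As one way the pieces above could fit into an automated check, here is a hedged sketch that reuses `langtrace.evaluate` exactly as in the example; `crew` and `expected_output` are placeholders from your own project, not Langtrace objects:

```python
# Illustrative test: run the crew inside a trace, then score the
# output with the evaluation call from the example above.
def test_research_crew_output():
    with langtrace.trace("CrewAI Test Run"):
        result = crew.kickoff()
    langtrace.evaluate(str(result), expected_output, "accuracy")
```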
## Monitoring New CrewAI Features
CrewAI has introduced several new features that can be monitored using Langtrace:
- Code Execution: Monitor the performance and output of code executed by agents (agents run code when created with `allow_code_execution=True`):

  ```python
  # `agent` here is created with allow_code_execution=True, so the
  # task's code runs inside the traced span.
  with langtrace.trace("Agent Code Execution"):
      code_output = agent.execute_task(task)
  ```
- Third-party Agent Integration: Track interactions with LlamaIndex, LangChain, and Autogen agents, as in the sketch below.
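The same context-manager pattern shown above can wrap calls into those frameworks. A minimal sketch, assuming you have already built a LangChain runnable; `langchain_agent` is a placeholder for this example, not a CrewAI or Langtrace object:

```python
# Hypothetical wrapper: `langchain_agent` stands in for any LangChain
# runnable (e.g. an AgentExecutor) constructed elsewhere in your code.
with langtrace.trace("LangChain Agent Call"):
    response = langchain_agent.invoke({"input": "Summarize the findings"})
```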