---
title: Weave Integration
description: Learn how to use Weights & Biases (W&B) Weave to track, experiment with, evaluate, and improve your CrewAI applications.
icon: radar
---

# Weave Overview

[Weights & Biases (W&B) Weave](https://weave-docs.wandb.ai/) is a framework for tracking, experimenting with, evaluating, deploying, and improving LLM-based applications.



Weave provides comprehensive support for every stage of your CrewAI application development:

- **Tracing & Monitoring**: Automatically track LLM calls and application logic to debug and analyze production systems
- **Systematic Iteration**: Refine and iterate on prompts, datasets, and models
- **Evaluation**: Use custom or pre-built scorers to systematically assess and enhance agent performance
- **Guardrails**: Protect your agents with pre- and post-safeguards for content moderation and prompt safety

Weave automatically captures traces for your CrewAI applications, enabling you to monitor and analyze your agents' performance, interactions, and execution flow. This helps you build better evaluation datasets and optimize your agent workflows.
## Setup Instructions

<Steps>
<Step title="Install required packages">
```shell
pip install crewai weave
```
</Step>
<Step title="Set up W&B Account">
Sign up for a [Weights & Biases account](https://wandb.ai) if you haven't already. You'll need it to view your traces and metrics.
</Step>
<Step title="Initialize Weave in Your Application">
Add the following code to your application:

```python
import weave

# Initialize Weave with your project name
weave.init(project_name="crewai_demo")
```

After initialization, Weave provides a URL where you can view your traces and metrics.
</Step>
<Step title="Create your Crews/Flows">
```python
from crewai import Agent, Task, Crew, LLM, Process

# Use temperature=0 to minimize variability in outputs
llm = LLM(model="gpt-4o", temperature=0)

# Create agents
researcher = Agent(
    role='Research Analyst',
    goal='Find and analyze the best investment opportunities',
    backstory='Expert in financial analysis and market research',
    llm=llm,
    verbose=True,
    allow_delegation=False,
)

writer = Agent(
    role='Report Writer',
    goal='Write clear and concise investment reports',
    backstory='Experienced in creating detailed financial reports',
    llm=llm,
    verbose=True,
    allow_delegation=False,
)

# Create tasks
research_task = Task(
    description='Conduct deep research on {topic}',
    expected_output='Comprehensive market data including key players, market size, and growth trends.',
    agent=researcher,
)

writing_task = Task(
    description='Write a detailed report based on the research',
    expected_output='A report that is easy to read and understand, using bullet points where applicable.',
    agent=writer,
)

# Create a crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
    process=Process.sequential,
)

# Run the crew
result = crew.kickoff(inputs={"topic": "AI in material science"})
print(result)
```
</Step>
<Step title="View Traces in Weave">
After running your CrewAI application, visit the Weave URL provided during initialization to view:

- LLM calls and their metadata
- Agent interactions and task execution flow
- Performance metrics like latency and token usage
- Any errors or issues that occurred during execution

<Frame caption="Weave Tracing Dashboard">
  <img src="/images/weave-tracing.png" alt="Weave tracing example with CrewAI" />
</Frame>
</Step>
</Steps>
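Weave also traces your own Python functions once they are decorated with `@weave.op` and `weave.init` has been called, so custom pre- or post-processing shows up in the same trace as the crew run. A minimal sketch — the `build_inputs` helper is hypothetical, and the `ImportError` fallback exists only so the snippet runs even without `weave` installed:

```python
try:
    import weave
    weave_op = weave.op  # in a real app, just write @weave.op directly
except ImportError:
    def weave_op(fn):  # no-op stand-in so this sketch runs without weave
        return fn

@weave_op
def build_inputs(topic: str) -> dict:
    # Appears as its own span in the Weave trace once weave.init() has run
    return {"topic": topic.strip()}

inputs = build_inputs("  AI in material science  ")
# e.g. crew.kickoff(inputs=inputs)
```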
## Features

- Weave automatically captures all CrewAI operations: agent interactions and task executions, LLM calls with metadata and token usage, and tool usage and results.
- The integration supports all CrewAI execution methods: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- All [crewAI-tools](https://github.com/crewAIInc/crewAI-tools) are traced automatically.
- The Flow feature is supported through decorator patching (`@start`, `@listen`, `@router`, `@or_`, `@and_`).
- Custom guardrails passed to a CrewAI `Task` can be tracked by decorating them with `@weave.op()`.

For detailed information on what's supported, visit the [Weave CrewAI documentation](https://weave-docs.wandb.ai/guides/integrations/crewai/#getting-started-with-flow).
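As a sketch of guardrail tracking: a CrewAI task guardrail is a callable that receives the task output and returns a `(bool, Any)` tuple, and decorating it with `@weave.op()` makes each guardrail check appear in the trace. The `no_placeholder_text` check below is hypothetical, and the `ImportError` fallback exists only so the snippet runs even without `weave` installed:

```python
from typing import Any, Tuple

try:
    import weave
    weave_op = weave.op  # in a real app, just write @weave.op directly
except ImportError:
    def weave_op(fn):  # no-op stand-in so this sketch runs without weave
        return fn

@weave_op
def no_placeholder_text(task_output: Any) -> Tuple[bool, Any]:
    """Reject task outputs that still contain placeholder text."""
    text = getattr(task_output, "raw", str(task_output))
    if "todo" in text.lower() or "lorem ipsum" in text.lower():
        return (False, "Output still contains placeholder text")
    return (True, text)

# Attach it to a task: Task(..., guardrail=no_placeholder_text)
```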
## Resources

- [📘 Weave Documentation](https://weave-docs.wandb.ai)
- [📊 Example Weave x CrewAI dashboard](https://wandb.ai/ayut/crewai_demo/weave/traces?cols=%7B%22wb_run_id%22%3Afalse%2C%22attributes.weave.client_version%22%3Afalse%2C%22attributes.weave.os_name%22%3Afalse%2C%22attributes.weave.os_release%22%3Afalse%2C%22attributes.weave.os_version%22%3Afalse%2C%22attributes.weave.source%22%3Afalse%2C%22attributes.weave.sys_version%22%3Afalse%7D&peekPath=%2Fayut%2Fcrewai_demo%2Fcalls%2F0195c838-38cb-71a2-8a15-651ecddf9d89)
- [🐦 X](https://x.com/weave_wb)