Mirror of https://github.com/crewAIInc/crewAI.git (synced 2025-12-15 20:08:29 +00:00)
feat: add streaming result support to flows and crews
* feat: add streaming result support to flows and crews
* docs: add streaming execution documentation and integration tests
@@ -33,6 +33,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, all Crew data is sent to an AgentPlanner before each Crew iteration; the resulting plan is added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner during the planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all agents. |
| **Stream** _(optional)_ | `stream` | Enables streaming output so you receive real-time updates during crew execution. When enabled, `kickoff()` returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |

<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if you set it.
@@ -338,6 +339,29 @@ for async_result in async_results:

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.

### Streaming Crew Execution

For real-time visibility into crew execution, you can enable streaming to receive output as it's generated:

```python Code
# Enable streaming
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True,
)

# Iterate over streaming output
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access final result
result = streaming.result
```

Learn more about streaming in the [Streaming Crew Execution](/en/learn/streaming-crew-execution) guide.

### Replaying from a Specific Task

You can now replay from a specific task using our CLI command `replay`.
@@ -897,6 +897,31 @@ flow = ExampleFlow()
result = flow.kickoff()
```

### Streaming Flow Execution

For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:

```python
class StreamingFlow(Flow):
    stream = True  # Enable streaming

    @start()
    def research(self):
        # Your flow implementation
        pass

# Iterate over streaming output
flow = StreamingFlow()
streaming = flow.kickoff()
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access final result
result = streaming.result
```

Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.

### Using the CLI

Starting from version 0.103.0, you can run flows using the `crewai run` command:
320 docs/en/learn/streaming-crew-execution.mdx Normal file
@@ -0,0 +1,320 @@
---
title: Streaming Crew Execution
description: Stream real-time output from your CrewAI crew execution
icon: wave-pulse
mode: "wide"
---

## Introduction

CrewAI can stream real-time output during crew execution, allowing you to display results as they're generated rather than waiting for the entire process to complete. This is particularly useful for building interactive applications, providing user feedback, and monitoring long-running processes.

## How Streaming Works

When streaming is enabled, CrewAI captures LLM responses and tool calls as they happen, packaging them into structured chunks that include context about which task and agent are executing. You can iterate over these chunks in real time and access the final result once execution completes.
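
Conceptually, each chunk is a small structured object. As a rough illustration, the sketch below mirrors the chunk attributes used throughout this guide; the names `StreamChunk` and `ToolCallInfo` are hypothetical stand-ins, since the actual classes ship with CrewAI and may differ:

```python Code
# Illustrative sketch only; not the real CrewAI classes.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ToolCallInfo:  # hypothetical stand-in for the real tool-call payload
    tool_name: str
    arguments: dict[str, Any]

@dataclass
class StreamChunk:  # hypothetical stand-in for the real chunk class
    content: str                        # incremental text produced so far
    task_name: str                      # task currently executing
    task_index: int                     # position of that task in the crew
    agent_role: str                     # agent generating the output
    chunk_type: str                     # TEXT or TOOL_CALL
    tool_call: Optional[ToolCallInfo] = None  # set for TOOL_CALL chunks
```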

## Enabling Streaming

To enable streaming, set the `stream` parameter to `True` when creating your crew:

```python Code
from crewai import Agent, Crew, Task

# Create your agents and tasks
researcher = Agent(
    role="Research Analyst",
    goal="Gather comprehensive information on topics",
    backstory="You are an experienced researcher with excellent analytical skills.",
)

task = Task(
    description="Research the latest developments in AI",
    expected_output="A detailed report on recent AI advancements",
    agent=researcher,
)

# Enable streaming
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True,  # Enable streaming output
)
```

## Synchronous Streaming

When you call `kickoff()` on a crew with streaming enabled, it returns a `CrewStreamingOutput` object that you can iterate over to receive chunks as they arrive:

```python Code
# Start streaming execution
streaming = crew.kickoff(inputs={"topic": "artificial intelligence"})

# Iterate over chunks as they arrive
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result after streaming completes
result = streaming.result
print(f"\n\nFinal output: {result.raw}")
```

### Stream Chunk Information

Each chunk provides rich context about the execution:

```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

for chunk in streaming:
    print(f"Task: {chunk.task_name} (index {chunk.task_index})")
    print(f"Agent: {chunk.agent_role}")
    print(f"Content: {chunk.content}")
    print(f"Type: {chunk.chunk_type}")  # TEXT or TOOL_CALL
    if chunk.tool_call:
        print(f"Tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```

### Accessing Streaming Results

The `CrewStreamingOutput` object provides several useful properties:

```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Iterate and collect chunks
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# After iteration completes
print(f"\nCompleted: {streaming.is_completed}")
print(f"Full text: {streaming.get_full_text()}")
print(f"All chunks: {len(streaming.chunks)}")
print(f"Final result: {streaming.result.raw}")
```

## Asynchronous Streaming

For async applications, use `kickoff_async()` with async iteration:

```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
    )

    # Start async streaming
    streaming = await crew.kickoff_async(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```

## Streaming with kickoff_for_each

When executing a crew for multiple inputs with `kickoff_for_each()`, streaming works differently depending on whether you use sync or async:

### Synchronous kickoff_for_each

With synchronous `kickoff_for_each()`, you get a list of `CrewStreamingOutput` objects, one for each input:

```python Code
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True,
)

inputs_list = [
    {"topic": "AI in healthcare"},
    {"topic": "AI in finance"},
]

# Returns a list of streaming outputs
streaming_outputs = crew.kickoff_for_each(inputs=inputs_list)

# Iterate over each streaming output
for i, streaming in enumerate(streaming_outputs):
    print(f"\n=== Input {i + 1} ===")
    for chunk in streaming:
        print(chunk.content, end="", flush=True)

    result = streaming.result
    print(f"\n\nResult {i + 1}: {result.raw}")
```

### Asynchronous kickoff_for_each_async

With async `kickoff_for_each_async()`, you get a single `CrewStreamingOutput` that yields chunks from all crews as they arrive concurrently:

```python Code
import asyncio

async def stream_multiple_crews():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
    )

    inputs_list = [
        {"topic": "AI in healthcare"},
        {"topic": "AI in finance"},
    ]

    # Returns a single streaming output covering all crews
    streaming = await crew.kickoff_for_each_async(inputs=inputs_list)

    # Chunks from all crews arrive as they're generated
    async for chunk in streaming:
        print(f"[{chunk.task_name}] {chunk.content}", end="", flush=True)

    # Access all results
    results = streaming.results  # List of CrewOutput objects
    for i, result in enumerate(results):
        print(f"\n\nResult {i + 1}: {result.raw}")

asyncio.run(stream_multiple_crews())
```

## Stream Chunk Types

Chunks can be of different types, indicated by the `chunk_type` field:

### TEXT Chunks

Standard text content from LLM responses:

```python Code
from crewai.types.streaming import StreamChunkType

for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TEXT:
        print(chunk.content, end="", flush=True)
```

### TOOL_CALL Chunks

Information about tool calls being made:

```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TOOL_CALL:
        print(f"\nCalling tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```

## Practical Example: Building a UI with Streaming

Here's a complete example showing how to build an interactive application with streaming:

```python Code
import asyncio

from crewai import Agent, Crew, Task
from crewai.types.streaming import StreamChunkType

async def interactive_research():
    # Create crew with streaming enabled
    researcher = Agent(
        role="Research Analyst",
        goal="Provide detailed analysis on any topic",
        backstory="You are an expert researcher with broad knowledge.",
    )

    task = Task(
        description="Research and analyze: {topic}",
        expected_output="A comprehensive analysis with key insights",
        agent=researcher,
    )

    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
        verbose=False,
    )

    # Get user input
    topic = input("Enter a topic to research: ")

    print(f"\n{'='*60}")
    print(f"Researching: {topic}")
    print(f"{'='*60}\n")

    # Start streaming execution
    streaming = await crew.kickoff_async(inputs={"topic": topic})

    current_task = ""
    async for chunk in streaming:
        # Show task transitions
        if chunk.task_name != current_task:
            current_task = chunk.task_name
            print(f"\n[{chunk.agent_role}] Working on: {chunk.task_name}")
            print("-" * 60)

        # Display text chunks
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)

        # Display tool calls
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n🔧 Using tool: {chunk.tool_call.tool_name}")

    # Show final result
    result = streaming.result
    print(f"\n\n{'='*60}")
    print("Analysis Complete!")
    print(f"{'='*60}")
    print(f"\nToken Usage: {result.token_usage}")

asyncio.run(interactive_research())
```

## Use Cases

Streaming is particularly valuable for:

- **Interactive Applications**: Provide real-time feedback to users as agents work
- **Long-Running Tasks**: Show progress for research, analysis, or content generation
- **Debugging and Monitoring**: Observe agent behavior and decision-making in real time
- **User Experience**: Reduce perceived latency by showing incremental results
- **Live Dashboards**: Build monitoring interfaces that display crew execution status

## Important Notes

- Streaming automatically enables LLM streaming for all agents in the crew
- You must iterate through all chunks before accessing the `.result` property (see the example after this list)
- For `kickoff_for_each_async()` with streaming, use `.results` (plural) to get all outputs
- Streaming adds minimal overhead and can actually improve perceived performance
- Each chunk includes full context (task, agent, chunk type) for rich UIs
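
As an example of that first rule, even if you only care about the final output you still have to drain the stream before reading `.result`. A minimal pattern, using only the properties shown above:

```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Drain the stream without printing; .result is only safe to read
# after iteration has completed.
for _ in streaming:
    pass

result = streaming.result
print(result.raw)
```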

## Error Handling

Handle errors during streaming execution:

```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

try:
    for chunk in streaming:
        print(chunk.content, end="", flush=True)

    result = streaming.result
    print(f"\nSuccess: {result.raw}")

except Exception as e:
    print(f"\nError during streaming: {e}")
    if streaming.is_completed:
        print("Streaming completed, but an error occurred")
```

By leveraging streaming, you can build more responsive and interactive applications with CrewAI, providing users with real-time visibility into agent execution and results.

450 docs/en/learn/streaming-flow-execution.mdx Normal file
@@ -0,0 +1,450 @@
---
title: Streaming Flow Execution
description: Stream real-time output from your CrewAI flow execution
icon: wave-pulse
mode: "wide"
---

## Introduction

CrewAI Flows support streaming output, allowing you to receive real-time updates as your flow executes. This feature enables you to build responsive applications that display results incrementally, provide live progress updates, and create better user experiences for long-running workflows.

## How Flow Streaming Works

When streaming is enabled on a Flow, CrewAI captures and streams output from any crews or LLM calls within the flow. The stream delivers structured chunks containing the content, task context, and agent information as execution progresses.
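
Since every chunk carries its task and agent context, one straightforward way to consume a flow stream is to bucket the text by the step that produced it. A small helper sketch, built only on the chunk attributes documented below:

```python Code
from collections import defaultdict

def collect_by_task(streaming):
    """Group streamed text by the flow step (task) that produced it."""
    by_task = defaultdict(list)
    for chunk in streaming:
        by_task[chunk.task_name].append(chunk.content)
    # Join each task's fragments into one string
    return {task: "".join(parts) for task, parts in by_task.items()}
```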

## Enabling Streaming

To enable streaming, set the `stream` attribute to `True` on your Flow class:

```python Code
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task

class ResearchFlow(Flow):
    stream = True  # Enable streaming for the entire flow

    @start()
    def initialize(self):
        return {"topic": "AI trends"}

    @listen(initialize)
    def research_topic(self, data):
        researcher = Agent(
            role="Research Analyst",
            goal="Research topics thoroughly",
            backstory="Expert researcher with analytical skills",
        )

        task = Task(
            description="Research {topic} and provide insights",
            expected_output="Detailed research findings",
            agent=researcher,
        )

        crew = Crew(
            agents=[researcher],
            tasks=[task],
        )

        return crew.kickoff(inputs=data)
```

## Synchronous Streaming

When you call `kickoff()` on a flow with streaming enabled, it returns a `FlowStreamingOutput` object that you can iterate over:

```python Code
flow = ResearchFlow()

# Start streaming execution
streaming = flow.kickoff()

# Iterate over chunks as they arrive
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result after streaming completes
result = streaming.result
print(f"\n\nFinal output: {result}")
```

### Stream Chunk Information

Each chunk provides context about where it originated in the flow:

```python Code
streaming = flow.kickoff()

for chunk in streaming:
    print(f"Agent: {chunk.agent_role}")
    print(f"Task: {chunk.task_name}")
    print(f"Content: {chunk.content}")
    print(f"Type: {chunk.chunk_type}")  # TEXT or TOOL_CALL
```

### Accessing Streaming Properties

The `FlowStreamingOutput` object provides useful properties and methods:

```python Code
streaming = flow.kickoff()

# Iterate and collect chunks
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# After iteration completes
print(f"\nCompleted: {streaming.is_completed}")
print(f"Full text: {streaming.get_full_text()}")
print(f"Total chunks: {len(streaming.chunks)}")
print(f"Final result: {streaming.result}")
```

## Asynchronous Streaming

For async applications, use `kickoff_async()` with async iteration:

```python Code
import asyncio

async def stream_flow():
    flow = ResearchFlow()

    # Start async streaming
    streaming = await flow.kickoff_async()

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result}")

asyncio.run(stream_flow())
```

## Streaming with Multi-Step Flows

Streaming works seamlessly across multiple flow steps, including flows that execute multiple crews:

```python Code
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task

class MultiStepFlow(Flow):
    stream = True

    @start()
    def research_phase(self):
        """First crew: research the topic."""
        researcher = Agent(
            role="Research Analyst",
            goal="Gather comprehensive information",
            backstory="Expert at finding relevant information",
        )

        task = Task(
            description="Research AI developments in healthcare",
            expected_output="Research findings on AI in healthcare",
            agent=researcher,
        )

        crew = Crew(agents=[researcher], tasks=[task])
        result = crew.kickoff()

        self.state["research"] = result.raw
        return result.raw

    @listen(research_phase)
    def analysis_phase(self, research_data):
        """Second crew: analyze the research."""
        analyst = Agent(
            role="Data Analyst",
            goal="Analyze information and extract insights",
            backstory="Expert at identifying patterns and trends",
        )

        task = Task(
            description="Analyze this research: {research}",
            expected_output="Key insights and trends",
            agent=analyst,
        )

        crew = Crew(agents=[analyst], tasks=[task])
        return crew.kickoff(inputs={"research": research_data})


# Stream across both phases
flow = MultiStepFlow()
streaming = flow.kickoff()

current_step = ""
for chunk in streaming:
    # Track which flow step is executing
    if chunk.task_name != current_step:
        current_step = chunk.task_name
        print(f"\n\n=== {chunk.task_name} ===\n")

    print(chunk.content, end="", flush=True)

result = streaming.result
print(f"\n\nFinal analysis: {result}")
```

## Practical Example: Progress Dashboard

Here's a complete example showing how to build a progress dashboard with streaming:

```python Code
import asyncio

from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task
from crewai.types.streaming import StreamChunkType

class ResearchPipeline(Flow):
    stream = True

    @start()
    def gather_data(self):
        researcher = Agent(
            role="Data Gatherer",
            goal="Collect relevant information",
            backstory="Skilled at finding quality sources",
        )

        task = Task(
            description="Gather data on renewable energy trends",
            expected_output="Collection of relevant data points",
            agent=researcher,
        )

        crew = Crew(agents=[researcher], tasks=[task])
        result = crew.kickoff()
        self.state["data"] = result.raw
        return result.raw

    @listen(gather_data)
    def analyze_data(self, data):
        analyst = Agent(
            role="Data Analyst",
            goal="Extract meaningful insights",
            backstory="Expert at data analysis",
        )

        task = Task(
            description="Analyze: {data}",
            expected_output="Key insights and trends",
            agent=analyst,
        )

        crew = Crew(agents=[analyst], tasks=[task])
        return crew.kickoff(inputs={"data": data})


async def run_with_dashboard():
    flow = ResearchPipeline()

    print("="*60)
    print("RESEARCH PIPELINE DASHBOARD")
    print("="*60)

    streaming = await flow.kickoff_async()

    current_agent = ""
    current_task = ""
    chunk_count = 0

    async for chunk in streaming:
        chunk_count += 1

        # Display phase transitions
        if chunk.task_name != current_task:
            current_task = chunk.task_name
            current_agent = chunk.agent_role
            print(f"\n\n📋 Phase: {current_task}")
            print(f"👤 Agent: {current_agent}")
            print("-" * 60)

        # Display text output
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)

        # Display tool usage
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n🔧 Tool: {chunk.tool_call.tool_name}")

    # Show completion summary
    result = streaming.result
    print(f"\n\n{'='*60}")
    print("PIPELINE COMPLETE")
    print(f"{'='*60}")
    print(f"Total chunks: {chunk_count}")
    print(f"Final output length: {len(str(result))} characters")

asyncio.run(run_with_dashboard())
```

## Streaming with State Management

Streaming works naturally with Flow state management:

```python Code
from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Task

class AnalysisState(BaseModel):
    topic: str = ""
    research: str = ""
    insights: str = ""

class StatefulStreamingFlow(Flow[AnalysisState]):
    stream = True

    @start()
    def research(self):
        # State is available during streaming
        topic = self.state.topic
        print(f"Researching: {topic}")

        researcher = Agent(
            role="Researcher",
            goal="Research topics thoroughly",
            backstory="Expert researcher",
        )

        task = Task(
            description=f"Research {topic}",
            expected_output="Research findings",
            agent=researcher,
        )

        crew = Crew(agents=[researcher], tasks=[task])
        result = crew.kickoff()

        self.state.research = result.raw
        return result.raw

    @listen(research)
    def analyze(self, research):
        # Access updated state
        print(f"Analyzing {len(self.state.research)} chars of research")

        analyst = Agent(
            role="Analyst",
            goal="Extract insights",
            backstory="Expert analyst",
        )

        task = Task(
            description="Analyze: {research}",
            expected_output="Key insights",
            agent=analyst,
        )

        crew = Crew(agents=[analyst], tasks=[task])
        result = crew.kickoff(inputs={"research": research})

        self.state.insights = result.raw
        return result.raw


# Run with streaming
flow = StatefulStreamingFlow()
streaming = flow.kickoff(inputs={"topic": "quantum computing"})

for chunk in streaming:
    print(chunk.content, end="", flush=True)

result = streaming.result
print("\n\nFinal state:")
print(f"Topic: {flow.state.topic}")
print(f"Research length: {len(flow.state.research)}")
print(f"Insights length: {len(flow.state.insights)}")
```

## Use Cases

Flow streaming is particularly valuable for:

- **Multi-Stage Workflows**: Show progress across research, analysis, and synthesis phases
- **Complex Pipelines**: Provide visibility into long-running data processing flows
- **Interactive Applications**: Build responsive UIs that display intermediate results
- **Monitoring and Debugging**: Observe flow execution and crew interactions in real time
- **Progress Tracking**: Show users which stage of the workflow is currently executing
- **Live Dashboards**: Create monitoring interfaces for production flows

## Stream Chunk Types

Like crew streaming, flow chunks can be of different types:

### TEXT Chunks

Standard text content from LLM responses:

```python Code
from crewai.types.streaming import StreamChunkType

for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TEXT:
        print(chunk.content, end="", flush=True)
```

### TOOL_CALL Chunks

Information about tool calls within the flow:

```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
        print(f"\nTool: {chunk.tool_call.tool_name}")
        print(f"Args: {chunk.tool_call.arguments}")
```

## Error Handling

Handle errors gracefully during streaming:

```python Code
flow = ResearchFlow()
streaming = flow.kickoff()

try:
    for chunk in streaming:
        print(chunk.content, end="", flush=True)

    result = streaming.result
    print(f"\nSuccess! Result: {result}")

except Exception as e:
    print(f"\nError during flow execution: {e}")
    if streaming.is_completed:
        print("Streaming completed, but the flow encountered an error")
```

## Important Notes

- Streaming automatically enables LLM streaming for any crews used within the flow
- You must iterate through all chunks before accessing the `.result` property
- Streaming works with both structured and unstructured flow state
- Flow streaming captures output from all crews and LLM calls in the flow (see the sketch after this list)
- Each chunk includes context about which agent and task generated it
- Streaming adds minimal overhead to flow execution
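
For instance, because every crew and LLM call in the flow feeds the same stream, you can tally contributions per agent while draining it. A minimal sketch using only the properties described in this guide:

```python Code
from collections import Counter

flow = ResearchFlow()
streaming = flow.kickoff()

# Count how many chunks each agent contributed across the whole flow.
per_agent = Counter()
for chunk in streaming:
    per_agent[chunk.agent_role] += 1

print(per_agent)
print(streaming.get_full_text()[:200])  # preview of the combined output
```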

## Combining with Flow Visualization

You can combine streaming with flow visualization to provide a complete picture:

```python Code
# Generate flow visualization
flow = ResearchFlow()
flow.plot("research_flow")  # Creates an HTML visualization

# Run with streaming
streaming = flow.kickoff()
for chunk in streaming:
    print(chunk.content, end="", flush=True)

result = streaming.result
print("\nFlow complete! View structure at: research_flow.html")
```

By leveraging flow streaming, you can build sophisticated, responsive applications that provide users with real-time visibility into complex multi-stage workflows, making your AI automations more transparent and engaging.