diff --git a/docs/concepts/llms.mdx b/docs/concepts/llms.mdx
index 12061d1a6..8d815246f 100644
--- a/docs/concepts/llms.mdx
+++ b/docs/concepts/llms.mdx
@@ -540,6 +540,46 @@ In this section, you'll find detailed examples that help you select, configure,
+## Streaming Responses
+
+CrewAI supports streaming responses from LLMs, allowing your application to receive and process outputs in real time as they are generated.
+
+
+
+ Enable streaming by setting the `stream` parameter to `True` when initializing your LLM:
+
+ ```python
+ from crewai import LLM
+
+ # Create an LLM with streaming enabled
+ llm = LLM(
+ model="openai/gpt-4o",
+ stream=True # Enable streaming
+ )
+ ```
+
+ When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
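+
+ Even with streaming enabled, a direct call still returns the complete response once the stream finishes, so code that consumes the return value keeps working. A minimal sketch (the prompt text is illustrative):
+
+ ```python
+ from crewai import LLM
+
+ llm = LLM(model="openai/gpt-4o", stream=True)
+
+ # Chunks are emitted as events while the call runs;
+ # the return value is the assembled response text.
+ result = llm.call("Summarize streaming in one sentence.")
+ print(result)
+ ```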
+
+
+
+ CrewAI emits events for each chunk received during streaming:
+
+ ```python
+ from crewai.utilities.events import LLMStreamChunkEvent
+ from crewai.utilities.events.base_event_listener import BaseEventListener
+
+ class MyCustomListener(BaseEventListener):
+     def setup_listeners(self, crewai_event_bus):
+         @crewai_event_bus.on(LLMStreamChunkEvent)
+         def on_llm_stream_chunk(source, event: LLMStreamChunkEvent):
+             # Process each chunk as it arrives
+             print(f"Received chunk: {event.chunk}")
+
+ # Instantiating the listener registers its handlers on the event bus
+ my_listener = MyCustomListener()
+ ```
+
+
+
## Structured LLM Calls
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
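+
+For example, a `response_format` can be set directly from a Pydantic model (a minimal sketch; the `Dog` schema and prompt are illustrative):
+
+```python
+from pydantic import BaseModel
+from crewai import LLM
+
+class Dog(BaseModel):
+    name: str
+    age: int
+    breed: str
+
+llm = LLM(model="openai/gpt-4o", response_format=Dog)
+
+# The output is parsed and validated against the Dog schema
+response = llm.call(
+    "Meet Kona! She is 3 years old and is a black German Shepherd. "
+    "Return the name, age, and breed."
+)
+print(response)
+```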
@@ -669,46 +709,4 @@ Learn how to get the most out of your LLM configuration:
Use larger context models for extensive tasks
- ```python
- # Large context model
- llm = LLM(model="openai/gpt-4o") # 128K tokens
```
-
-
-
-## Getting Help
-
-If you need assistance, these resources are available:
-
-
-
- Comprehensive documentation for LiteLLM integration and troubleshooting common issues.
-
-
- Report bugs, request features, or browse existing issues for solutions.
-
-
- Connect with other CrewAI users, share experiences, and get help from the community.
-
-
-
-
- Best Practices for API Key Security:
- - Use environment variables or secure vaults
- - Never commit keys to version control
- - Rotate keys regularly
- - Use separate keys for development and production
- - Monitor key usage for unusual patterns
-