diff --git a/docs/concepts/flows.mdx b/docs/concepts/flows.mdx
index 1ea9db094..65cbb95c2 100644
--- a/docs/concepts/flows.mdx
+++ b/docs/concepts/flows.mdx
@@ -653,4 +653,17 @@ If you're interested in exploring additional examples of flows, we have a variet
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
-By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
\ No newline at end of file
+By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
+
+Also, check out our YouTube video on how to use flows in CrewAI below!
+
+
\ No newline at end of file
diff --git a/docs/concepts/pipeline.mdx b/docs/concepts/pipeline.mdx
deleted file mode 100644
index 67dae8cc0..000000000
--- a/docs/concepts/pipeline.mdx
+++ /dev/null
@@ -1,277 +0,0 @@
----
-title: Pipelines
-description: Understanding and utilizing pipelines in the crewAI framework for efficient multi-stage task processing.
-icon: timeline-arrow
----
-
-## What is a Pipeline?
-
-A pipeline in CrewAI represents a structured workflow that allows for the sequential or parallel execution of multiple crews. It provides a way to organize complex processes involving multiple stages, where the output of one stage can serve as input for subsequent stages.
-
-## Key Terminology
-
-Understanding the following terms is crucial for working effectively with pipelines:
-
-- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
-- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
-- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
-- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
-
-Example pipeline structure:
-
-```bash Pipeline
-crew1 >> [crew2, crew3] >> crew4
-```
-
-This represents a pipeline with three stages:
-
-1. A sequential stage (crew1)
-2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
-3. Another sequential stage (crew4)
-
-Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
-
-## Pipeline Attributes
-
-| Attribute | Parameters | Description |
-| :--------- | :---------- | :----------------------------------------------------------------------------------------------------------------- |
-| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |
-
-## Creating a Pipeline
-
-When creating a pipeline, you define a series of stages, each consisting of either a single crew or a list of crews for parallel execution.
-The pipeline ensures that each stage is executed in order, with the output of one stage feeding into the next.
-
-### Example: Assembling a Pipeline
-
-```python
-from crewai import Crew, Process, Pipeline
-
-# Define your crews
-research_crew = Crew(
- agents=[researcher],
- tasks=[research_task],
- process=Process.sequential
-)
-
-analysis_crew = Crew(
- agents=[analyst],
- tasks=[analysis_task],
- process=Process.sequential
-)
-
-writing_crew = Crew(
- agents=[writer],
- tasks=[writing_task],
- process=Process.sequential
-)
-
-# Assemble the pipeline
-my_pipeline = Pipeline(
- stages=[research_crew, analysis_crew, writing_crew]
-)
-```
-
-## Pipeline Methods
-
-| Method | Description |
-| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
-| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |
-
-## Pipeline Output
-
-The output of a pipeline in the CrewAI framework is encapsulated within the `PipelineKickoffResult` class.
-This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.
-
-### Pipeline Output Attributes
-
-| Attribute | Parameters | Type | Description |
-| :-------------- | :------------ | :------------------------ | :-------------------------------------------------------------------------------------------------------- |
-| **ID** | `id` | `UUID4` | A unique identifier for the pipeline output. |
-| **Run Results** | `run_results` | `List[PipelineRunResult]` | A list of `PipelineRunResult` objects, each representing the output of a single run through the pipeline. |
-
-### Pipeline Output Methods
-
-| Method/Property | Description |
-| :----------------- | :----------------------------------------------------- |
-| **add_run_result** | Adds a `PipelineRunResult` to the list of run results. |
-
-### Pipeline Run Result Attributes
-
-| Attribute | Parameters | Type | Description |
-| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
-| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
-| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline kickoff. |
-| **Pydantic** | `pydantic` | `Any` | A Pydantic model object representing the structured output of the final stage, if applicable. |
-| **JSON Dict** | `json_dict` | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable. |
-| **Token Usage** | `token_usage` | `Dict[str, UsageMetrics]` | A summary of token usage across all stages of the pipeline kickoff. |
-| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline kickoff. |
-| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |
-
-### Pipeline Run Result Methods and Properties
-
-| Method/Property | Description |
-| :-------------- | :------------------------------------------------------------------------------------------------------- |
-| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
-| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
-| **str** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
-
-### Accessing Pipeline Outputs
-
-Once a pipeline has been executed, its output can be accessed through the `PipelineOutput` object returned by the `process_runs` method.
-The `PipelineOutput` class provides access to individual `PipelineRunResult` objects, each representing a single run through the pipeline.
-
-#### Example
-
-```python
-# Define input data for the pipeline
-input_data = [
- {"initial_query": "Latest advancements in AI"},
- {"initial_query": "Future of robotics"}
-]
-
-# Execute the pipeline
-pipeline_output = await my_pipeline.process_runs(input_data)
-
-# Access the results
-for run_result in pipeline_output.run_results:
- print(f"Run ID: {run_result.id}")
- print(f"Final Raw Output: {run_result.raw}")
- if run_result.json_dict:
- print(f"JSON Output: {json.dumps(run_result.json_dict, indent=2)}")
- if run_result.pydantic:
- print(f"Pydantic Output: {run_result.pydantic}")
- print(f"Token Usage: {run_result.token_usage}")
- print(f"Trace: {run_result.trace}")
- print("Crew Outputs:")
- for crew_output in run_result.crews_outputs:
- print(f" Crew: {crew_output.raw}")
- print("\n")
-```
-
-This example demonstrates how to access and work with the pipeline output, including individual run results and their associated data.
-
-## Using Pipelines
-
-Pipelines are particularly useful for complex workflows that involve multiple stages of processing, analysis, or content generation. They allow you to:
-
-1. **Sequence Operations**: Execute crews in a specific order, ensuring that the output of one crew is available as input to the next.
-2. **Parallel Processing**: Run multiple crews concurrently within a stage for increased efficiency.
-3. **Manage Complex Workflows**: Break down large tasks into smaller, manageable steps executed by specialized crews.
-
-### Example: Running a Pipeline
-
-```python
-# Define input data for the pipeline
-input_data = [{"initial_query": "Latest advancements in AI"}]
-
-# Execute the pipeline, initiating a run for each input
-results = await my_pipeline.process_runs(input_data)
-
-# Access the results
-for result in results:
- print(f"Final Output: {result.raw}")
- print(f"Token Usage: {result.token_usage}")
- print(f"Trace: {result.trace}") # Shows the path of the input through all stages
-```
-
-## Advanced Features
-
-### Parallel Execution within Stages
-
-You can define parallel execution within a stage by providing a list of crews, creating multiple branches:
-
-```python
-parallel_analysis_crew = Crew(agents=[financial_analyst], tasks=[financial_analysis_task])
-market_analysis_crew = Crew(agents=[market_analyst], tasks=[market_analysis_task])
-
-my_pipeline = Pipeline(
- stages=[
- research_crew,
- [parallel_analysis_crew, market_analysis_crew], # Parallel execution (branching)
- writing_crew
- ]
-)
-```
-
-### Routers in Pipelines
-
-Routers are a powerful feature in crewAI pipelines that allow for dynamic decision-making and branching within your workflow.
-They enable you to direct the flow of execution based on specific conditions or criteria, making your pipelines more flexible and adaptive.
-
-#### What is a Router?
-
-A router in crewAI is a special component that can be included as a stage in your pipeline. It evaluates the input data and determines which path the execution should take next.
-This allows for conditional branching in your pipeline, where different crews or sub-pipelines can be executed based on the router's decision.
-
-#### Key Components of a Router
-
-1. **Routes**: A dictionary of named routes, each associated with a condition and a pipeline to execute if the condition is met.
-2. **Default Route**: A fallback pipeline that is executed if none of the defined route conditions are met.
-
-#### Creating a Router
-
-Here's an example of how to create a router:
-
-```python
-from crewai import Router, Route, Pipeline, Crew, Agent, Task
-
-# Define your agents
-classifier = Agent(name="Classifier", role="Email Classifier")
-urgent_handler = Agent(name="Urgent Handler", role="Urgent Email Processor")
-normal_handler = Agent(name="Normal Handler", role="Normal Email Processor")
-
-# Define your tasks
-classify_task = Task(description="Classify the email based on its content and metadata.")
-urgent_task = Task(description="Process and respond to urgent email quickly.")
-normal_task = Task(description="Process and respond to normal email thoroughly.")
-
-# Define your crews
-classification_crew = Crew(agents=[classifier], tasks=[classify_task]) # classify email between high and low urgency 1-10
-urgent_crew = Crew(agents=[urgent_handler], tasks=[urgent_task])
-normal_crew = Crew(agents=[normal_handler], tasks=[normal_task])
-
-# Create pipelines for different urgency levels
-urgent_pipeline = Pipeline(stages=[urgent_crew])
-normal_pipeline = Pipeline(stages=[normal_crew])
-
-# Create a router
-email_router = Router(
- routes={
- "high_urgency": Route(
- condition=lambda x: x.get("urgency_score", 0) > 7,
- pipeline=urgent_pipeline
- ),
- "low_urgency": Route(
- condition=lambda x: x.get("urgency_score", 0) <= 7,
- pipeline=normal_pipeline
- )
- },
- default=Pipeline(stages=[normal_pipeline]) # Default to just normal if no urgency score
-)
-
-# Use the router in a main pipeline
-main_pipeline = Pipeline(stages=[classification_crew, email_router])
-
-inputs = [{"email": "..."}, {"email": "..."}] # List of email data
-
-main_pipeline.kickoff(inputs=inputs)
-```
-
-In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7,
-it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew.
-
-#### Benefits of Using Routers
-
-1. **Dynamic Workflow**: Adapt your pipeline's behavior based on input characteristics or intermediate results.
-2. **Efficiency**: Route urgent tasks to quicker processes, reserving more thorough pipelines for less time-sensitive inputs.
-3. **Flexibility**: Easily modify or extend your pipeline's logic without changing the core structure.
-4. **Scalability**: Handle a wide range of email types and urgency levels with a single pipeline structure.
-
-### Error Handling and Validation
-
-The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:
-
-- Validates that stages contain only Crew instances or lists of Crew instances.
-- Prevents double nesting of stages to maintain a clear structure.
\ No newline at end of file
diff --git a/docs/getting-started/Create-a-New-CrewAI-Pipeline-Template-Method.md b/docs/getting-started/Create-a-New-CrewAI-Pipeline-Template-Method.md
deleted file mode 100644
index 6f30698c2..000000000
--- a/docs/getting-started/Create-a-New-CrewAI-Pipeline-Template-Method.md
+++ /dev/null
@@ -1,163 +0,0 @@
-# Creating a CrewAI Pipeline Project
-
-Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.
-
-To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).
-
-## Prerequisites
-
-Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:
-
-```shell
-$ pip install crewai crewai-tools
-```
-
-The same prerequisites for virtual environments and Code IDEs apply as in regular CrewAI projects.
-
-## Creating a New Pipeline Project
-
-To create a new CrewAI pipeline project, you have two options:
-
-1. For a basic pipeline template:
-
-```shell
-$ crewai create pipeline
-```
-
-2. For a pipeline example that includes a router:
-
-```shell
-$ crewai create pipeline --router
-```
-
-These commands will create a new project folder with the following structure:
-
-```
-/
-├── README.md
-├── uv.lock
-├── pyproject.toml
-├── src/
-│ └── /
-│ ├── __init__.py
-│ ├── main.py
-│ ├── crews/
-│ │ ├── crew1/
-│ │ │ ├── crew1.py
-│ │ │ └── config/
-│ │ │ ├── agents.yaml
-│ │ │ └── tasks.yaml
-│ │ ├── crew2/
-│ │ │ ├── crew2.py
-│ │ │ └── config/
-│ │ │ ├── agents.yaml
-│ │ │ └── tasks.yaml
-│ ├── pipelines/
-│ │ ├── __init__.py
-│ │ ├── pipeline1.py
-│ │ └── pipeline2.py
-│ └── tools/
-│ ├── __init__.py
-│ └── custom_tool.py
-└── tests/
-```
-
-## Customizing Your Pipeline Project
-
-To customize your pipeline project, you can:
-
-1. Modify the crew files in `src//crews/` to define your agents and tasks for each crew.
-2. Modify the pipeline files in `src//pipelines/` to define your pipeline structure.
-3. Modify `src//main.py` to set up and run your pipelines.
-4. Add your environment variables into the `.env` file.
-
-## Example 1: Defining a Two-Stage Sequential Pipeline
-
-Here's an example of how to define a pipeline with sequential stages in `src//pipelines/pipeline.py`:
-
-```python
-from crewai import Pipeline
-from crewai.project import PipelineBase
-from ..crews.research_crew.research_crew import ResearchCrew
-from ..crews.write_x_crew.write_x_crew import WriteXCrew
-
-@PipelineBase
-class SequentialPipeline:
- def __init__(self):
- # Initialize crews
- self.research_crew = ResearchCrew().crew()
- self.write_x_crew = WriteXCrew().crew()
-
- def create_pipeline(self):
- return Pipeline(
- stages=[
- self.research_crew,
- self.write_x_crew
- ]
- )
-
- async def kickoff(self, inputs):
- pipeline = self.create_pipeline()
- results = await pipeline.kickoff(inputs)
- return results
-```
-
-## Example 2: Defining a Two-Stage Pipeline with Parallel Execution
-
-```python
-from crewai import Pipeline
-from crewai.project import PipelineBase
-from ..crews.research_crew.research_crew import ResearchCrew
-from ..crews.write_x_crew.write_x_crew import WriteXCrew
-from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew
-
-@PipelineBase
-class ParallelExecutionPipeline:
- def __init__(self):
- # Initialize crews
- self.research_crew = ResearchCrew().crew()
- self.write_x_crew = WriteXCrew().crew()
- self.write_linkedin_crew = WriteLinkedInCrew().crew()
-
- def create_pipeline(self):
- return Pipeline(
- stages=[
- self.research_crew,
- [self.write_x_crew, self.write_linkedin_crew] # Parallel execution
- ]
- )
-
- async def kickoff(self, inputs):
- pipeline = self.create_pipeline()
- results = await pipeline.kickoff(inputs)
- return results
-```
-
-### Annotations
-
-The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.
-
-## Installing Dependencies
-
-To install the dependencies for your project, use `uv` the install command is optional because when running `crewai run`, it will automatically install the dependencies for you:
-
-```shell
-$ cd
-$ crewai install (optional)
-```
-
-## Running Your Pipeline Project
-
-To run your pipeline project, use the following command:
-
-```shell
-$ crewai run
-```
-
-This will initialize your pipeline and begin task execution as defined in your `main.py` file.
-
-## Deploying Your Pipeline Project
-
-Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
-
-Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
diff --git a/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md b/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md
deleted file mode 100644
index a0482c175..000000000
--- a/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md
+++ /dev/null
@@ -1,236 +0,0 @@
----
-
-title: Starting a New CrewAI Project - Using Template
-
-description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
----
-
-# Starting Your CrewAI Project
-
-Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.
-
-Before we start, there are a couple of things to note:
-
-1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run.
-2. The preferred way of setting up CrewAI is using the `crewai create crew` command. This will create a new project folder and install a skeleton template for you to work on.
-
-## Prerequisites
-
-Before getting started with CrewAI, make sure that you have installed it via pip:
-
-```shell
-$ pip install 'crewai[tools]'
-```
-
-## Creating a New Project
-
-In this example, we will be using `uv` as our virtual environment manager.
-
-To create a new CrewAI project, run the following CLI command:
-
-```shell
-$ crewai create crew
-```
-
-This command will create a new project folder with the following structure:
-
-```shell
-my_project/
-├── .gitignore
-├── pyproject.toml
-├── README.md
-└── src/
- └── my_project/
- ├── __init__.py
- ├── main.py
- ├── crew.py
- ├── tools/
- │ ├── custom_tool.py
- │ └── __init__.py
- └── config/
- ├── agents.yaml
- └── tasks.yaml
-```
-
-You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.
-
-## Customizing Your Project
-
-To customize your project, you can:
-- Modify `src/my_project/config/agents.yaml` to define your agents.
-- Modify `src/my_project/config/tasks.yaml` to define your tasks.
-- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
-- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
-- Add your environment variables into the `.env` file.
-
-### Example: Defining Agents and Tasks
-
-#### agents.yaml
-
-```yaml
-researcher:
- role: >
- Job Candidate Researcher
- goal: >
- Find potential candidates for the job
- backstory: >
- You are adept at finding the right candidates by exploring various online
- resources. Your skill in identifying suitable candidates ensures the best
- match for job positions.
-```
-
-#### tasks.yaml
-
-```yaml
-research_candidates_task:
- description: >
- Conduct thorough research to find potential candidates for the specified job.
- Utilize various online resources and databases to gather a comprehensive list of potential candidates.
- Ensure that the candidates meet the job requirements provided.
-
- Job Requirements:
- {job_requirements}
- expected_output: >
- A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
- agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE crew.py FILE
- context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE crew.py FILE
- - researcher
-```
-
-### Referencing Variables:
-
-Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.
-
-#### Example References
-
-`agents.yaml`
-
-```yaml
-email_summarizer:
- role: >
- Email Summarizer
- goal: >
- Summarize emails into a concise and clear summary
- backstory: >
- You will create a 5 bullet point summary of the report
- llm: mixtal_llm
-```
-
-`tasks.yaml`
-
-```yaml
-email_summarizer_task:
- description: >
- Summarize the email into a 5 bullet point summary
- expected_output: >
- A 5 bullet point summary of the email
- agent: email_summarizer
- context:
- - reporting_task
- - research_task
-```
-
-Use the annotations to properly reference the agent and task in the `crew.py` file.
-
-### Annotations include:
-
-* `@agent`
-* `@task`
-* `@crew`
-* `@tool`
-* `@callback`
-* `@output_json`
-* `@output_pydantic`
-* `@cache_handler`
-
-`crew.py`
-
-```python
-# ...
-@agent
-def email_summarizer(self) -> Agent:
- return Agent(
- config=self.agents_config["email_summarizer"],
- )
-
-@task
-def email_summarizer_task(self) -> Task:
- return Task(
- config=self.tasks_config["email_summarizer_task"],
- )
-# ...
-```
-
-## Installing Dependencies
-
-To install the dependencies for your project, you can use `uv`. Running the following command is optional since when running `crewai run`, it will automatically install the dependencies for you.
-
-```shell
-$ cd my_project
-$ crewai install (optional)
-```
-
-This will install the dependencies specified in the `pyproject.toml` file.
-
-## Interpolating Variables
-
-Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
-
-#### tasks.yaml
-
-```yaml
-research_task:
- description: >
- Conduct a thorough research about the customer and competitors in the context
- of {customer_domain}.
- Make sure you find any interesting and relevant information given the
- current year is 2024.
- expected_output: >
- A complete report on the customer and their customers and competitors,
- including their demographics, preferences, market positioning and audience engagement.
-```
-
-#### main.py
-
-```python
-# main.py
-def run():
- inputs = {
- "customer_domain": "crewai.com"
- }
- MyProjectCrew(inputs).crew().kickoff(inputs=inputs)
-```
-
-## Running Your Project
-
-To run your project, use the following command:
-
-```shell
-$ crewai run
-```
-
-This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
-
-### Replay Tasks from Latest Crew Kickoff
-
-CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
-
-```shell
-$ crewai replay
-```
-
-Replace `` with the ID of the task you want to replay.
-
-### Reset Crew Memory
-
-If you need to reset the memory of your crew before running it again, you can do so by calling the reset memory feature:
-
-```shell
-$ crewai reset-memory
-```
-
-This will clear the crew's memory, allowing for a fresh start.
-
-## Deploying Your Project
-
-The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.
diff --git a/docs/how-to/agentops-observability.mdx b/docs/how-to/agentops-observability.mdx
index 605cfda14..ce50d1fc5 100644
--- a/docs/how-to/agentops-observability.mdx
+++ b/docs/how-to/agentops-observability.mdx
@@ -25,9 +25,9 @@ It provides a dashboard for tracking agent performance, session replays, and cus
Additionally, AgentOps provides session drilldowns for viewing Crew agent interactions, LLM calls, and tool usage in real-time.
This feature is useful for debugging and understanding how agents interact with users as well as other agents.
-
-
-
+
+
+
### Features
@@ -123,4 +123,4 @@ For feature requests or bug reports, please reach out to the AgentOps team on th
• 🖇️ AgentOps Dashboard •
-📙 Documentation
\ No newline at end of file
+📙 Documentation
diff --git a/docs/how-to/langtrace-observability.mdx b/docs/how-to/langtrace-observability.mdx
index fddb2cd65..c8bb15259 100644
--- a/docs/how-to/langtrace-observability.mdx
+++ b/docs/how-to/langtrace-observability.mdx
@@ -10,9 +10,9 @@ Langtrace is an open-source, external tool that helps you set up observability a
While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents.
This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
-
-
-
+
+
+
## Setup Instructions
@@ -69,4 +69,4 @@ This integration allows you to log hyperparameters, monitor performance regressi
6. **Testing and Evaluations**
- - Set up automated tests for your CrewAI agents and tasks.
\ No newline at end of file
+ - Set up automated tests for your CrewAI agents and tasks.
diff --git a/docs/images/crewai-run-poetry-error.png b/docs/images/crewai-run-poetry-error.png
new file mode 100644
index 000000000..07d3a1d90
Binary files /dev/null and b/docs/images/crewai-run-poetry-error.png differ
diff --git a/docs/images/crewai-update.png b/docs/images/crewai-update.png
new file mode 100644
index 000000000..cacd84c7f
Binary files /dev/null and b/docs/images/crewai-update.png differ
diff --git a/docs/installation.mdx b/docs/installation.mdx
index b5d3ef901..ec3da38b7 100644
--- a/docs/installation.mdx
+++ b/docs/installation.mdx
@@ -1,11 +1,9 @@
---
-title: Installation & Setup
+title: Installation
description:
icon: wrench
---
-## Install CrewAI
-
This guide will walk you through the installation process for CrewAI and its dependencies.
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get started! 🚀
@@ -15,17 +13,8 @@ Let's get started! 🚀
-
- First, if you haven't already, install [Poetry](https://python-poetry.org/).
- CrewAI uses Poetry for dependency management and package handling, offering a seamless setup and execution experience.
-
- ```shell Terminal
- pip install poetry
- ```
-
-
- Then, install the main CrewAI package:
+ Install the main CrewAI package with the following command:
```shell Terminal
pip install crewai
@@ -45,15 +34,29 @@ Let's get started! 🚀
- To upgrade CrewAI and CrewAI Tools to the latest version, run the following command:
+ To upgrade CrewAI and CrewAI Tools to the latest version, run the following command:
```shell Terminal
pip install --upgrade crewai crewai-tools
```
+
+ 1. If you're using an older version of CrewAI, you may receive a warning about using `Poetry` for dependency management.
+ 
+
+ 2. In this case, you'll need to run the command below to update your project.
+ This command will migrate your project to use [uv](https://github.com/astral-sh/uv) and update the necessary files.
+ ```shell Terminal
+ crewai update
+ ```
+ 3. After running the command above, you should see the following output:
+ 
+
+ 4. You're all set! You can now proceed to the next step! 🎉
+
- To verify that `crewai` and `crewai-tools` are installed correctly, run the following command:
+ To verify that `crewai` and `crewai-tools` are installed correctly, run the following command:
```shell Terminal
pip freeze | grep crewai
diff --git a/docs/introduction.mdx b/docs/introduction.mdx
index 4e2bdca31..d657c9fb2 100644
--- a/docs/introduction.mdx
+++ b/docs/introduction.mdx
@@ -45,5 +45,5 @@ By fostering collaborative intelligence, CrewAI empowers agents to work together
## Next Step
-- [Install CrewAI](/installation)
+- [Install CrewAI](/installation) to get started with your first agent.
diff --git a/docs/mint.json b/docs/mint.json
index e57397ce4..3ea9f5baf 100644
--- a/docs/mint.json
+++ b/docs/mint.json
@@ -66,18 +66,17 @@
"pages": [
"concepts/agents",
"concepts/tasks",
- "concepts/tools",
- "concepts/processes",
"concepts/crews",
+ "concepts/flows",
+ "concepts/llms",
+ "concepts/processes",
"concepts/collaboration",
- "concepts/pipeline",
"concepts/training",
"concepts/memory",
"concepts/planning",
"concepts/testing",
- "concepts/flows",
"concepts/cli",
- "concepts/llms",
+ "concepts/tools",
"concepts/langchain-tools",
"concepts/llamaindex-tools"
]
diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx
index 2f39066b8..49a690093 100644
--- a/docs/quickstart.mdx
+++ b/docs/quickstart.mdx
@@ -26,6 +26,7 @@ Follow the steps below to get crewing! 🚣♂️
You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
+ Variables interpolated in your `agents.yaml` and `tasks.yaml` files, like `{topic}`, are replaced by the corresponding values defined in `main.py`.
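
A minimal sketch of how this interpolation can be thought of, using plain Python string formatting (the variable names here are illustrative — CrewAI performs the substitution internally when you call `kickoff`):

```python
# Illustrative only: a placeholder like {topic} in the YAML text
# is filled from the inputs passed to kickoff in main.py.
task_description = (
    "Conduct thorough research about {topic}. "
    "Make sure you find interesting and relevant information."
)

inputs = {"topic": "AI LLMs"}

# Substitute every placeholder with its corresponding input value.
rendered = task_description.format(**inputs)
print(rendered)
```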
```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
@@ -124,7 +125,7 @@ Follow the steps below to get crewing! 🚣♂️
```
- For example, you can pass the `topic` input to your crew to customize the research and reporting to medical llms or any other topic.
+ For example, you can pass the `topic` input to your crew to customize the research and reporting.
```python main.py
#!/usr/bin/env python
# src/latest_ai_development/main.py
@@ -233,6 +234,74 @@ Follow the steps below to get crewing! 🚣♂️
+### Note on Consistency in Naming
+
+The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
+For example, you can reference the agent for a specific task from the `tasks.yaml` file.
+This naming consistency allows CrewAI to automatically link your configurations with your code; otherwise, your task won't recognize the reference properly.
+
+#### Example References
+
+
+ Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
+
+
+```yaml agents.yaml
+email_summarizer:
+ role: >
+ Email Summarizer
+ goal: >
+ Summarize emails into a concise and clear summary
+ backstory: >
+ You will create a 5 bullet point summary of the report
+ llm: mixtral_llm
+```
+
+
+ Note how we use the same name for the agent in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
+
+
+```yaml tasks.yaml
+email_summarizer_task:
+ description: >
+ Summarize the email into a 5 bullet point summary
+ expected_output: >
+ A 5 bullet point summary of the email
+ agent: email_summarizer
+ context:
+ - reporting_task
+ - research_task
+```
+
+Use the annotations to properly reference the agent and task in the `crew.py` file.
+
+### Annotations include:
+
+* `@agent`
+* `@task`
+* `@crew`
+* `@tool`
+* `@callback`
+* `@output_json`
+* `@output_pydantic`
+* `@cache_handler`
+
+```python crew.py
+# ...
+@agent
+def email_summarizer(self) -> Agent:
+ return Agent(
+ config=self.agents_config["email_summarizer"],
+ )
+
+@task
+def email_summarizer_task(self) -> Task:
+ return Task(
+ config=self.tasks_config["email_summarizer_task"],
+ )
+# ...
+```
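
The naming rule above can be sketched in plain Python: a decorator registers each method under its own name, so the method name must match the YAML key for the lookup to succeed (this is an illustrative sketch of the matching rule, not CrewAI's internals):

```python
# Illustrative sketch: why the YAML key and the method name must match.
agents_config = {
    "email_summarizer": {"role": "Email Summarizer"},
}

registry = {}

def agent(func):
    # The decorator registers the method under its own name,
    # so that name must also exist as a key in agents.yaml.
    registry[func.__name__] = func
    return func

@agent
def email_summarizer():
    # Config lookup succeeds only because the names line up.
    return agents_config["email_summarizer"]["role"]

print(registry["email_summarizer"]())
```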
+
In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
@@ -241,7 +310,7 @@ You can learn more about the core concepts [here](/concepts).
### Replay Tasks from Latest Crew Kickoff
-CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
+CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
```shell
crewai replay