---
title: Planning
description: Learn how to add planning to CrewAI at the crew level (sequential task planning) and the agent level (Plan-and-Act with PlanningConfig).
icon: ruler-combined
mode: "wide"
---
## Overview
CrewAI provides two complementary planning systems:
- **Crew-level planning** — before each crew iteration, an `AgentPlanner` produces a step-by-step plan for every task and injects it into the task description. Useful when you want the crew to think through the *whole pipeline* before any agent starts working.
- **Agent-level planning (Plan-and-Act)** — a single agent builds an explicit multi-step plan, executes it step by step, and observes/replans as it goes. Configured per-agent via `PlanningConfig`. Useful when you want one agent to tackle a complex task adaptively.
The two are independent and can be combined: a crew can have planning enabled, and individual agents in that crew can also use `planning_config`.
## Crew-Level Planning
When crew-level planning is enabled, all crew information is sent to an `AgentPlanner` before each crew iteration. The planner creates a step-by-step plan for every task, and this plan is added to each task description.
### Using the Planning Feature
Getting started with crew-level planning is straightforward: the only required step is to add `planning=True` to your `Crew`:
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with planning capabilities
my_crew = Crew(
    agents=self.agents,
    tasks=self.tasks,
    process=Process.sequential,
    planning=True,
)
```
</CodeGroup>
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, CrewAI uses `gpt-4o-mini` as the default planning LLM, which requires a valid OpenAI API key. Because your agents might be using different LLMs, keep this in mind if you don't have an OpenAI API key configured or if you see unexpected errors from LLM API calls.
</Warning>
#### Planning LLM
You can also choose the LLM used to plan the tasks by setting `planning_llm`. When running the base case example, you will see output like the one below: the step-by-step logic produced by the `AgentPlanner` and added to the agents' tasks.
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with planning capabilities and a custom planning LLM
my_crew = Crew(
    agents=self.agents,
    tasks=self.tasks,
    process=Process.sequential,
    planning=True,
    planning_llm="gpt-4o",
)

# Run the crew
my_crew.kickoff()
```
```markdown Result
[2024-07-15 16:49:11][INFO]: Planning the crew execution
**Step-by-Step Plan for Task Execution**
**Task Number 1: Conduct a thorough research about AI LLMs**
**Agent:** AI LLMs Senior Data Researcher
**Agent Goal:** Uncover cutting-edge developments in AI LLMs
**Task Expected Output:** A list with 10 bullet points of the most relevant information about AI LLMs
**Task Tools:** None specified
**Agent Tools:** None specified
**Step-by-Step Plan:**
1. **Define Research Scope:**
- Determine the specific areas of AI LLMs to focus on, such as advancements in architecture, use cases, ethical considerations, and performance metrics.
2. **Identify Reliable Sources:**
- List reputable sources for AI research, including academic journals, industry reports, conferences (e.g., NeurIPS, ACL), AI research labs (e.g., OpenAI, Google AI), and online databases (e.g., IEEE Xplore, arXiv).
3. **Collect Data:**
- Search for the latest papers, articles, and reports published in 2024 and early 2025.
- Use keywords like "Large Language Models 2025", "AI LLM advancements", "AI ethics 2025", etc.
4. **Analyze Findings:**
- Read and summarize the key points from each source.
- Highlight new techniques, models, and applications introduced in the past year.
5. **Organize Information:**
- Categorize the information into relevant topics (e.g., new architectures, ethical implications, real-world applications).
- Ensure each bullet point is concise but informative.
6. **Create the List:**
- Compile the 10 most relevant pieces of information into a bullet point list.
- Review the list to ensure clarity and relevance.
**Expected Output:**
A list with 10 bullet points of the most relevant information about AI LLMs.
---
**Task Number 2: Review the context you got and expand each topic into a full section for a report**
**Agent:** AI LLMs Reporting Analyst
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Tools:** None specified
**Agent Tools:** None specified
**Step-by-Step Plan:**
1. **Review the Bullet Points:**
- Carefully read through the list of 10 bullet points provided by the AI LLMs Senior Data Researcher.
2. **Outline the Report:**
- Create an outline with each bullet point as a main section heading.
- Plan sub-sections under each main heading to cover different aspects of the topic.
3. **Research Further Details:**
- For each bullet point, conduct additional research if necessary to gather more detailed information.
- Look for case studies, examples, and statistical data to support each section.
4. **Write Detailed Sections:**
- Expand each bullet point into a comprehensive section.
- Ensure each section includes an introduction, detailed explanation, examples, and a conclusion.
- Use markdown formatting for headings, subheadings, lists, and emphasis.
5. **Review and Edit:**
- Proofread the report for clarity, coherence, and correctness.
- Make sure the report flows logically from one section to the next.
- Format the report according to markdown standards.
6. **Finalize the Report:**
- Ensure the report is complete with all sections expanded and detailed.
- Double-check formatting and make any necessary adjustments.
**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
</CodeGroup>
## Agent-Level Planning (Plan-and-Act)
Agent-level planning gives a single agent an explicit Plan-and-Act loop: it builds a structured multi-step plan up front, executes each step, observes the result, and can replan or refine when reality diverges from the plan. It's configured per-agent through `PlanningConfig`.
### Enabling Agent Planning
Pass a `PlanningConfig` to the agent. The presence of a `PlanningConfig` enables planning — you don't need a separate flag.
<CodeGroup>
```python Defaults
from crewai import Agent, PlanningConfig

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and surface insights",
    backstory="You are an experienced data analyst.",
    planning_config=PlanningConfig(),  # medium effort, defaults
)
```
```python Tuned
from crewai import Agent, PlanningConfig

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and surface insights",
    backstory="You are an experienced data analyst.",
    planning_config=PlanningConfig(
        reasoning_effort="high",
        max_steps=10,
        max_replans=2,
        max_step_iterations=10,
        step_timeout=120,
        llm="gpt-4o-mini",
    ),
)
```
</CodeGroup>
### Reasoning Effort
`reasoning_effort` controls what happens *between steps* — how aggressively the agent observes, replans, and refines as it executes the plan. It is the most important knob for tuning latency vs. adaptiveness.
<ParamField body="low" type="string">
Observe each step for success validation only. Skip the decide/replan/refine pipeline; steps are marked complete and execution continues linearly. **Fastest option** — best when the plan is likely to be correct on the first try and you want minimal overhead per step.
</ParamField>
<ParamField body="medium" type="string" default="default">
Observe each step. On failure, trigger replanning. On success, skip refinement and continue. **Balanced option (default)** — replans only when something goes wrong, so you get adaptiveness without paying for it on the happy path.
</ParamField>
<ParamField body="high" type="string">
Full observation pipeline with `decide_next_action` after every step. Can trigger early goal achievement (finish before all steps run), full replanning, or lightweight step refinement. **Most adaptive, highest latency** — best for open-ended or exploratory tasks where the right path can't be predicted up front.
</ParamField>
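The three effort levels boil down to a simple post-step dispatch. The sketch below is illustrative only (the function name and return values are hypothetical, not CrewAI's internal API), but it mirrors the documented routing:

```python
def next_action(reasoning_effort: str, step_succeeded: bool) -> str:
    """Illustrative post-step routing for each reasoning effort level."""
    if reasoning_effort == "low":
        # Observe for success validation only; always continue linearly.
        return "continue"
    if reasoning_effort == "medium":
        # Replan only when a step fails; skip refinement on success.
        return "continue" if step_succeeded else "replan"
    # "high": route through the full decide_next_action pipeline, which may
    # finish early, replan, refine the next step, or continue.
    return "decide_next_action"

print(next_action("medium", step_succeeded=False))  # replan
```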
### PlanningConfig Fields
<ParamField body="reasoning_effort" type="Literal['low', 'medium', 'high']" default="medium">
Post-step observation/replanning behavior. See above.
</ParamField>
<ParamField body="max_attempts" type="int | None" default="None">
Maximum number of planning refinement attempts during the initial plan creation. If `None`, the agent keeps refining until it indicates readiness.
</ParamField>
<ParamField body="max_steps" type="int" default="20">
Maximum number of steps in the generated plan. Must be `>= 1`. Lower this when you want concise plans; raise it for complex tasks that legitimately need many steps.
</ParamField>
<ParamField body="max_replans" type="int" default="3">
Maximum number of full replanning cycles allowed during execution. Must be `>= 0`. Set to `0` to forbid replanning entirely (the agent will stick to the original plan even if steps fail).
</ParamField>
<ParamField body="max_step_iterations" type="int" default="15">
Maximum LLM iterations per step inside the `StepExecutor` multi-turn loop. Must be `>= 1`. Lower values make individual steps faster but less thorough — useful when each step is a small, well-scoped action.
</ParamField>
<ParamField body="step_timeout" type="int | None" default="None">
Wall-clock seconds for a single step. If exceeded, the step is marked failed and observation decides whether to continue or replan. `None` means no per-step timeout.
</ParamField>
<ParamField body="system_prompt" type="str | None" default="None">
Override the default planning system prompt. Use this to inject domain-specific instructions for how plans should be structured.
</ParamField>
<ParamField body="plan_prompt" type="str | None" default="None">
Override the prompt used to create the initial plan. Supports template variables like `{description}`.
</ParamField>
<ParamField body="refine_prompt" type="str | None" default="None">
Override the prompt used to refine the plan during the `max_attempts` refinement loop.
</ParamField>
<ParamField body="llm" type="str | BaseLLM | None" default="None">
LLM used for planning. Falls back to the agent's own LLM if not provided. Pass either a model string (e.g., `"gpt-4o-mini"`) or a `BaseLLM` instance.
</ParamField>
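The numeric bounds above can be summarized in a small validator. This is an illustrative sketch mirroring the documented constraints and defaults, not the library's actual validation logic:

```python
def validate_planning_config(max_steps: int = 20, max_replans: int = 3,
                             max_step_iterations: int = 15) -> None:
    """Check the documented bounds for PlanningConfig's numeric fields."""
    if max_steps < 1:
        raise ValueError("max_steps must be >= 1")
    if max_replans < 0:
        raise ValueError("max_replans must be >= 0")
    if max_step_iterations < 1:
        raise ValueError("max_step_iterations must be >= 1")

validate_planning_config()               # defaults pass
validate_planning_config(max_replans=0)  # forbidding replans is allowed
```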
### How the Plan-and-Act Loop Works
When `planning_config` is set, the agent executes the task as follows:
1. **Plan** — build an initial multi-step plan, refining up to `max_attempts` times until ready.
2. **Execute step** — run one step through the `StepExecutor` (up to `max_step_iterations` LLM turns, bounded by `step_timeout`).
3. **Observe** — validate whether the step succeeded.
4. **Decide next action** — depending on `reasoning_effort`:
- `low`: continue to the next step.
- `medium`: continue on success; replan on failure.
- `high`: route through `decide_next_action`, which can finish early, replan, refine the next step, or continue.
5. Repeat until the plan completes, the goal is achieved, or `max_replans` is exhausted.
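The control flow above can be approximated in a few lines. This is a simplified sketch of the loop under the default `medium` policy (continue on success, replan on failure); `execute_step` and `replan` are hypothetical stand-ins for CrewAI's internal machinery:

```python
def plan_and_act(steps, execute_step, replan, max_replans=3):
    """Simplified Plan-and-Act loop: continue on success, replan on
    failure, and give up once the replan budget is exhausted."""
    replans_used = 0
    i = 0
    while i < len(steps):
        succeeded = execute_step(steps[i])     # observe the step outcome
        if succeeded:
            i += 1                             # continue to the next step
            continue
        if replans_used >= max_replans:        # replan budget exhausted
            return False
        replans_used += 1
        steps = replan(steps, failed_index=i)  # rebuild the remaining plan
        i = 0
    return True

# Toy run: step "b" fails once, then the replanned version succeeds.
attempts = {"b": 0}

def execute_step(step):
    if step == "b" and attempts["b"] == 0:
        attempts["b"] += 1
        return False
    return True

def replan(steps, failed_index):
    return steps[failed_index:]  # retry from the failed step onward

print(plan_and_act(["a", "b", "c"], execute_step, replan))  # True
```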
### Custom Prompts Example
```python
from crewai import Agent, PlanningConfig

agent = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    planning_config=PlanningConfig(
        reasoning_effort="high",
        max_attempts=3,
        max_steps=10,
        plan_prompt="Create a focused plan for: {description}",
        refine_prompt="Tighten this plan, removing any step that doesn't materially advance the goal.",
        llm="gpt-4o-mini",
    ),
)
```
### Migration from `reasoning=True`
The original agent reasoning API used two fields directly on `Agent`:
- `reasoning: bool = False`
- `max_reasoning_attempts: int | None = None`
Both are **deprecated**. They still work — passing them emits a `DeprecationWarning` and CrewAI auto-migrates them to an equivalent `PlanningConfig` — but new code should use `PlanningConfig` directly.
<Warning>
`Agent(reasoning=True, ...)` and `Agent(max_reasoning_attempts=N, ...)` are deprecated and will be removed in a future release. Migrate to `planning_config=PlanningConfig(...)`.
</Warning>
<CodeGroup>
```python Before (deprecated)
from crewai import Agent

agent = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="Expert data analyst.",
    reasoning=True,
    max_reasoning_attempts=3,
)
```
```python After
from crewai import Agent, PlanningConfig

agent = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="Expert data analyst.",
    planning_config=PlanningConfig(max_attempts=3),
)
```
</CodeGroup>
The mapping is direct:
- `reasoning=True` → presence of `planning_config` enables planning.
- `max_reasoning_attempts=N` → `PlanningConfig(max_attempts=N)`.
Everything else (`reasoning_effort`, `max_steps`, `max_replans`, `max_step_iterations`, `step_timeout`, custom prompts, dedicated planning LLM) is new functionality only available through `PlanningConfig`.
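The mapping can be written down as a tiny helper. This is only an illustrative sketch of the auto-migration described above — the function name and the plain-dict output are hypothetical, and it assumes that passing either deprecated field triggers migration; in real code you would construct a `PlanningConfig` directly:

```python
def migrate_reasoning_args(reasoning=False, max_reasoning_attempts=None):
    """Map the deprecated Agent reasoning fields onto PlanningConfig kwargs."""
    if not reasoning and max_reasoning_attempts is None:
        return None  # no planning requested
    # reasoning=True  -> presence of a planning config enables planning
    # max_reasoning_attempts=N -> PlanningConfig(max_attempts=N)
    return {"max_attempts": max_reasoning_attempts}

print(migrate_reasoning_args(reasoning=True, max_reasoning_attempts=3))
# {'max_attempts': 3}
```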
## Choosing Between Crew-Level and Agent-Level Planning
| Concern | Crew-level (`Crew(planning=True)`) | Agent-level (`PlanningConfig`) |
| --- | --- | --- |
| Scope | Plans every task in the crew up front | Plans one agent's task adaptively |
| When the plan is built | Once per crew iteration, before any task runs | At the start of each agent's task |
| Adapts mid-execution | No — the plan is injected as guidance | Yes — observes, replans, and refines per step |
| Best for | Multi-task pipelines where ordering and hand-offs matter | Open-ended tasks where the right path emerges as the agent works |
| Configuration surface | `planning`, `planning_llm` on `Crew` | `PlanningConfig` on `Agent` |
The two are complementary — you can enable crew-level planning to coordinate the overall pipeline and use `planning_config` on individual agents that need to think adaptively while executing their step.