mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-05-08 02:29:00 +00:00)

docs: add OSS upgrade & crew-to-flow migration guide

docs/upgrade-crewai.mdx (new file, 329 lines)
---
title: "Upgrading & Migrating CrewAI"
description: How to upgrade CrewAI, migrate around breaking changes, and move standalone Crews onto Flows.
icon: arrow-up-right-dots
---

## Overview

CrewAI moves quickly. New releases regularly tighten import paths, change defaults on `Agent`, `Crew`, and `Task`, and introduce new orchestration primitives like `Flow` and checkpointing. This guide collects the practical steps needed to:

- Upgrade the `crewai` CLI and your project dependencies
- Adapt to breaking changes in imports and parameters
- Migrate a standalone `Crew` to a typed `Flow`
- Avoid the gotchas that show up the first time you re-run an upgraded project

If you're starting fresh, see [Installation](/en/installation). If you're coming from another framework, see the [migration guides](/en/guides/migration/migrating-from-langgraph).

---
## Upgrading CrewAI

CrewAI is distributed as a `uv` tool. `pip install -U crewai` works in a pinch, but the supported upgrade path is `uv`.

### 1. Check your current version

```bash
uv tool list
```

You should see a line like:

```
crewai v0.102.0
- crewai
```

### 2. Upgrade the CLI

```bash
uv tool install crewai --upgrade
```

If your shell warns about `PATH` after the upgrade, refresh it:

```bash
uv tool update-shell
```

### 3. Verify the upgrade

```bash
uv tool list
crewai --version
```

### 4. Update project dependencies

The CLI upgrade only updates the global tool. Each project pins its own `crewai` dependency in `pyproject.toml`. Inside the project directory, run:

```bash
crewai install
```

This re-syncs your project's lockfile and virtual environment against the new version. Run your tests or `crewai run` afterwards to surface any breakage early.

<Note>
CrewAI requires `Python >=3.10, <3.14`. If `uv` was installed against an older interpreter, recreate the project venv with a supported Python before running `crewai install`.
</Note>

---
## Breaking Changes & Migration Notes

Most upgrades only require small adjustments. The areas below are the ones that break silently or with confusing tracebacks.

### Import paths: tools and `BaseTool`

The canonical import location for tools is `crewai.tools`. Older paths still surface in tutorials but should be updated.

```python
# Before
from crewai_tools import BaseTool
from crewai.agents.tools import tool

# After
from crewai.tools import BaseTool, tool
```

The `@tool` decorator and `BaseTool` subclass both live in `crewai.tools`. `AgentFinish` and other internal-agent symbols are no longer part of the public surface — if you were importing them, switch to event listeners or `Task` callbacks instead.
### `Agent` parameter changes

```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find authoritative sources on {topic}",
    backstory="You are a careful, source-driven researcher.",
    llm="gpt-4o-mini",       # string model name OR an LLM object
    verbose=True,            # bool, not an int level
    max_iter=15,             # default has changed across versions — set explicitly
    allow_delegation=False,
)
```

- `llm` accepts either a string model name (resolved via the configured provider) or an `LLM` object for fine-grained control.
- `verbose` is a plain `bool`. Passing an integer no longer toggles log levels.
- `max_iter` defaults have shifted between releases. If your agent silently stops looping after the first tool call, set `max_iter` explicitly.
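When a string model name isn't enough, the `LLM` object path looks roughly like this. A sketch only — the parameter values are illustrative, and available `LLM` options depend on your CrewAI version and provider:

```python
from crewai import Agent, LLM

# Construct an LLM object for fine-grained control over generation settings.
llm = LLM(
    model="gpt-4o-mini",
    temperature=0.2,     # lower temperature for more deterministic research
    max_tokens=2048,
)

agent = Agent(
    role="Researcher",
    goal="Find authoritative sources on {topic}",
    backstory="You are a careful, source-driven researcher.",
    llm=llm,             # object instead of a string model name
)
```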
### `Crew` parameters

```python
from crewai import Crew, Process

crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,  # or Process.hierarchical
    memory=True,
    cache=True,
    embedder={"provider": "openai", "config": {"model": "text-embedding-3-small"}},
)
```

- `process=Process.hierarchical` requires either `manager_llm=` or `manager_agent=`. Without one, kickoff raises at validation time.
- `memory=True` with a non-default embedding provider needs an `embedder` dict — see [Memory & embedder config](#memory-embedder-config) below.
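A hierarchical crew with an explicit manager might look like the sketch below. The `manager` agent's role and backstory are illustrative, and `researcher`/`writer` are assumed to be defined elsewhere:

```python
from crewai import Agent, Crew, Process

# A dedicated manager agent that delegates work to the others.
manager = Agent(
    role="Project Manager",
    goal="Coordinate the team and validate results",
    backstory="An experienced delegator who checks work before accepting it.",
    allow_delegation=True,
)

crew = Crew(
    agents=[researcher, writer],   # worker agents only; the manager is passed separately
    tasks=[...],
    process=Process.hierarchical,
    manager_agent=manager,         # or manager_llm="gpt-4o" instead
)
```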
### `Task` structured output

Use `output_pydantic`, `output_json`, or `output_file` to coerce a task's result into a typed shape:

```python
from pydantic import BaseModel
from crewai import Task

class Article(BaseModel):
    title: str
    body: str

write = Task(
    description="Write an article about {topic}",
    expected_output="A short article with a title and body",
    agent=writer,
    output_pydantic=Article,          # the class, NOT an instance
    output_file="output/article.md",
)
```

`output_pydantic` takes the **class** itself. Passing `Article(title="", body="")` is a common mistake and fails with a confusing validation error.
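After kickoff, the typed result is exposed on the output object. In current releases this is a `.pydantic` attribute alongside the raw text — verify against your installed version:

```python
result = crew.kickoff(inputs={"topic": "vector databases"})

article = result.pydantic   # an Article instance, validated by Pydantic
print(article.title)

# The raw string output is still available if you need it:
print(result.raw)
```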
### Memory & embedder config

If `memory=True` and you're not using the default OpenAI embeddings, you must pass an `embedder`:

```python
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
)
```

Set the relevant provider credentials (`OPENAI_API_KEY`, `OLLAMA_HOST`, etc.) in your `.env` file. Memory storage paths are project-local by default — delete the project's memory directory if you change embedders, since dimensions don't mix.
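Rather than deleting directories by hand, the CLI can clear memory stores for you. A sketch — flag names have varied between versions, so check `crewai reset-memories --help` on yours:

```bash
# Clear all stored memories after switching embedding providers
crewai reset-memories --all

# Or clear only specific stores, e.g. long-term and entity memory
crewai reset-memories --long --entities
```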
---

## Migrating a Crew to a Flow

`Crew` is the right primitive when you have a single team of agents executing one workflow. Once you need branching, multiple crews, or persistent state across runs, reach for `Flow`.

### When to use Flows vs standalone Crews

| Situation | Use |
| --- | --- |
| Single team, single linear/hierarchical workflow | `Crew` |
| Conditional branches, retries, routing on results | `Flow` |
| Multiple specialized crews chained together | `Flow` |
| State that must persist between steps or runs | `Flow` (with checkpointing) |
| You want typed, IDE-friendly state | `Flow[MyState]` with a Pydantic model |

If you need even one of branching, multi-crew orchestration, or persistent state, start with a `Flow`: the boilerplate is small, and you won't have to rewrite later.
### Step-by-step migration

**Before — standalone crew:**

```python
from crewai import Crew

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff(inputs={"topic": "vector databases"})
print(result)
```

**After — crew inside a typed Flow:**

```python
from crewai.flow.flow import Flow, start, listen
from pydantic import BaseModel

class MyState(BaseModel):
    input_data: str = ""
    result: str = ""

class MyFlow(Flow[MyState]):
    @start()
    def run_crew(self):
        result = MyCrew().crew().kickoff(inputs={"topic": self.state.input_data})
        self.state.result = str(result)
        return self.state.result

flow = MyFlow()
flow.kickoff(inputs={"input_data": "vector databases"})
```

What changed:

1. The crew is constructed inside a method, not at module load.
2. Inputs flow through `self.state` instead of being threaded as kwargs.
3. The entry point is marked with `@start()`. Subsequent steps use `@listen(run_crew)` to chain.
### Structured state setup

Prefer typed state (`Flow[MyState]`) over the untyped dict variant. You get autocompletion, validation at the boundary, and serializable state for checkpointing:

```python
from pydantic import BaseModel, Field

class ResearchState(BaseModel):
    topic: str = ""
    sources: list[str] = Field(default_factory=list)
    draft: str = ""
    final: str = ""
```

Untyped state (`Flow()` with no generic) still works, but you lose static checks and checkpointing fidelity.
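The validation you get at that boundary is plain Pydantic. A quick illustration using only `pydantic`, no CrewAI imports needed:

```python
from pydantic import BaseModel, Field, ValidationError

class ResearchState(BaseModel):
    topic: str = ""
    sources: list[str] = Field(default_factory=list)
    draft: str = ""

# Valid inputs are coerced into a typed object with working defaults...
state = ResearchState(topic="vector databases", sources=["a", "b"])
assert state.draft == ""

# ...and bad inputs fail loudly instead of silently corrupting flow state.
try:
    ResearchState(sources="not-a-list")
except ValidationError as e:
    print("rejected:", len(e.errors()), "error(s)")
```

With a bare dict, the `sources="not-a-list"` mistake would surface much later, deep inside a flow step.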
### Multi-crew Flow pattern

Chaining two crews — research, then writing — is the canonical reason to adopt Flows:

```python
from crewai.flow.flow import Flow, start, listen, router
from pydantic import BaseModel

class PipelineState(BaseModel):
    topic: str = ""
    research: str = ""
    article: str = ""

class ContentPipeline(Flow[PipelineState]):
    @start()
    def research(self):
        out = ResearchCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.research = str(out)
        return self.state.research

    @router(research)
    def gate(self):
        return "write" if len(self.state.research) > 200 else "abort"

    @listen("write")
    def write(self):
        out = WritingCrew().crew().kickoff(
            inputs={"topic": self.state.topic, "notes": self.state.research}
        )
        self.state.article = str(out)
        return self.state.article

    @listen("abort")
    def bail(self):
        self.state.article = "Insufficient research."
        return self.state.article

ContentPipeline().kickoff(inputs={"topic": "vector databases"})
```

`@start()`, `@listen()`, and `@router()` are the three decorators you'll use 95% of the time. See [Flows](/en/concepts/flows) for the full reference.
---

## Common Gotchas

1. **`pip install` instead of `uv tool install`.** The CLI is a `uv` tool. Mixing pip-installed and uv-installed `crewai` leads to two binaries on `PATH` and confusing version skew. Pick one — the supported one is `uv`.
2. **Forgetting `crewai install` after upgrading.** The CLI upgrade is global; your project venv still pins the old version until you re-sync.
3. **Passing a Pydantic instance to `output_pydantic`.** It expects the class. `output_pydantic=Article`, not `output_pydantic=Article(...)`.
4. **Hierarchical process with no manager.** `process=Process.hierarchical` requires `manager_llm=` or `manager_agent=`.
5. **Memory enabled with the wrong embedder.** Switching embedders without clearing the on-disk memory directory causes dimension mismatches. Delete the project's memory store after changing providers.
6. **Dict state when you wanted typed state.** `Flow()` with no generic gives you a dict. For type checking and clean checkpointing, use `Flow[MyState]` with a `BaseModel`.
7. **Stale tool imports.** `from crewai_tools import BaseTool` works in some versions but is not the canonical path. Standardize on `from crewai.tools import BaseTool, tool`.
8. **Python version drift.** CrewAI requires `>=3.10, <3.14`. `uv` will happily build a venv against 3.14+ if it's the default; pin the Python version in `pyproject.toml`.
9. **`verbose=2` and similar integer flags.** `verbose` is a `bool`. Use event listeners for finer-grained logging.
10. **Calling `crew.kickoff()` from inside a Flow without wrapping in `inputs={}`.** Flows pass state, not kwargs. The crew still expects `inputs={...}`.
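For gotcha 8, pinning the interpreter at the project level guards against drift. A sketch using `uv` (check `uv --help` on your version):

```bash
# Pin a supported interpreter for this project (writes .python-version)
uv python pin 3.12

# Then rebuild the environment and re-sync dependencies
crewai install
```

Alternatively, set `requires-python = ">=3.10,<3.14"` under `[project]` in `pyproject.toml` so the constraint travels with the repo.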
---

## Checkpointing

Checkpointing is a newer addition that persists agent, crew, and flow state between runs. It lets long-running workflows resume after a crash, a manual stop, or a deploy.

```python
crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=True,
)
```

The same flag is supported on `Flow` and `Agent`. State is written to the project's local store and replayed on the next `kickoff()` with the same identifier.

<Note>
Checkpointing is in early release. APIs around resume semantics, storage backends, and identifiers may still shift between minor versions — pin your `crewai` version if you depend on it in production.
</Note>

See [Checkpointing](/en/concepts/checkpointing) for the full feature reference.
---

## Getting Help

- **Changelog** — every breaking change is noted in the [release notes](/en/changelog).
- **GitHub Issues** — open one at [github.com/crewAIInc/crewAI/issues](https://github.com/crewAIInc/crewAI/issues) with a minimal repro and your `crewai --version` output.
- **Discord** — the CrewAI community Discord is the fastest path to debugging help: [community.crewai.com](https://community.crewai.com).
- **Migration guides** — if you're moving from another framework, start at [Migrating from LangGraph](/en/guides/migration/migrating-from-langgraph).