---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---

## Release Highlights
**Core Improvements & Fixes**
- Upgraded **crewai-tools** to the latest version
- Upgraded **LiteLLM** to the latest version
- Fixed **Mem0 OSS**
## Release Highlights
**New Features & Enhancements**
- Added `result_as_answer` parameter support in the `@tool` decorator
- Introduced support for new language models: GPT-4.1, Gemini-2.0, and Gemini-2.5 Pro
- Enhanced knowledge management capabilities
- Added Hugging Face provider option in the CLI
- Improved compatibility and CI support for Python 3.10+

**Core Improvements & Fixes**
- Fixed issues with incorrect template parameters and missing inputs
- Improved asynchronous flow handling with coroutine condition checks
- Enhanced memory management with isolated configuration and correct memory object copying
- Fixed initialization of lite agents with correct references
- Addressed Python type hint issues and removed redundant imports
- Updated event placement for improved tool usage tracking
- Raised explicit exceptions when flows fail
- Removed unused code and redundant comments from various modules
- Updated GitHub App token action to v2

**Documentation & Guides**
- Enhanced documentation structure, including enterprise deployment instructions
- Automatically create output folders for documentation generation
- Fixed a broken link in the WeaviateVectorSearchTool documentation
- Fixed guardrail documentation usage and import paths for JSON search tools
- Updated documentation for CodeInterpreterTool
- Improved SEO, contextual navigation, and error handling for documentation pages
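The `result_as_answer` flag marks a tool whose raw output should be returned as the agent's final answer rather than fed back into the LLM loop. The real decorator lives in `crewai.tools`; the snippet below is a stdlib-only sketch of the mechanism, and the `city_lookup` tool is a made-up example:

```python
import functools

def tool(name, result_as_answer=False):
    # Minimal stand-in for a tool decorator: wraps the function and
    # records whether its raw output should be used verbatim as the
    # agent's final answer.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.name = name
        wrapper.result_as_answer = result_as_answer
        return wrapper
    return decorator

@tool("city_lookup", result_as_answer=True)
def city_lookup(city: str) -> str:
    return f"Facts about {city}"

print(city_lookup.result_as_answer)  # True
```

With the flag set, an agent framework can short-circuit its reasoning loop as soon as this tool returns.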
## Release Highlights
**New Features & Enhancements**
- Agents as an atomic unit (`Agent(...).kickoff()`)
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm)
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability)
- Enhanced YAML extraction
- Multimodal agent validation
- Added secure fingerprints for agents and crews

**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility
- Added wildcard support to `emit()`
- Added support for additional router calls and context window adjustments
- Fixed typing issues, validation, and import statements
- Improved method performance
- Enhanced agent task handling, event emissions, and memory management
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs

**Documentation & Guides**
- Improved documentation structure, theme, and organization
- Added guides for local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix
- Updated tool configuration examples, prompts, and observability docs
- Added a guide on using singular agents within Flows
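Wildcard support in `emit()` means a listener registered for a pattern such as `"agent.*"` receives every matching event. The sketch below is not CrewAI's event bus; it is a stdlib-only illustration of the pattern, with `EventBus`, `on`, and the event names all invented for the example:

```python
import fnmatch
from collections import defaultdict

class EventBus:
    # Tiny event-bus sketch: handlers register under a glob pattern
    # and emit() delivers each event to every matching handler.
    def __init__(self):
        self._listeners = defaultdict(list)

    def on(self, pattern, handler):
        self._listeners[pattern].append(handler)

    def emit(self, event, payload=None):
        for pattern, handlers in self._listeners.items():
            if fnmatch.fnmatch(event, pattern):
                for handler in handlers:
                    handler(event, payload)

bus = EventBus()
seen = []
bus.on("agent.*", lambda event, payload: seen.append(event))
bus.emit("agent.started")
bus.emit("task.completed")  # not matched by "agent.*"
print(seen)  # ['agent.started']
```

A single wildcard subscription like this is convenient for logging or observability hooks that should see a whole family of events.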
## Release Highlights
**New Features & Enhancements**
- Converted tabs to spaces in the `crew.py` template
- Enhanced LLM streaming response handling and event system
- Included `model_name`
- Enhanced event listener with rich visualization and improved logging
- Added fingerprints

**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property

**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading CrewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
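A fingerprint gives an agent or crew a stable identity derived from its configuration. How CrewAI computes its fingerprints is not shown here; the function below is an illustrative stdlib sketch (the `fingerprint` name and its fields are assumptions) of deriving a deterministic, UUID-shaped identifier from an agent's defining fields:

```python
import hashlib
import uuid

def fingerprint(role: str, goal: str) -> str:
    # Hash the defining fields so that identical configurations map to
    # the same identifier and any change yields a different one.
    digest = hashlib.sha256(f"{role}\x00{goal}".encode()).hexdigest()
    # Reuse the first 128 bits as a UUID-formatted string.
    return str(uuid.UUID(digest[:32]))

a = fingerprint("researcher", "find sources")
b = fingerprint("researcher", "find sources")
print(a == b)  # True: same configuration, same fingerprint
```

Deterministic identifiers like this make it possible to track the same agent across runs, logs, and telemetry without storing extra state.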
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling

**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls

**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings

**Core Improvements & Fixes**
- Enhanced LLM support: improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & agent stability: fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & storage fixes: fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class
- Training & execution reliability: fixed broken training and interpolation issues with dict and list input types

**New Features & Enhancements**
- Advanced knowledge management: improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded logging & observability: added JSON format support for logging and integrated MLflow tracing documentation
- Data handling improvements: updated excel_knowledge_source.py to process multi-tab files
- General performance & codebase clean-up: streamlined enterprise code alignment and resolved linting issues
- Added new tool: `QdrantVectorSearchTool`

**Documentation & Guides**
- Updated AI & memory docs: improved Bedrock, Google AI, and long-term memory documentation
- Task & workflow clarity: added a "Human Input" row to Task Attributes, a Langfuse guide, and FileWriterTool documentation
- Fixed various typos and formatting issues

**Features**
- Add Composio docs
- Add SageMaker as an LLM provider

**Fixes**
- Overall LLM connection issues
- Using safe accessors on training
- Add version check to crew_chat.py

**Documentation**
- New docs for crewai chat
- Improve formatting and clarity in CLI and Composio Tool docs

**Features**
- Conversation crew v1
- Add unique ID to flow states
- Add `@persist` decorator with FlowPersistence interface

**Integrations**
- Add SambaNova integration
- Add NVIDIA NIM provider in CLI
- Introducing VoyageAI

**Fixes**
- Fix API key behavior and entity handling in Mem0 integration
- Fixed core invoke loop logic and relevant tests
- Make tool inputs actual objects and not strings
- Add important missing parts to creating tools
- Drop LiteLLM version to prevent Windows issue
- Handle `None` inputs before kickoff
- Fixed typos, nested Pydantic model issue, and Docling issues

**New Features**
- Adding multimodal abilities to Crew
- Programmatic guardrails
- HITL multiple rounds
- Gemini 2.0 support
- CrewAI Flows improvements
- Add workflow permissions
- Add support for Langfuse with LiteLLM
- Portkey integration with CrewAI
- Add `interpolate_only` method and improve error handling
- Docling support
- Weaviate support

**Fixes**
- `output_file` not respecting system path
- Disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for `output_file` in Task
- Handle coworker role name case/whitespace properly
- Add tiktoken as an explicit dependency and document the Rust requirement
- Include agent knowledge in planning process
- Change storage initialization to None for KnowledgeStorage
- Fix optional storage checks
- Include event emitter in flows
- Docstring, error handling, and type hint improvements
- Suppressed user warnings from LiteLLM Pydantic issues

**Changes**
- Remove all references to pipeline and pipeline router
- Add NVIDIA NIM as provider in Custom LLM
- Add knowledge demo + improve knowledge docs
- Add HITL multiple rounds of follow-up
- New docs about YAML crew with decorators
- Simplify template crew

**Features**
- Added knowledge at the agent level
- Removed LangChain dependency
- Improve typed task outputs
- Log in to Tool Repository on `crewai login`

**Fixes**
- Fixes issues with `result_as_answer` not properly exiting the LLM loop
- Fix missing key name when running with the Ollama provider
- Fix spelling issues

**Documentation**
- Update README for running mypy
- Add knowledge to mint.json
- Update GitHub Actions
- Update Agents docs to include two approaches for creating an agent
- Improvements to LLM configuration and usage

**New Features**
- New `before_kickoff` and `after_kickoff` crew callbacks
- Support to pre-seed agents with knowledge
- Add support for retrieving user preferences and memories using Mem0

**Fixes**
- Fix async execution
- Upgrade Chroma and adjust embedder function generator
- Update CLI Watson supported models + docs
- Reduce level for Bandit
- Fixing all tests

**Documentation**
- Update docs

**Fixes**
- Fixing tokens callback replacement bug
- Fixing step callback issue
- Add cached prompt tokens info on usage metrics
- Fix `crew_train_success` test
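The `@persist` decorator pairs a flow with a `FlowPersistence` backend so state survives across runs. CrewAI's actual interface is not reproduced here; the sketch below invents an `InMemoryPersistence` backend and the `save_state`/`load_state` method names to illustrate the shape of the idea:

```python
class FlowPersistence:
    # Sketch of a persistence interface: concrete backends decide
    # where flow state lives (memory, disk, a database, ...).
    def save_state(self, flow_id, state):
        raise NotImplementedError

    def load_state(self, flow_id):
        raise NotImplementedError

class InMemoryPersistence(FlowPersistence):
    def __init__(self):
        self._store = {}

    def save_state(self, flow_id, state):
        self._store[flow_id] = dict(state)

    def load_state(self, flow_id):
        return self._store.get(flow_id, {})

def persist(backend):
    # Decorator sketch: after the wrapped step runs, the flow's current
    # state is handed to the backend so a restarted flow can resume.
    def decorator(fn):
        def wrapper(flow_id, state):
            result = fn(flow_id, state)
            backend.save_state(flow_id, state)
            return result
        return wrapper
    return decorator

backend = InMemoryPersistence()

@persist(backend)
def collect_step(flow_id, state):
    state["collected"] = True
    return "done"

collect_step("flow-1", {})
print(backend.load_state("flow-1"))  # {'collected': True}
```

Separating the interface from the backend lets the same flow code run against in-memory storage in tests and durable storage in production.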