---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---

**Features**
- Converted tabs to spaces in the `crew.py` template
- Enhanced LLM streaming response handling and the event system
- Included `model_name`
- Enhanced the event listener with rich visualization and improved logging
- Added fingerprints

**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in the documentation
- Fixed a type-check error in the fingerprint property

**Documentation Updates**
- Improved tool documentation
- Updated the installation guide for the `uv` tool package
- Added instructions for upgrading CrewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`

**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool-calling properties, and telemetry decoupling

**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with an optional crew embedder
- Introduced an event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and `ChatOllama` from `langchain_ollama`
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls

**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added a `QdrantVectorSearchTool` guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings

**Core Improvements & Fixes**
- Enhanced LLM support: improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew and agent stability: fixed issues with cloning agents/crews that use knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory and storage fixes: fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset-memories function to the Crew class
- Training and execution reliability: fixed broken training and interpolation issues with dict and list input types

**New Features & Enhancements**
- Advanced knowledge management: improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded logging and observability: added JSON format support for logging and integrated MLflow tracing documentation
- Data handling improvements: updated `excel_knowledge_source.py` to process multi-tab files
- General performance and codebase clean-up: streamlined enterprise code alignment and resolved linting issues
- Added a new tool: `QdrantVectorSearchTool`

**Documentation & Guides**
- Updated AI and memory docs: improved Bedrock, Google AI, and long-term memory documentation
- Task and workflow clarity: added a "Human Input" row to Task Attributes, a Langfuse guide, and `FileWriterTool` documentation
- Fixed various typos and formatting issues

**Features**
- Added Composio docs
- Added SageMaker as an LLM provider

**Fixes**
- Fixed overall LLM connection issues
- Used safe accessors on training
- Added a version check to `crew_chat.py`

**Documentation**
- New docs for `crewai chat`
- Improved formatting and clarity in the CLI and Composio Tool docs

**Features**
- Conversation crew v1
- Added a unique ID to flow states
- Added the `@persist` decorator with the `FlowPersistence` interface (see the sketch below)
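The two flow-state items above work together: persisted state is keyed by the flow's unique ID. The snippet below is only a minimal sketch of how they might be combined; the `crewai.flow.persistence` import path, the class-level `@persist()` usage with a default local SQLite backend, and the `self.state.id` attribute are assumptions for illustration and may differ between releases.

```python
from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist  # assumed import path


class CounterState(BaseModel):
    # Structured flow state; CrewAI attaches a unique id to every state.
    counter: int = 0


@persist()  # class-level persistence; assumed to default to a local SQLite backend
class CounterFlow(Flow[CounterState]):
    @start()
    def bump(self):
        self.state.counter += 1

    @listen(bump)
    def report(self):
        # The auto-generated state id is assumed to be exposed as `self.state.id`.
        print(f"state {self.state.id}: counter={self.state.counter}")


CounterFlow().kickoff()
```

In this sketch, a later run could resume from the stored counter rather than starting over, assuming the persisted state ID is supplied on restart.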
**Integrations**
- Added SambaNova integration
- Added NVIDIA NIM as a provider in the CLI
- Introduced VoyageAI

**Fixes**
- Fixed API key behavior and entity handling in the Mem0 integration
- Fixed core invoke loop logic and relevant tests
- Made tool inputs actual objects instead of strings
- Added important missing parts to the tool-creation docs
- Dropped the litellm version to prevent a Windows issue
- Handled `None` inputs before kickoff
- Fixed typos, a nested Pydantic model issue, and Docling issues

**New Features**
- Added multimodal abilities to Crew
- Programmatic guardrails
- Human-in-the-loop (HITL) support for multiple rounds
- Gemini 2.0 support
- CrewAI Flows improvements
- Added workflow permissions
- Added support for Langfuse with litellm
- Portkey integration with CrewAI
- Added the `interpolate_only` method and improved error handling
- Docling support
- Weaviate support

**Fixes**
- Fixed `output_file` not respecting the system path
- Fixed a disk I/O error when resetting short-term memory
- `CrewJSONEncoder` now accepts enums
- Fixed the Python max version
- Fixed interpolation for `output_file` in Task
- Handled coworker role name case and whitespace properly
- Added tiktoken as an explicit dependency and documented the Rust requirement
- Included agent knowledge in the planning process
- Changed storage initialization to `None` for `KnowledgeStorage`
- Fixed optional storage checks
- Included the event emitter in flows
- Docstring, error handling, and type hint improvements
- Suppressed UserWarnings stemming from litellm Pydantic issues

**Changes**
- Removed all references to pipeline and pipeline router
- Added NVIDIA NIM as a provider in Custom LLM
- Added a knowledge demo and improved knowledge docs
- Added HITL multiple rounds of follow-up
- New docs about YAML crews with decorators
- Simplified the template crew

**Features**
- Added knowledge at the agent level
- Removed LangChain
- Improved typed task outputs
- Log in to the Tool Repository on `crewai login`

**Fixes**
- Fixed issues with `result_as_answer` not properly exiting the LLM loop
- Fixed a missing key name when running with the Ollama provider
- Fixed a spelling issue

**Documentation**
- Updated the README for running mypy
- Added knowledge to `mint.json`
- Updated GitHub Actions
- Updated the Agents docs to include two approaches for creating an agent
- Improvements to LLM configuration and usage

**New Features**
- New `before_kickoff` and `after_kickoff` crew callbacks
- Support for pre-seeding agents with knowledge
- Added support for retrieving user preferences and memories using Mem0 (see the sketch at the end of this changelog)

**Fixes**
- Fixed async execution
- Upgraded Chroma and adjusted the embedder function generator
- Updated CLI Watson supported models and docs
- Reduced the level for Bandit
- Fixed all tests

**Documentation**
- Updated docs

**Fixes**
- Fixed a tokens callback replacement bug
- Fixed a step callback issue
- Added cached prompt tokens info to usage metrics
- Fixed the `crew_train_success` test
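As a companion to the Mem0 user-memory entry above, here is a minimal sketch of how a crew might enable it. The `memory_config` shape, the placeholder `user_id`, and the environment variables (`MEM0_API_KEY` plus an LLM key such as `OPENAI_API_KEY`) are assumptions for illustration rather than a definitive configuration.

```python
from crewai import Agent, Crew, Task

# Hypothetical single-agent crew. Assumes MEM0_API_KEY and an LLM key such as
# OPENAI_API_KEY are already set in the environment; "john" is a placeholder user id.
assistant = Agent(
    role="Personal assistant",
    goal="Answer questions using what is known about the user",
    backstory="Keeps track of the user's preferences over time.",
)

recommend = Task(
    description="Recommend a restaurant for dinner tonight.",
    expected_output="One restaurant suggestion with a short reason.",
    agent=assistant,
)

crew = Crew(
    agents=[assistant],
    tasks=[recommend],
    memory=True,
    # The "mem0" provider is assumed to pull stored preferences/memories for this user.
    memory_config={"provider": "mem0", "config": {"user_id": "john"}},
)

result = crew.kickoff()
print(result.raw)
```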