* feat: add messages to task and agent outputs
- Introduced a new field on the task and agent output classes to capture messages from the last task execution.
- Updated the agent class to store the last messages and provide a property for easy access.
- Enhanced the task and agent output classes to include messages in their outputs.
- Added tests to ensure that messages are correctly included in task outputs and agent outputs during execution.
* use typing_extensions for Python 3.10 compatibility
* feat: add last_messages attribute to agent for improved task tracking
- Introduced a new `last_messages` attribute in the agent class to store messages from the last task execution.
- Updated the `Crew` class to handle the new messages attribute in task outputs.
- Enhanced existing tests to ensure that the `last_messages` attribute is correctly initialized and utilized across various guardrail scenarios.
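A minimal sketch of how these new fields might be read after a run; only the names `messages` and `last_messages` come from the notes above, while the placeholder agent/task and the shape of the message list are assumptions:

```python
from crewai import Agent, Crew, Task

# Placeholder agent and task; the model comes from your default LLM configuration.
researcher = Agent(role="Researcher", goal="Summarize a topic", backstory="Experienced analyst")
summary = Task(
    description="Summarize CrewAI in two sentences",
    expected_output="A short summary",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[summary])

crew.kickoff()

# New fields from this change: `messages` on the task output and
# `last_messages` on the agent. Their exact shape (e.g. a list of
# role/content dicts) is an assumption in this sketch.
print(summary.output.messages)
print(researcher.last_messages)
```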
* fix: add messages field to TaskOutput in tests for consistency
- Updated multiple test cases to include the new `messages` field in the `TaskOutput` instances.
- Ensured that all relevant tests reflect the latest changes in the TaskOutput structure, maintaining consistency across the test suite.
- This change aligns with the recent addition of the `last_messages` attribute in the agent class for improved task tracking.
* feat: preserve messages in task outputs during replay
- Added functionality to the Crew class to store and retrieve messages in task outputs.
- Enhanced the replay mechanism to ensure that messages from stored task outputs are preserved and accessible.
- Introduced a new test case to verify that messages are correctly stored and replayed, ensuring consistency in task execution and output handling.
- This change improves the overall tracking and context retention of task interactions within the CrewAI framework.
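A hedged sketch of the replay behaviour described here, assuming the programmatic `crew.replay(task_id=...)` entry point and that the stored task id is reused:

```python
from crewai import Agent, Crew, Task

writer = Agent(role="Writer", goal="Write short copy", backstory="Copywriter")
haiku = Task(description="Write a haiku about rivers", expected_output="A haiku", agent=writer)
crew = Crew(agents=[writer], tasks=[haiku])

crew.kickoff()

# Replay from the stored task output; reusing the task's own id here is an
# assumption made for illustration.
crew.replay(task_id=str(haiku.id))

# The messages captured with the original output should still be available.
print(haiku.output.messages)
```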
* fix original test; the previous version was left over from debugging
- Added section on LLM-based guardrails, explaining their usage and requirements.
- Updated examples to demonstrate the implementation of multiple guardrails, including both function-based and LLM-based approaches.
- Clarified the distinction between single and multiple guardrails in task configurations.
- Improved explanations of guardrail functionality to ensure better understanding of validation processes.
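For context, a minimal sketch of the two guardrail styles described above, assuming the documented callable signature (returning a `(bool, result)` tuple) and that a plain string on `guardrail` configures an LLM-based check; the multiple-guardrail configuration is omitted because its exact parameter name isn't given here:

```python
from typing import Any, Tuple

from crewai import Agent, Task
from crewai.tasks.task_output import TaskOutput


def max_200_words(output: TaskOutput) -> Tuple[bool, Any]:
    """Function-based guardrail: returns pass/fail plus the validated result."""
    if len(output.raw.split()) <= 200:
        return True, output.raw
    return False, "Response exceeds 200 words; please shorten it."


writer = Agent(role="Writer", goal="Write concise copy", backstory="Copywriter")

# Single function-based guardrail.
short_summary = Task(
    description="Summarize the article in under 200 words",
    expected_output="A concise summary",
    agent=writer,
    guardrail=max_200_words,
)

# LLM-based guardrail expressed as a natural-language rule; passing a plain
# string here is an assumption based on the notes above.
polite_reply = Task(
    description="Write a reply to the customer",
    expected_output="A polite reply",
    agent=writer,
    guardrail="The reply must be polite and must not promise refunds",
)
```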
* docs: update LLM integration details and examples
- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.
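A hedged configuration sketch to go with these docs changes; the model id and the use of `response_format` for structured output are assumptions, not the exact documented examples:

```python
from pydantic import BaseModel

from crewai import LLM


class Headline(BaseModel):
    title: str
    score: float


# Provider-native model configuration; credentials come from environment
# variables such as OPENAI_API_KEY (or AWS credentials when targeting Bedrock).
llm = LLM(
    model="openai/gpt-4o-mini",
    temperature=0.2,
    response_format=Headline,  # ask the provider for structured output
)
```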
* drop this example - should use structured output from task instead
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* docs(cli): document device-code login and config reset guidance; renumber sections
* docs(cli): fix duplicate numbering (renumber Login/API Keys/Configuration sections)
* docs: Fix webhook documentation to include meta dict in all webhook payloads
- Add note explaining that meta objects from kickoff requests are included in all webhook payloads
- Update webhook examples to show proper payload structure including meta field
- Fix webhook examples to match actual API implementation
- Apply changes to English, Korean, and Portuguese documentation
Resolves the documentation gap where meta dict passing to webhooks was not documented despite being implemented in the API.
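To make the payload shape concrete, an illustrative sketch in Python dicts; only the presence of the `meta` object is taken from the note above, the remaining keys and endpoint names are placeholders:

```python
# Kickoff request carrying a caller-defined meta object (illustrative keys only).
kickoff_request = {
    "inputs": {"topic": "quarterly report"},
    "meta": {"request_id": "abc-123", "tenant": "acme"},
    "taskWebhookUrl": "https://example.com/hooks/task",
}

# Every webhook payload echoes the meta object from the kickoff request.
task_webhook_payload = {
    "task_id": "…",
    "output": "…",
    "meta": {"request_id": "abc-123", "tenant": "acme"},
}
```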
* WIP: CrewAI docs theme, changelog, GEO, localization
* docs(cli): fix merge markers; ensure mode: "wide"; convert ASCII tables to Markdown (en/pt-BR/ko)
* docs: add group icons across locales; split Automation/Integrations; update tools overviews and links
* refactor(events): relocate events module & update imports
- Move events from utilities/ to top-level events/ with types/, listeners/, utils/ structure
- Update all source/tests/docs to new import paths
- Add backwards compatibility stubs in crewai.utilities.events with deprecation warnings
- Restore test mocks and fix related test imports
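A minimal sketch of the import change, assuming `crewai_event_bus` is among the re-exported symbols:

```python
# New import location after the relocation to the top-level events package.
from crewai.events import crewai_event_bus

# The old path still resolves through the compatibility stubs but emits a
# DeprecationWarning; the exported symbol name is an assumption in this sketch.
from crewai.utilities.events import crewai_event_bus as legacy_event_bus
```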
* docs: fix API Reference OpenAPI sources and redirects; clarify training data usage; add Mermaid diagram; correct CLI usage and notes
* docs(mintlify): use explicit openapi {source, directory} with absolute paths to fix branch deployment routing
* docs(mintlify): add explicit endpoint MDX pages and include in nav; keep OpenAPI auto-gen as fallback
* docs(mintlify): remove OpenAPI Endpoints groups; add localized MDX endpoint pages for pt-BR and ko
* Dropping User Memory
* Dropping checks for user memory
* Changed memory.mdx documentation; removed user memory
* Flaky Test Case Maybe
* Drop memory_config
* Fixed test cases
* Fixed some test cases
* Changed docs
* Changed BR docs
* Docs fixing
* Fix minor doc
* Fix minor doc
* Fix minor doc
* Added fallback mechanism in Mem0
- Change model format from "gemini/gemini-1.5-pro-latest" to "gemini-1.5-pro-latest" in the Vertex AI section examples
- Update both English and Portuguese documentation files
- Fixes incorrect provider prefix usage for Vertex AI models
- Ensures consistency with Vertex AI provider requirements
Files changed:
- docs/en/concepts/llms.mdx (line 272)
- docs/pt-BR/concepts/llms.mdx (line 270)
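The corrected form, as described in the bullet above (provider setup and credentials omitted):

```python
from crewai import LLM

# Corrected Vertex AI example: the bare model id, without the "gemini/"
# provider prefix that the old docs examples used.
vertex_llm = LLM(model="gemini-1.5-pro-latest")
```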
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
* Changed v1.1 -> v2
* Fixed Test Cases
* Fixed linting issues
* Changed docs
* Refactored the storage
* Fixed test cases
* Fixing run-time checks
* Fixed Test Case
* Updated docs and added test case for custom categories
* Add the TODO back
* Minor Changes
* Added output_format in search
* Minor changes
* Added output_format and version in both search and save
* Small change
* Minor bugs
* Fixed test cases
* Changed docs
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
* docs: add create_directory parameter
* docs: remove string guardrails to focus on function guardrails
* docs: remove get help from docs.json
* docs: update pt-BR docs.json changes
- Mark UserMemory and UserMemoryItem for removal in v0.156.0 or 2025-08-04
- Update all references with deprecation warnings
- Users should migrate to ExternalMemory
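A hedged migration sketch; the `ExternalMemory` import path and the Mem0-style `embedder_config` keys are assumptions:

```python
from crewai import Agent, Crew, Task
from crewai.memory.external.external_memory import ExternalMemory

assistant = Agent(role="Assistant", goal="Help the user", backstory="Support agent")
chat = Task(description="Answer the user's question", expected_output="An answer", agent=assistant)

# Replace the deprecated UserMemory with ExternalMemory; the config keys
# shown here follow the Mem0 example pattern and are assumptions.
crew = Crew(
    agents=[assistant],
    tasks=[chat],
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "john"}}
    ),
)
```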
* Added add_sources()
* Fixed the agent knowledge querying
* Added test cases
* Fixed linting issue
* Fixed logic
* Seems like a flaky test case
* Minor changes
* Added knowledge attribute to the crew documentation
* Flaky test
* fixed spaces
* Flaky Test Case
* Seems like a flaky test case
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
* feat: add capability to track LLM calls by task and agent
This makes it possible to filter or scope LLM events by specific agents or tasks, which can be very useful for debugging or analytics in real-time applications.
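A hedged sketch of what such filtering could look like; the import paths and the agent/task identifier attributes on the event are assumptions, not the exact API added by this change:

```python
from crewai.events import crewai_event_bus
from crewai.events.types.llm_events import LLMCallCompletedEvent


# Scope LLM events to a single agent; the attribute names used for filtering
# are placeholders chosen for illustration.
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_researcher_llm_call(source, event):
    if getattr(event, "agent_role", None) == "Researcher":
        print(f"LLM call completed for task {getattr(event, 'task_id', '<unknown>')}")
```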
* feat: add docs about LLM tracking by Agents and Tasks
* fix incompatible BaseLLM.call method signature
* feat: support to filter LLM Events from Lite Agent
* Adding Nebius to docs
Submitting this PR on behalf of Nebius AI Studio to add Nebius models to the CrewAI documentation.
I tested with the latest CrewAI + Nebius setup to ensure compatibility.
cc @tonykipkemboi
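A hedged sketch of pointing an agent at a Nebius AI Studio model; the `nebius/` prefix, the model id, and the `NEBIUS_API_KEY` variable are assumptions:

```python
from crewai import LLM

# Assumes Nebius AI Studio is reachable through the "nebius/" provider prefix
# and that NEBIUS_API_KEY is set in the environment.
nebius_llm = LLM(model="nebius/meta-llama/Meta-Llama-3.1-70B-Instruct")
```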
* updated LiteLLM page
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
* docs: add pt-br translations
Powered by a CrewAI Flow https://github.com/danielfsbarreto/docs_translator
* Update mcp/overview.mdx Brazilian docs
Its en-US counterpart was updated after I did a pass, so now it includes the new section about @CrewBase.