---
title: Force Tool Output as Result
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
icon: wrench-simple
---
## Introduction

In CrewAI, you can force the output of a tool to be returned as the result of an agent's task.
This is useful when you need the raw tool output captured and returned as the task result, without the agent modifying it during execution.
## Forcing Tool Output as Result

To force a tool's output to become the result of an agent's task, set the `result_as_answer` parameter to `True` when adding the tool to the agent.
With this parameter set, the tool output is captured and returned as the task result without any modifications by the agent.

Here's an example:
```python Code
from crewai.agent import Agent
from my_tool import MyCustomTool

# Create a coding agent with the custom tool
coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

# Assuming the tool's execution and result population occurs within the system
task_result = coding_agent.execute_task(task)
```
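
The `my_tool` module above is a placeholder. A minimal sketch of such a custom tool, assuming it subclasses CrewAI's `BaseTool` (imported from `crewai.tools` in recent releases, `crewai_tools` in older ones), could look like the following; the tool name, description, and return string are illustrative:

```python Code
from crewai.tools import BaseTool


class MyCustomTool(BaseTool):
    name: str = "My Custom Tool"
    description: str = "Returns a raw report string for a given topic."

    def _run(self, topic: str) -> str:
        # Whatever this method returns is exactly what the task yields
        # once the tool is registered with result_as_answer=True.
        return f"RAW TOOL OUTPUT: report data for {topic}"
```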
## Workflow in Action

<Steps>
  <Step title="Task Execution">
    The agent executes the task using the tool provided.
  </Step>
  <Step title="Tool Output">
    The tool generates the output, which is captured as the task result.
  </Step>
  <Step title="Agent Interaction">
    The agent may reflect on the tool output and take learnings from it, but the output itself is not modified.
  </Step>
  <Step title="Result Return">
    The tool output is returned as the task result without any modifications.
  </Step>
</Steps>
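
To see this workflow end to end, here is a sketch that wires the agent into a crew; the task description and expected output are illustrative placeholders, and `my_tool` is the same hypothetical module used above. Because the tool is added with `result_as_answer=True`, the crew's raw result should match the tool's return value verbatim.

```python Code
from crewai import Agent, Crew, Task
from my_tool import MyCustomTool  # hypothetical module, as in the example above

coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

report_task = Task(
    description="Generate the quarterly AI report.",
    expected_output="The raw report data produced by the tool.",
    agent=coding_agent,
)

crew = Crew(agents=[coding_agent], tasks=[report_task])
result = crew.kickoff()

# With result_as_answer=True, the final answer is the tool output itself,
# not an agent-rewritten summary of it.
print(result.raw)
```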