---
title: "Overview"
description: "Leverage AI services, generate images, process vision, and build intelligent systems"
icon: "face-smile"
---
These tools integrate with AI and machine learning services to enhance your agents with advanced capabilities like image generation, vision processing, and intelligent code execution.
## **Available Tools**
<CardGroup cols={2}>
<Card title="DALL-E Tool" icon="image" href="/tools/ai-ml/dalletool">
Generate AI images using OpenAI's DALL-E model.
</Card>
<Card title="Vision Tool" icon="eye" href="/tools/ai-ml/visiontool">
Process and analyze images with computer vision capabilities.
</Card>
<Card title="AI Mind Tool" icon="brain" href="/tools/ai-ml/aimindtool">
Query data sources in natural language using MindsDB AI Minds.
</Card>
<Card title="LlamaIndex Tool" icon="llama" href="/tools/ai-ml/llamaindextool">
Build knowledge bases and retrieval systems with LlamaIndex.
</Card>
<Card title="LangChain Tool" icon="link" href="/tools/ai-ml/langchaintool">
Integrate with LangChain for complex AI workflows.
</Card>
<Card title="RAG Tool" icon="database" href="/tools/ai-ml/ragtool">
Implement Retrieval-Augmented Generation systems.
</Card>
<Card title="Code Interpreter Tool" icon="code" href="/tools/ai-ml/codeinterpretertool">
Execute Python code and perform data analysis.
</Card>
</CardGroup>
## **Common Use Cases**
- **Content Generation**: Create images, text, and multimedia content
- **Data Analysis**: Execute code and analyze complex datasets
- **Knowledge Systems**: Build RAG systems and intelligent databases
- **Computer Vision**: Process and understand visual content
- **AI Safety**: Implement content moderation and safety checks
## **Quick Start Example**

```python
from crewai import Agent
from crewai_tools import DallETool, VisionTool, CodeInterpreterTool

# Create AI tools
image_generator = DallETool()
vision_processor = VisionTool()
code_executor = CodeInterpreterTool()

# Add them to your agent
agent = Agent(
    role="AI Specialist",
    goal="Create and analyze content using AI capabilities",
    backstory="An expert in AI-driven content generation and analysis.",
    tools=[image_generator, vision_processor, code_executor],
)
```