mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-01-24 07:38:14 +00:00
chore: native files and openai responses docs
Some checks are pending
Build uv cache / build-cache (3.10) (push) Waiting to run
Build uv cache / build-cache (3.11) (push) Waiting to run
Build uv cache / build-cache (3.12) (push) Waiting to run
Build uv cache / build-cache (3.13) (push) Waiting to run
CodeQL Advanced / Analyze (actions) (push) Waiting to run
CodeQL Advanced / Analyze (python) (push) Waiting to run
Check Documentation Broken Links / Check broken links (push) Waiting to run
Notify Downstream / notify-downstream (push) Waiting to run
@@ -207,6 +207,37 @@ In this section, you'll find detailed examples that help you select, configure,
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |

**Responses API:**

OpenAI offers two APIs: Chat Completions (the default) and the newer Responses API. The Responses API was designed from the ground up with native multimodal support: text, images, audio, and function calls are all first-class citizens. It provides better performance with reasoning models and supports additional features like auto-chaining and built-in tools.

```python Code
from crewai import LLM

# Use the Responses API instead of Chat Completions
llm = LLM(
    model="openai/gpt-4o",
    api="responses",   # Enable Responses API
    store=True,        # Store responses for multi-turn (optional)
    auto_chain=True,   # Auto-chain for reasoning models (optional)
)
```
**Responses API Parameters:**

- `api`: Set to `"responses"` to use the Responses API (default: `"completions"`)
- `instructions`: System-level instructions (Responses API only)
- `store`: Whether to store responses for multi-turn conversations
- `previous_response_id`: ID of the previous response in a multi-turn conversation
- `include`: Additional data to include in the response (e.g., `["reasoning.encrypted_content"]`)
- `builtin_tools`: List of OpenAI built-in tools: `"web_search"`, `"file_search"`, `"code_interpreter"`, `"computer_use"`
- `parse_tool_outputs`: Return a structured `ResponsesAPIResult` with parsed built-in tool outputs
- `auto_chain`: Automatically track and reuse response IDs for multi-turn conversations
- `auto_chain_reasoning`: Track encrypted reasoning items for ZDR (Zero Data Retention) compliance
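
For illustration, here is a minimal sketch that combines several of the parameters above on a single `LLM`. The parameter names come from the list; the specific model, tool selection, and flag values are illustrative assumptions, not requirements:

```python Code
from crewai import LLM

# Hypothetical configuration: Responses API with an OpenAI built-in tool
llm = LLM(
    model="openai/gpt-4o",         # Illustrative model choice
    api="responses",               # Use the Responses API
    builtin_tools=["web_search"],  # Let OpenAI run its built-in web search tool
    parse_tool_outputs=True,       # Return a structured ResponsesAPIResult
    store=True,                    # Keep responses available for multi-turn chaining
    auto_chain=True,               # Reuse response IDs automatically across turns
)
```

With `parse_tool_outputs=True`, results come back as a `ResponsesAPIResult` with the built-in tool outputs already parsed, per the parameter list above.
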
<Tip>
Use the Responses API for new projects, especially when working with reasoning models (o1, o3, o4) or when you need native multimodal support for [files](/en/concepts/files).
</Tip>

**Note:** To use OpenAI, install the required dependencies:

```bash
uv add "crewai[openai]"
```