update docs
@@ -693,6 +693,30 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
</AccordionGroup>

## Structured LLM Calls

CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
For example, you can define a Pydantic model to represent the expected response structure and pass it as the `response_format` when instantiating the LLM. The model is then used to convert the LLM output into a structured Python object.

```python Code
from crewai import LLM
from pydantic import BaseModel

# Define the expected structure of the LLM response.
class Dog(BaseModel):
    name: str
    age: int
    breed: str

# Pass the model as response_format so the output is parsed and validated.
llm = LLM(model="gpt-4o", response_format=Dog)

response = llm.call(
    "Analyze the following messages and return the name, age, and breed. "
    "Meet Kona! She is 3 years old and is a black german shepherd."
)
print(response)
```
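The exact shape of `response` can vary by provider: it may come back as an already-parsed object or as a JSON string. As a minimal sketch (assuming the latter, and continuing from the example above), you can validate the output into the `Dog` model yourself with standard Pydantic:

```python Code
from pydantic import ValidationError

# Sketch: assumes `response` from the call above is a JSON string
# matching the Dog schema.
try:
    dog = Dog.model_validate_json(response)
    print(dog.name, dog.age, dog.breed)
except ValidationError as err:
    # The output did not match the schema; handle or retry as needed.
    print(f"Structured output failed validation: {err}")
```

If the provider hands back an already-parsed dictionary instead, `Dog.model_validate(response)` serves the same purpose.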
## Common Issues and Solutions

<Tabs>