diff --git a/docs/concepts/llms.mdx b/docs/concepts/llms.mdx
index 0358308f4..baba89ed0 100644
--- a/docs/concepts/llms.mdx
+++ b/docs/concepts/llms.mdx
@@ -693,6 +693,31 @@ Learn how to get the most out of your LLM configuration:
+## Structured LLM Calls
+
+CrewAI supports structured responses from LLM calls by letting you define a `response_format` with a Pydantic model. The framework automatically parses and validates the output against that model, so you can use the response in your application without manual post-processing.
+
+For example, you can define a Pydantic model that represents the expected response structure and pass it as the `response_format` when instantiating the LLM. The model is then used to convert the LLM output into a structured Python object.
+
+```python Code
+from crewai import LLM
+from pydantic import BaseModel
+
+class Dog(BaseModel):
+    name: str
+    age: int
+    breed: str
+
+
+llm = LLM(model="gpt-4o", response_format=Dog)
+
+response = llm.call(
+    "Analyze the following messages and return the name, age, and breed. "
+    "Meet Kona! She is 3 years old and is a black German Shepherd."
+)
+print(response)
+```
+
 ## Common Issues and Solutions