From bef59715987ba52c94710b0c0c77745fb4ce5185 Mon Sep 17 00:00:00 2001
From: Vidit Ostwal <110953813+Vidit-Ostwal@users.noreply.github.com>
Date: Sun, 18 May 2025 03:11:12 +0530
Subject: [PATCH] Added Stop parameter docs (#2854)

---
 docs/concepts/llms.mdx | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/docs/concepts/llms.mdx b/docs/concepts/llms.mdx
index cef763146..eb2bf38ee 100644
--- a/docs/concepts/llms.mdx
+++ b/docs/concepts/llms.mdx
@@ -791,6 +791,24 @@ Learn how to get the most out of your LLM configuration:
 
 Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
 
+
+
+CrewAI uses LiteLLM internally for LLM calls, which lets you drop parameters that are not supported by your chosen model. This can help simplify your code and reduce the complexity of your LLM configuration.
+For example, if your model does not support the `stop` parameter, you can drop it from every request:
+
+```python
+from crewai import LLM
+import os
+
+os.environ["OPENAI_API_KEY"] = ""
+
+o3_llm = LLM(
+    model="o3",
+    drop_params=True,
+    additional_drop_params=["stop"]
+)
+```
+
 ## Common Issues and Solutions