From bb3829a9ed4886da7a284bc89fc8bd7eda168ce2 Mon Sep 17 00:00:00 2001
From: Matisse <54796148+AMatisse@users.noreply.github.com>
Date: Fri, 21 Mar 2025 15:12:26 -0400
Subject: [PATCH] docs: Update model reference in LLM configuration (#2267)

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
---
 docs/concepts/llms.mdx | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/docs/concepts/llms.mdx b/docs/concepts/llms.mdx
index e17098f6a..2aada5fac 100644
--- a/docs/concepts/llms.mdx
+++ b/docs/concepts/llms.mdx
@@ -59,7 +59,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
       goal: Conduct comprehensive research and analysis
       backstory: A dedicated research professional with years of experience
       verbose: true
-      llm: openai/gpt-4o-mini # your model here
+      llm: openai/gpt-4o-mini # your model here # (see provider configuration examples below for more)
     ```
 
 
@@ -111,7 +111,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 
 ## Provider Configuration Examples
 
-CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities. 
+CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
 In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
 
 
@@ -121,7 +121,7 @@ In this section, you'll find detailed examples that help you select, configure,
    ```toml Code
    # Required
    OPENAI_API_KEY=sk-...
-   
+
    # Optional
    OPENAI_API_BASE=
    OPENAI_ORGANIZATION=
@@ -226,7 +226,7 @@ In this section, you'll find detailed examples that help you select, configure,
    AZURE_API_KEY=
    AZURE_API_BASE=
    AZURE_API_VERSION=
-   
+
    # Optional
    AZURE_AD_TOKEN=
    AZURE_API_TYPE=
@@ -289,7 +289,7 @@ In this section, you'll find detailed examples that help you select, configure,
    | Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
 
-   
+
    ```toml Code
    AWS_ACCESS_KEY_ID=
@@ -474,7 +474,7 @@ In this section, you'll find detailed examples that help you select, configure,
    WATSONX_URL=
    WATSONX_APIKEY=
    WATSONX_PROJECT_ID=
-   
+
    # Optional
    WATSONX_TOKEN=
    WATSONX_DEPLOYMENT_SPACE_ID=
@@ -491,7 +491,7 @@ In this section, you'll find detailed examples that help you select, configure,
    1. Install Ollama: [ollama.ai](https://ollama.ai/)
 
-   2. Run a model: `ollama run llama2`
+   2. Run a model: `ollama run llama3`
 
    3. Configure:
 
    ```python Code
@@ -600,7 +600,7 @@ In this section, you'll find detailed examples that help you select, configure,
    ```toml Code
    OPENROUTER_API_KEY=
    ```
-   
+
    Example usage in your CrewAI project:
    ```python Code
    llm = LLM(
@@ -723,7 +723,7 @@ Learn how to get the most out of your LLM configuration:
    - Small tasks (up to 4K tokens): Standard models
    - Medium tasks (between 4K-32K): Enhanced models
    - Large tasks (over 32K): Large context models
-   
+
    ```python
    # Configure model with appropriate settings
    llm = LLM(
@@ -760,11 +760,11 @@ Learn how to get the most out of your LLM configuration:
 
    Most authentication issues can be resolved by checking API key format and environment variable names.
 
-   
+
    ```bash
    # OpenAI
    OPENAI_API_KEY=sk-...
-   
+
    # Anthropic
    ANTHROPIC_API_KEY=sk-ant-...
    ```
@@ -773,11 +773,11 @@ Learn how to get the most out of your LLM configuration:
 
    Always include the provider prefix in model names
 
-   
+
    ```python
    # Correct
    llm = LLM(model="openai/gpt-4")
-   
+
    # Incorrect
    llm = LLM(model="gpt-4")
    ```
@@ -786,5 +786,10 @@ Learn how to get the most out of your LLM configuration:
 
    Use larger context models for extensive tasks
 
+
+   ```python
+   # Large context model
+   llm = LLM(model="openai/gpt-4o")  # 128K tokens
+   ```