From 836e9fc5450d26eeb4d32ccbecc158da03d5d9b8 Mon Sep 17 00:00:00 2001
From: Mark McDonald
Date: Tue, 6 May 2025 21:27:14 +0800
Subject: [PATCH] Removes model provider defaults from LLM Setup (#2766)

This removes any specific model from the "Setting up your LLM" guide,
but provides examples for the top-3 providers.

This section also conflated "model selection" with "model
configuration", where configuration is provider-specific, so I've
focused this first section on just model selection, deferring the
config to the "provider" section that follows.

Co-authored-by: Tony Kipkemboi
---
 docs/concepts/llms.mdx | 65 +++++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 35 deletions(-)

diff --git a/docs/concepts/llms.mdx b/docs/concepts/llms.mdx
index 560448f21..cefc2705a 100644
--- a/docs/concepts/llms.mdx
+++ b/docs/concepts/llms.mdx
@@ -27,23 +27,19 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
 
-## Setting Up Your LLM
+## Setting up your LLM
 
-There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
+There are different places in CrewAI code where you can specify the model to use. Once you specify the model you are using, you will need to provide the configuration (like an API key) for each of the model providers you use. See the [provider configuration examples](#provider-configuration-examples) section for your provider.
 
-    The simplest way to get started. Set these variables in your environment:
+    The simplest way to get started. Set the model in your environment directly, through an `.env` file or in your app code. If you used `crewai create` to bootstrap your project, it will be set already.
 
-    ```bash
-    # Required: Your API key for authentication
-    OPENAI_API_KEY=
+    ```bash .env
+    MODEL=model-id  # e.g. gpt-4o, gemini-2.0-flash, claude-3-sonnet-...
 
-    # Optional: Default model selection
-    OPENAI_MODEL_NAME=gpt-4o-mini  # Default if not set
-
-    # Optional: Organization ID (if applicable)
-    OPENAI_ORGANIZATION_ID=
+    # Be sure to set your API keys here too. See the Provider
+    # section below.
     ```
 
@@ -53,13 +49,13 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 
     Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
 
-    ```yaml
+    ```yaml agents.yaml {6}
     researcher:
      role: Research Specialist
      goal: Conduct comprehensive research and analysis
      backstory: A dedicated research professional with years of experience
      verbose: true
-     llm: openai/gpt-4o-mini  # your model here
+     llm: provider/model-id  # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
+     # (see provider configuration examples below for more)
     ```
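
The `llm` string in the YAML hunk above can also be passed straight to an agent constructed in code; CrewAI's `Agent` accepts either a model string or an `LLM` instance for `llm`. A minimal sketch of that equivalence, assuming the placeholder `provider/model-id` is swapped for a real model string such as `openai/gpt-4o`:

```python
from crewai import Agent

# The same researcher as in agents.yaml, built in code. The llm value is a
# placeholder string; substitute a real one, e.g. "openai/gpt-4o".
researcher = Agent(
    role="Research Specialist",
    goal="Conduct comprehensive research and analysis",
    backstory="A dedicated research professional with years of experience",
    verbose=True,
    llm="provider/model-id",
)
```
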
@@ -74,23 +70,23 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 
     For maximum flexibility, configure LLMs directly in your Python code:
 
-    ```python
+    ```python {4,8}
     from crewai import LLM
 
     # Basic configuration
-    llm = LLM(model="gpt-4")
+    llm = LLM(model="model-id-here")  # gpt-4o, gemini-2.0-flash, anthropic/claude...
 
     # Advanced configuration with detailed parameters
     llm = LLM(
-        model="gpt-4o-mini",
+        model="model-id-here",  # gpt-4o, gemini-2.0-flash, anthropic/claude...
         temperature=0.7,        # Higher for more creative outputs
-        timeout=120,           # Seconds to wait for response
-        max_tokens=4000,       # Maximum length of response
-        top_p=0.9,             # Nucleus sampling parameter
-        frequency_penalty=0.1, # Reduce repetition
-        presence_penalty=0.1,  # Encourage topic diversity
+        timeout=120,            # Seconds to wait for response
+        max_tokens=4000,        # Maximum length of response
+        top_p=0.9,              # Nucleus sampling parameter
+        frequency_penalty=0.1,  # Reduce repetition
+        presence_penalty=0.1,   # Encourage topic diversity
         response_format={"type": "json"},  # For structured outputs
-        seed=42                # For reproducible results
+        seed=42                 # For reproducible results
     )
     ```
 
@@ -110,7 +106,6 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 
 ## Provider Configuration Examples
 
-
 CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
 In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
 
@@ -407,19 +402,19 @@ In this section, you'll find detailed examples that help you select, configure,
 
-    
-    NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux). 
-    This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services. 
+
+    NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
+    This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
     Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
-    
+
     Here is a step-by-step guide to setting up a local NVIDIA NIM model:
-    
+
     1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
     2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
     3. Configure your crewai local models:
-    
+
     ```python Code
     from crewai.llm import LLM
 
@@ -441,7 +436,7 @@ In this section, you'll find detailed examples that help you select, configure,
             config=self.agents_config['researcher'], # type: ignore[index]
             llm=local_nvidia_nim_llm
         )
-    
+
         # ...
     ```
 
@@ -637,19 +632,19 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
 
 When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
 
-    
+
     CrewAI emits events for each chunk received during streaming:
-    
+
     ```python
     from crewai import LLM
     from crewai.utilities.events import EventHandler, LLMStreamChunkEvent
-    
+
     class MyEventHandler(EventHandler):
         def on_llm_stream_chunk(self, event: LLMStreamChunkEvent):
            # Process each chunk as it arrives
            print(f"Received chunk: {event.chunk}")
-    
+
     # Register the event handler
     from crewai.utilities.events import crewai_event_bus
     crewai_event_bus.register_handler(MyEventHandler())
 
@@ -785,7 +780,7 @@ Learn how to get the most out of your LLM configuration:
 
       Use larger context models for extensive tasks
-      
+
       ```python
       # Large context model
      llm = LLM(model="openai/gpt-4o")  # 128K tokens
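
The streaming hunk above shows how chunks are consumed, not how streaming is switched on. A minimal sketch of the enabling side, assuming the `stream` flag on `LLM` (the parameter name is taken from current CrewAI releases, so treat it as an assumption against older versions):

```python
from crewai import LLM

# Deliver the response incrementally; each chunk is then surfaced through
# the event bus as in the streaming example above.
llm = LLM(
    model="openai/gpt-4o",
    stream=True,  # assumed flag name; streaming is off by default
)
```
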