From be245596302d64f9d40a4f89b3caf1d76f79bb5e Mon Sep 17 00:00:00 2001
From: Young Han <110819238+seyeong-han@users.noreply.github.com>
Date: Fri, 23 May 2025 11:57:59 -0700
Subject: [PATCH] Support Llama API in crewAI (#2825)

* init: support llama-api in crewAI
* docs: add comments for clarity

---------

Co-authored-by: Lucas Gomide
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
---
 docs/concepts/llms.mdx | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/docs/concepts/llms.mdx b/docs/concepts/llms.mdx
index 7802c4964..80b69ce11 100644
--- a/docs/concepts/llms.mdx
+++ b/docs/concepts/llms.mdx
@@ -152,6 +152,39 @@ In this section, you'll find detailed examples that help you select, configure,
 | o1 | 200,000 tokens | Fast reasoning, complex reasoning |

 Meta's Llama API provides access to Meta's family of large language models.
 The API is available through the [Meta Llama API](https://llama.developer.meta.com?utm_source=partner-crewai&utm_medium=website).
 Set the following environment variable in your `.env` file:

 ```toml Code
 # Meta Llama API Key Configuration
 LLAMA_API_KEY=LLM|your_api_key_here
 ```

 Example usage in your CrewAI project:

 ```python Code
 from crewai import LLM

 # Initialize Meta Llama LLM
 llm = LLM(
     model="meta_llama/Llama-4-Scout-17B-16E-Instruct-FP8",
     temperature=0.8,
     stop=["END"],
     seed=42
 )
 ```

 All models listed in the [Meta Llama API model catalog](https://llama.developer.meta.com/docs/models/) are supported.
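 Because the documented key format carries an `LLM|` prefix, a quick sanity check before constructing the `LLM` can catch a misconfigured `.env` file early. The following is a minimal sketch; the `llama_key_looks_valid` helper is hypothetical and not part of crewAI:

 ```python Code
 import os

 def llama_key_looks_valid(key: str | None) -> bool:
     """Return True if the key matches the documented 'LLM|...' format.

     Hypothetical helper for illustration only; it checks shape, not validity
     against the Llama API itself.
     """
     return bool(key) and key.startswith("LLM|") and len(key) > len("LLM|")

 # Example: warn before handing the key to crewAI
 if not llama_key_looks_valid(os.getenv("LLAMA_API_KEY")):
     print("Warning: LLAMA_API_KEY is missing or not in the 'LLM|...' format")
 ```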
 | Model ID | Input context length | Output context length | Input Modalities | Output Modalities |
 | --- | --- | --- | --- | --- |
 | `meta_llama/Llama-4-Scout-17B-16E-Instruct-FP8` | 128k | 4028 | Text, Image | Text |
 | `meta_llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 128k | 4028 | Text, Image | Text |
 | `meta_llama/Llama-3.3-70B-Instruct` | 128k | 4028 | Text | Text |
 | `meta_llama/Llama-3.3-8B-Instruct` | 128k | 4028 | Text | Text |

 ```toml Code
 # Required