Lorenze/fix google vertex api using api keys (#4243)

* supporting Vertex AI through API key use - Express mode

* docs update here

* docs translations

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Lorenze Jay authored 2026-01-20 09:34:36 -08:00, committed by GitHub
parent ceef062426, commit b267bb4054
6 changed files with 293 additions and 30 deletions


@@ -375,10 +375,13 @@ In this section, you'll find detailed examples that help you select, configure,
 GOOGLE_API_KEY=<your-api-key>
 GEMINI_API_KEY=<your-api-key>
-# Optional - for Vertex AI
+# For Vertex AI Express mode (API key authentication)
+GOOGLE_GENAI_USE_VERTEXAI=true
+GOOGLE_API_KEY=<your-api-key>
+# For Vertex AI with service account
 GOOGLE_CLOUD_PROJECT=<your-project-id>
 GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
 GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
 ```
 **Basic Usage:**
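As a quick orientation, here is a minimal sketch of how the variables above are consumed at runtime. It assumes `python-dotenv` is used to load the `.env` file; any mechanism that exports the variables into the environment works equally well, and the model name simply mirrors the examples in this section.

```python
import os

from dotenv import load_dotenv
from crewai import LLM

load_dotenv()  # pull GOOGLE_API_KEY / GOOGLE_GENAI_USE_VERTEXAI / etc. from .env

# Report which authentication path the variables select (informational only).
if os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "").lower() == "true":
    print("Gemini calls will go through Vertex AI")
else:
    print("Gemini calls will go through the Gemini API")

llm = LLM(model="gemini/gemini-2.0-flash", temperature=0.7)
```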
@@ -412,7 +415,35 @@ In this section, you'll find detailed examples that help you select, configure,
 )
 ```
-**Vertex AI Configuration:**
+**Vertex AI Express Mode (API Key Authentication):**
+Vertex AI Express mode allows you to use Vertex AI with simple API key authentication instead of service account credentials. This is the quickest way to get started with Vertex AI.
+To enable Express mode, set both environment variables in your `.env` file:
+```toml .env
+GOOGLE_GENAI_USE_VERTEXAI=true
+GOOGLE_API_KEY=<your-api-key>
+```
+Then use the LLM as usual:
+```python Code
+from crewai import LLM
+llm = LLM(
+    model="gemini/gemini-2.0-flash",
+    temperature=0.7
+)
+```
+<Info>
+To get an Express mode API key:
+- New Google Cloud users: Get an [Express mode API key](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstart?usertype=apikey)
+- Existing Google Cloud users: Get a [Google Cloud API key bound to a service account](https://cloud.google.com/docs/authentication/api-keys)
+For more details, see the [Vertex AI Express mode documentation](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/start/quickstart?usertype=apikey).
+</Info>
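The Express mode LLM can also back a full agent workflow. A brief sketch under the same assumptions (`GOOGLE_GENAI_USE_VERTEXAI=true` and `GOOGLE_API_KEY` already exported); the role, goal, and task text are purely illustrative:

```python
from crewai import Agent, Crew, LLM, Task

# Same LLM as above; Express mode is picked up from the environment.
llm = LLM(model="gemini/gemini-2.0-flash", temperature=0.7)

researcher = Agent(
    role="Research Analyst",  # illustrative role
    goal="Summarize a topic in three bullet points",
    backstory="A concise analyst who only reports what it was given.",
    llm=llm,
)

task = Task(
    description="Summarize the benefits of Vertex AI Express mode.",
    expected_output="Three short bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```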
+**Vertex AI Configuration (Service Account):**
 ```python Code
 from crewai import LLM
@@ -424,10 +455,10 @@ In this section, you'll find detailed examples that help you select, configure,
 ```
 **Supported Environment Variables:**
-- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
-- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
+- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API and Vertex AI Express mode)
+- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI (required for Express mode)
+- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI with service account)
 - `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
-- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
 **Features:**
 - Native function calling support for Gemini 1.5+ and 2.x models
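For the service-account path, the supported environment variables listed above can also be exported in-process before the `LLM` is constructed. A minimal sketch; the project ID is a placeholder, and the service-account credentials are assumed to be discoverable through Application Default Credentials (for example via `GOOGLE_APPLICATION_CREDENTIALS`):

```python
import os

from crewai import LLM

# Service-account Vertex AI: point at your project and region before
# constructing the LLM. The values below are placeholders.
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"
os.environ["GOOGLE_CLOUD_PROJECT"] = "my-gcp-project"   # placeholder project ID
os.environ["GOOGLE_CLOUD_LOCATION"] = "us-central1"     # default region per the list above
# Credentials are assumed to come from Application Default Credentials,
# e.g. GOOGLE_APPLICATION_CREDENTIALS pointing at a service-account key file.

llm = LLM(model="gemini/gemini-2.0-flash", temperature=0.7)
```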