Compare commits


1 commit

| Author | SHA1 | Message | Date |
|----------|------------|-----------------------------------------------------------|----------------------------|
| Rip&Tear | 80837b145f | Update security.md (update policy for better readability) | 2025-05-08 12:20:03 +08:00 |

12 changed files with 23 additions and 136 deletions

View File

@@ -13,9 +13,8 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
  fail-fast: true # Stop testing on subsequent Python versions if an earlier one fails
  matrix:
    python-version: ['3.10', '3.11', '3.12', '3.13']
    python-version: ['3.10', '3.11', '3.12']
steps:
  - name: Checkout code
    uses: actions/checkout@v4
@@ -29,12 +28,7 @@ jobs:
    run: uv python install ${{ matrix.python-version }}
  - name: Install the project
    if: matrix.python-version != '3.13'
    run: uv sync --dev --all-extras
  - name: Install the project (Python 3.13)
    if: matrix.python-version == '3.13'
    run: uv sync --dev # Skip all-extras for Python 3.13 due to onnxruntime compatibility
  - name: Run tests
    run: uv run pytest --block-network --timeout=60 -vv
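The conditional install step above, removed now that 3.13 is out of the matrix, amounted to a simple branch on the interpreter version. A sketch in Python (`install_command` is an illustrative name, not part of the CI config):

```python
def install_command(python_version: str) -> str:
    """Mirror the workflow's conditional install step: Python 3.13
    skipped --all-extras because of an onnxruntime compatibility gap."""
    if python_version == "3.13":
        return "uv sync --dev"
    return "uv sync --dev --all-extras"
```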

View File

@@ -1,22 +0,0 @@
FROM python:3.13-slim-bookworm

WORKDIR /app

# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 CMD python -c "import crewai" || exit 1

# Improved installation process
RUN pip install --upgrade pip && pip install --no-cache-dir -U pip setuptools wheel && rm -rf /root/.cache/pip/* \
    || { echo 'Installation failed'; exit 1; }

# Copy the current directory contents into the container
COPY . /app/

# Install the package with error handling
RUN pip install -e . || { echo 'Package installation failed'; exit 1; }

# Version confirmation
RUN python -c "import crewai; assert crewai.__version__, 'Version check failed'"

# Test importing the package
CMD ["python", "-c", "import crewai; print(f'Successfully imported crewai version {crewai.__version__}')"]
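The deleted Dockerfile's version-confirmation step can be reproduced outside the container with a small helper (a sketch; `confirm_package` is a hypothetical name, and it assumes the package was installed with pip):

```python
import importlib
import importlib.metadata


def confirm_package(name: str) -> str:
    """Import a package and return its installed version, mirroring the
    Dockerfile's `python -c "import crewai; assert crewai.__version__"` step."""
    importlib.import_module(name)  # fails fast if the import itself is broken
    ver = importlib.metadata.version(name)  # PackageNotFoundError if not installed
    assert ver, "Version check failed"
    return ver


# Inside the image this would be: confirm_package("crewai")
```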

View File

@@ -169,55 +169,19 @@ In this section, you'll find detailed examples that help you select, configure,
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
<Accordion title="Google">
Set the following environment variables in your `.env` file:
```toml .env
```toml Code
# Option 1: Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Example usage in your CrewAI project:
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
)
```
### Gemini models
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console, save them to a JSON file, then load them with the following code:
Get credentials from your Google Cloud Console and save it to a JSON file with the following code:
```python Code
import json
@@ -241,18 +205,14 @@ In this section, you'll find detailed examples that help you select, configure,
vertex_credentials=vertex_credentials_json
)
```
Google offers a range of powerful models optimized for different use cases:
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| Model | Context Window | Best For |
|-----------------------|----------------|------------------------------------------------------------------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>
<Accordion title="Azure">

View File

@@ -68,13 +68,7 @@ We'll create a CrewAI application where two agents collaborate to research and w
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool
from openinference.instrumentation.crewai import CrewAIInstrumentor
from phoenix.otel import register
# setup monitoring for your crew
tracer_provider = register(
endpoint="http://localhost:6006/v1/traces")
CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
search_tool = SerperDevTool()
# Define your agents with roles and goals

View File

@@ -3,7 +3,7 @@ name = "crewai"
version = "0.119.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.14"
requires-python = ">=3.10,<3.13"
authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
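The tightened `requires-python` bound can be mirrored by a runtime check using only the standard library (a sketch; the tuple bounds encode `>=3.10,<3.13`):

```python
import sys

MIN_VERSION = (3, 10)
MAX_VERSION = (3, 13)  # exclusive upper bound, matching requires-python = ">=3.10,<3.13"


def supported(version_info=None) -> bool:
    """Return True when the interpreter falls inside the declared range."""
    info = sys.version_info if version_info is None else version_info
    return MIN_VERSION <= tuple(info[:2]) < MAX_VERSION
```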

View File

@@ -13,7 +13,7 @@ ENV_VARS = {
],
"gemini": [
{
"prompt": "Enter your GEMINI API key from https://ai.dev/apikey (press Enter to skip)",
"prompt": "Enter your GEMINI API key (press Enter to skip)",
"key_name": "GEMINI_API_KEY",
}
],

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.119.0,<1.0.0"
]

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.119.0,<1.0.0",
]

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.119.0"
]

View File

@@ -231,7 +231,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<3.14"
requires-python = ">=3.10,<3.13"
dependencies = ["crewai"]
""",
)
@@ -250,7 +250,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<3.14"
requires-python = ">=3.10,<3.13"
dependencies = ["crewai"]
""",
)

View File

@@ -1,39 +0,0 @@
"""Tests for Python version compatibility."""
import sys
import pytest
from packaging import version
def validate_python_version():
"""Validate that the current Python version is supported."""
min_version = (3, 10)
max_version = (3, 14)
current = sys.version_info[:2]
if not (min_version <= current < max_version):
raise RuntimeError(
f"This package requires Python {min_version[0]}.{min_version[1]} to "
f"{max_version[0]}.{max_version[1]-1}. You have Python {current[0]}.{current[1]}"
)
def test_python_version_compatibility():
"""Test that the package supports the current Python version."""
assert isinstance(sys.version_info, tuple), "Version Information must be a tuple"
current_version = version.parse(f"{sys.version_info.major}.{sys.version_info.minor}")
assert current_version >= version.parse("3.10"), "Python version too old"
assert current_version < version.parse("3.14"), "Python version too new"
# This test will fail if the package doesn't support the current Python version
import crewai
# Print the Python version for debugging
print(f"Python version: {sys.version}")
# Print the crewai version for debugging
print(f"CrewAI version: {crewai.__version__}")
# Validate Python version
validate_python_version()

uv.lock generated
View File

@@ -1,5 +1,5 @@
version = 1
requires-python = ">=3.10, <3.14"
requires-python = ">=3.10, <3.13"
resolution-markers = [
"python_full_version < '3.11' and platform_python_implementation == 'PyPy' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and platform_system == 'Linux' and sys_platform == 'darwin'",