Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-16 11:38:31 +00:00)

Compare commits: 1.8.1...lorenze/en (23 commits)
| SHA1 |
|---|
| 64052745b7 |
| 7f7b5094cc |
| ad83e8a2bf |
| e44d778e0e |
| 601eda9095 |
| 83c62a65dd |
| 5645cbb22e |
| 3a1deb193a |
| 09185acc0d |
| 6541f01b1b |
| 3a6702e9c8 |
| e4bd7889fd |
| 842a1db16f |
| e9b86100c7 |
| 341812d58e |
| 38db734561 |
| 5048d54981 |
| ae17178e86 |
| b7a13e15ff |
| 13dc7e25e0 |
| 5cef85c643 |
| dc3ae9396d |
| 0029f8193c |
1 .gitignore (vendored)
@@ -26,3 +26,4 @@ plan.md
conceptual_plan.md
build_image
chromadb-*.lock
+.claude
@@ -429,7 +429,8 @@
          "group": "How-To Guides",
          "pages": [
            "en/enterprise/guides/build-crew",
-           "en/enterprise/guides/deploy-crew",
+           "en/enterprise/guides/prepare-for-deployment",
+           "en/enterprise/guides/deploy-to-amp",
            "en/enterprise/guides/kickoff-crew",
            "en/enterprise/guides/update-crew",
            "en/enterprise/guides/enable-crew-studio",
@@ -864,7 +865,8 @@
          "group": "Guias",
          "pages": [
            "pt-BR/enterprise/guides/build-crew",
-           "pt-BR/enterprise/guides/deploy-crew",
+           "pt-BR/enterprise/guides/prepare-for-deployment",
+           "pt-BR/enterprise/guides/deploy-to-amp",
            "pt-BR/enterprise/guides/kickoff-crew",
            "pt-BR/enterprise/guides/update-crew",
            "pt-BR/enterprise/guides/enable-crew-studio",
@@ -1326,7 +1328,8 @@
          "group": "How-To Guides",
          "pages": [
            "ko/enterprise/guides/build-crew",
-           "ko/enterprise/guides/deploy-crew",
+           "ko/enterprise/guides/prepare-for-deployment",
+           "ko/enterprise/guides/deploy-to-amp",
            "ko/enterprise/guides/kickoff-crew",
            "ko/enterprise/guides/update-crew",
            "ko/enterprise/guides/enable-crew-studio",
@@ -1514,6 +1517,18 @@
      "source": "/enterprise/:path*",
      "destination": "/en/enterprise/:path*"
    },
+   {
+     "source": "/en/enterprise/guides/deploy-crew",
+     "destination": "/en/enterprise/guides/deploy-to-amp"
+   },
+   {
+     "source": "/ko/enterprise/guides/deploy-crew",
+     "destination": "/ko/enterprise/guides/deploy-to-amp"
+   },
+   {
+     "source": "/pt-BR/enterprise/guides/deploy-crew",
+     "destination": "/pt-BR/enterprise/guides/deploy-to-amp"
+   },
    {
      "source": "/api-reference/:path*",
      "destination": "/en/api-reference/:path*"
@@ -1,12 +1,12 @@
---
-title: "Deploy Crew"
-description: "Deploying a Crew on CrewAI AMP"
+title: "Deploy to AMP"
+description: "Deploy your Crew or Flow to CrewAI AMP"
icon: "rocket"
mode: "wide"
---

<Note>
-  After creating a crew locally or through Crew Studio, the next step is
+  After creating a Crew or Flow locally (or through Crew Studio), the next step is
  deploying it to the CrewAI AMP platform. This guide covers multiple deployment
  methods to help you choose the best approach for your workflow.
</Note>
@@ -14,19 +14,26 @@ mode: "wide"
## Prerequisites

<CardGroup cols={2}>
-  <Card title="Crew Ready for Deployment" icon="users">
-    You should have a working crew either built locally or created through Crew
-    Studio
+  <Card title="Project Ready for Deployment" icon="check-circle">
+    You should have a working Crew or Flow that runs successfully locally.
+    Follow our [preparation guide](/en/enterprise/guides/prepare-for-deployment) to verify your project structure.
  </Card>
  <Card title="GitHub Repository" icon="github">
-    Your crew code should be in a GitHub repository (for GitHub integration
+    Your code should be in a GitHub repository (for GitHub integration
    method)
  </Card>
</CardGroup>

+<Info>
+  **Crews vs Flows**: Both project types can be deployed as "automations" on CrewAI AMP.
+  The deployment process is the same, but they have different project structures.
+  See [Prepare for Deployment](/en/enterprise/guides/prepare-for-deployment) for details.
+</Info>
+
## Option 1: Deploy Using CrewAI CLI

-The CLI provides the fastest way to deploy locally developed crews to the Enterprise platform.
+The CLI provides the fastest way to deploy locally developed Crews or Flows to the AMP platform.
+The CLI automatically detects your project type from `pyproject.toml` and builds accordingly.

<Steps>
  <Step title="Install CrewAI CLI">
@@ -128,7 +135,7 @@ crewai deploy remove <deployment_id>

## Option 2: Deploy Directly via Web Interface

-You can also deploy your crews directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
+You can also deploy your Crews or Flows directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine. The platform automatically detects your project type and handles the build appropriately.

<Steps>
@@ -282,68 +289,7 @@ For automated deployments in CI/CD pipelines, you can use the CrewAI API to trig
</Steps>

-## ⚠️ Environment Variable Security Requirements
-
-<Warning>
-  **Important**: CrewAI AMP has security restrictions on environment variable
-  names that can cause deployment failures if not followed.
-</Warning>
-
-### Blocked Environment Variable Patterns
-
-For security reasons, the following environment variable naming patterns are **automatically filtered** and will cause deployment issues:
-
-**Blocked Patterns:**
-
-- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
-- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
-- Variables ending with `_SECRET` (e.g., `API_SECRET`)
-- Variables ending with `_KEY` in certain contexts
-
-**Specific Blocked Variables:**
-
-- `GITHUB_USER`, `GITHUB_TOKEN`
-- `AWS_REGION`, `AWS_DEFAULT_REGION`
-- Various internal CrewAI system variables
-
-### Allowed Exceptions
-
-Some variables are explicitly allowed despite matching blocked patterns:
-
-- `AZURE_AD_TOKEN`
-- `AZURE_OPENAI_AD_TOKEN`
-- `ENTERPRISE_ACTION_TOKEN`
-- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
-
-### How to Fix Naming Issues
-
-If your deployment fails due to environment variable restrictions:
-
-```bash
-# ❌ These will cause deployment failures
-OPENAI_TOKEN=sk-...
-DATABASE_PASSWORD=mypassword
-API_SECRET=secret123
-
-# ✅ Use these naming patterns instead
-OPENAI_API_KEY=sk-...
-DATABASE_CREDENTIALS=mypassword
-API_CONFIG=secret123
-```
-
-### Best Practices
-
-1. **Use standard naming conventions**: `PROVIDER_API_KEY` instead of `PROVIDER_TOKEN`
-2. **Test locally first**: Ensure your crew works with the renamed variables
-3. **Update your code**: Change any references to the old variable names
-4. **Document changes**: Keep track of renamed variables for your team
-
-<Tip>
-  If you encounter deployment failures with cryptic environment variable errors,
-  check your variable names against these patterns first.
-</Tip>
-
-### Interact with Your Deployed Crew
+## Interact with Your Deployed Automation

Once deployment is complete, you can access your crew through:
@@ -387,7 +333,108 @@ The Enterprise platform also offers:
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code

+## Troubleshooting Deployment Failures
+
+If your deployment fails, check these common issues:
+
+### Build Failures
+
+#### Missing uv.lock File
+
+**Symptom**: Build fails early with dependency resolution errors
+
+**Solution**: Generate and commit the lock file:
+
+```bash
+uv lock
+git add uv.lock
+git commit -m "Add uv.lock for deployment"
+git push
+```
+
+<Warning>
+  The `uv.lock` file is required for all deployments. Without it, the platform
+  cannot reliably install your dependencies.
+</Warning>
+
+#### Wrong Project Structure
+
+**Symptom**: "Could not find entry point" or "Module not found" errors
+
+**Solution**: Verify your project matches the expected structure:
+
+- **Both Crews and Flows**: Must have entry point at `src/project_name/main.py`
+- **Crews**: Use a `run()` function as entry point
+- **Flows**: Use a `kickoff()` function as entry point
+
+See [Prepare for Deployment](/en/enterprise/guides/prepare-for-deployment) for detailed structure diagrams.
+
+#### Missing CrewBase Decorator
+
+**Symptom**: "Crew not found", "Config not found", or agent/task configuration errors
+
+**Solution**: Ensure **all** crew classes use the `@CrewBase` decorator:
+
+```python
+from crewai.project import CrewBase, agent, crew, task
+
+@CrewBase  # This decorator is REQUIRED
+class YourCrew():
+    """Your crew description"""
+
+    @agent
+    def my_agent(self) -> Agent:
+        return Agent(
+            config=self.agents_config['my_agent'],  # type: ignore[index]
+            verbose=True
+        )
+
+    # ... rest of crew definition
+```
+
+<Info>
+  This applies to standalone Crews AND crews embedded inside Flow projects.
+  Every crew class needs the decorator.
+</Info>
+
+#### Incorrect pyproject.toml Type
+
+**Symptom**: Build succeeds but runtime fails, or unexpected behavior
+
+**Solution**: Verify the `[tool.crewai]` section matches your project type:
+
+```toml
+# For Crew projects:
+[tool.crewai]
+type = "crew"
+
+# For Flow projects:
+[tool.crewai]
+type = "flow"
+```
+
+### Runtime Failures
+
+#### LLM Connection Failures
+
+**Symptom**: API key errors, "model not found", or authentication failures
+
+**Solution**:
+1. Verify your LLM provider's API key is correctly set in environment variables
+2. Ensure the environment variable names match what your code expects
+3. Test locally with the exact same environment variables before deploying
+
+#### Crew Execution Errors
+
+**Symptom**: Crew starts but fails during execution
+
+**Solution**:
+1. Check the execution logs in the AMP dashboard (Traces tab)
+2. Verify all tools have required API keys configured
+3. Ensure agent configurations in `agents.yaml` are valid
+4. Check task configurations in `tasks.yaml` for syntax errors
+
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for assistance with deployment issues or questions
-  about the Enterprise platform.
+  about the AMP platform.
</Card>
305 docs/en/enterprise/guides/prepare-for-deployment.mdx (new file)
@@ -0,0 +1,305 @@
---
title: "Prepare for Deployment"
description: "Ensure your Crew or Flow is ready for deployment to CrewAI AMP"
icon: "clipboard-check"
mode: "wide"
---

<Note>
  Before deploying to CrewAI AMP, it's crucial to verify your project is correctly structured.
  Both Crews and Flows can be deployed as "automations," but they have different project structures
  and requirements that must be met for successful deployment.
</Note>

## Understanding Automations

In CrewAI AMP, **automations** is the umbrella term for deployable Agentic AI projects. An automation can be either:

- **A Crew**: A standalone team of AI agents working together on tasks
- **A Flow**: An orchestrated workflow that can combine multiple crews, direct LLM calls, and procedural logic

Understanding which type you're deploying is essential because they have different project structures and entry points.
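
To make the distinction concrete, here is a minimal Flow sketch (class and step names are hypothetical) showing how a Flow chains procedural steps; a Crew, by contrast, is a single agent team defined in `crew.py`. This is an illustration of the idea, not a template from the CrewAI docs:

```python
# Minimal illustration of a Flow chaining steps; names are hypothetical.
from crewai.flow import Flow, listen, start


class ResearchFlow(Flow):
    @start()
    def gather(self):
        # Plain procedural logic (or a direct LLM call) can live here.
        return "raw notes on AI in Healthcare"

    @listen(gather)
    def summarize(self, notes):
        # A crew kickoff could run here instead, e.g.
        # SummaryCrew().crew().kickoff(inputs={"notes": notes})
        return f"summary of: {notes}"


if __name__ == "__main__":
    print(ResearchFlow().kickoff())
```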

## Crews vs Flows: Key Differences

<CardGroup cols={2}>
  <Card title="Crew Projects" icon="users">
    Standalone AI agent teams with `crew.py` defining agents and tasks. Best for focused, collaborative tasks.
  </Card>
  <Card title="Flow Projects" icon="diagram-project">
    Orchestrated workflows with embedded crews in a `crews/` folder. Best for complex, multi-stage processes.
  </Card>
</CardGroup>

| Aspect | Crew | Flow |
|--------|------|------|
| **Project structure** | `src/project_name/` with `crew.py` | `src/project_name/` with `crews/` folder |
| **Main logic location** | `src/project_name/crew.py` | `src/project_name/main.py` (Flow class) |
| **Entry point function** | `run()` in `main.py` | `kickoff()` in `main.py` |
| **pyproject.toml type** | `type = "crew"` | `type = "flow"` |
| **CLI create command** | `crewai create crew name` | `crewai create flow name` |
| **Config location** | `src/project_name/config/` | `src/project_name/crews/crew_name/config/` |
| **Can contain other crews** | No | Yes (in `crews/` folder) |

## Project Structure Reference

### Crew Project Structure

When you run `crewai create crew my_crew`, you get this structure:

```
my_crew/
├── .gitignore
├── pyproject.toml        # Must have type = "crew"
├── README.md
├── .env
├── uv.lock               # REQUIRED for deployment
└── src/
    └── my_crew/
        ├── __init__.py
        ├── main.py       # Entry point with run() function
        ├── crew.py       # Crew class with @CrewBase decorator
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml   # Agent definitions
            └── tasks.yaml    # Task definitions
```

<Warning>
  The nested `src/project_name/` structure is critical for Crews.
  Placing files at the wrong level will cause deployment failures.
</Warning>

### Flow Project Structure

When you run `crewai create flow my_flow`, you get this structure:

```
my_flow/
├── .gitignore
├── pyproject.toml        # Must have type = "flow"
├── README.md
├── .env
├── uv.lock               # REQUIRED for deployment
└── src/
    └── my_flow/
        ├── __init__.py
        ├── main.py       # Entry point with kickoff() function + Flow class
        ├── crews/        # Embedded crews folder
        │   └── poem_crew/
        │       ├── __init__.py
        │       ├── poem_crew.py   # Crew with @CrewBase decorator
        │       └── config/
        │           ├── agents.yaml
        │           └── tasks.yaml
        └── tools/
            ├── __init__.py
            └── custom_tool.py
```

<Info>
  Both Crews and Flows use the `src/project_name/` structure.
  The key difference is that Flows have a `crews/` folder for embedded crews,
  while Crews have `crew.py` directly in the project folder.
</Info>

## Pre-Deployment Checklist

Use this checklist to verify your project is ready for deployment.

### 1. Verify pyproject.toml Configuration

Your `pyproject.toml` must include the correct `[tool.crewai]` section:

<Tabs>
  <Tab title="For Crews">
    ```toml
    [tool.crewai]
    type = "crew"
    ```
  </Tab>
  <Tab title="For Flows">
    ```toml
    [tool.crewai]
    type = "flow"
    ```
  </Tab>
</Tabs>

<Warning>
  If the `type` doesn't match your project structure, the build will fail or
  the automation won't run correctly.
</Warning>

### 2. Ensure uv.lock File Exists

CrewAI uses `uv` for dependency management. The `uv.lock` file ensures reproducible builds and is **required** for deployment.

```bash
# Generate or update the lock file
uv lock

# Verify it exists
ls -la uv.lock
```

If the file doesn't exist, run `uv lock` and commit it to your repository:

```bash
uv lock
git add uv.lock
git commit -m "Add uv.lock for deployment"
git push
```

### 3. Validate CrewBase Decorator Usage

**Every crew class must use the `@CrewBase` decorator.** This applies to:

- Standalone crew projects
- Crews embedded inside Flow projects

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List

@CrewBase  # This decorator is REQUIRED
class MyCrew():
    """My crew description"""

    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def my_agent(self) -> Agent:
        return Agent(
            config=self.agents_config['my_agent'],  # type: ignore[index]
            verbose=True
        )

    @task
    def my_task(self) -> Task:
        return Task(
            config=self.tasks_config['my_task']  # type: ignore[index]
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```

<Warning>
  If you forget the `@CrewBase` decorator, your deployment will fail with
  errors about missing agents or tasks configurations.
</Warning>

### 4. Check Project Entry Points

Both Crews and Flows have their entry point in `src/project_name/main.py`:

<Tabs>
  <Tab title="For Crews">
    The entry point uses a `run()` function:

    ```python
    # src/my_crew/main.py
    from my_crew.crew import MyCrew

    def run():
        """Run the crew."""
        inputs = {'topic': 'AI in Healthcare'}
        result = MyCrew().crew().kickoff(inputs=inputs)
        return result

    if __name__ == "__main__":
        run()
    ```
  </Tab>
  <Tab title="For Flows">
    The entry point uses a `kickoff()` function with a Flow class:

    ```python
    # src/my_flow/main.py
    from crewai.flow import Flow, listen, start
    from my_flow.crews.poem_crew.poem_crew import PoemCrew

    class MyFlow(Flow):
        @start()
        def begin(self):
            # Flow logic here
            result = PoemCrew().crew().kickoff(inputs={...})
            return result

    def kickoff():
        """Run the flow."""
        MyFlow().kickoff()

    if __name__ == "__main__":
        kickoff()
    ```
  </Tab>
</Tabs>
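
For intuition only, the platform's behavior can be imagined roughly as the sketch below: resolve the project type from `pyproject.toml`, then call the matching entry function. This is illustrative, not AMP's actual loader, and the module name `my_crew.main` is a hypothetical placeholder:

```python
# Illustrative only: not AMP's actual loader.
import importlib
import tomllib  # Python 3.11+

with open("pyproject.toml", "rb") as f:
    project_type = tomllib.load(f)["tool"]["crewai"]["type"]

# Crews expose run(), Flows expose kickoff(), per the convention above.
entry_name = "run" if project_type == "crew" else "kickoff"
module = importlib.import_module("my_crew.main")  # hypothetical package
getattr(module, entry_name)()
```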

### 5. Prepare Environment Variables

Before deployment, ensure you have:

1. **LLM API keys** ready (OpenAI, Anthropic, Google, etc.)
2. **Tool API keys** if using external tools (Serper, etc.)

<Tip>
  Test your project locally with the same environment variables before deploying
  to catch configuration issues early.
</Tip>
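
One way to catch a missing key before you deploy is a small presence check. This sketch assumes `python-dotenv` is installed, and the variable names listed (here `OPENAI_API_KEY` and `SERPER_API_KEY`) are examples to replace with whatever your own code reads:

```python
# Quick pre-deploy check that required keys exist in .env; names are examples.
import os

from dotenv import load_dotenv

REQUIRED = ["OPENAI_API_KEY", "SERPER_API_KEY"]  # adjust to your providers/tools

load_dotenv()  # reads .env from the project root
missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")
```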

## Quick Validation Commands

Run these commands from your project root to quickly verify your setup:

```bash
# 1. Check project type in pyproject.toml
grep -A2 "\[tool.crewai\]" pyproject.toml

# 2. Verify uv.lock exists
ls -la uv.lock || echo "ERROR: uv.lock missing! Run 'uv lock'"

# 3. Verify src/ structure exists
ls -la src/*/main.py 2>/dev/null || echo "No main.py found in src/"

# 4. For Crews - verify crew.py exists
ls -la src/*/crew.py 2>/dev/null || echo "No crew.py (expected for Crews)"

# 5. For Flows - verify crews/ folder exists
ls -la src/*/crews/ 2>/dev/null || echo "No crews/ folder (expected for Flows)"

# 6. Check for CrewBase usage
grep -r "@CrewBase" . --include="*.py"
```
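
If you prefer a single pass, the same checks can be bundled into one Python pre-flight script. This is a sketch using only the standard library (Python 3.11+ for `tomllib`), not an official CrewAI tool, and the `@CrewBase` scan is a rough heuristic:

```python
# Consolidated pre-flight sketch mirroring the shell checks above.
import pathlib
import tomllib

root = pathlib.Path(".")
problems = []

cfg = tomllib.loads((root / "pyproject.toml").read_text())
ptype = cfg.get("tool", {}).get("crewai", {}).get("type")
if ptype not in ("crew", "flow"):
    problems.append(f"[tool.crewai] type is {ptype!r}, expected 'crew' or 'flow'")

if not (root / "uv.lock").exists():
    problems.append("uv.lock missing: run 'uv lock' and commit it")

if not list(root.glob("src/*/main.py")):
    problems.append("no src/<project>/main.py entry point found")

if ptype == "crew" and not list(root.glob("src/*/crew.py")):
    problems.append("type = 'crew' but no src/<project>/crew.py")
if ptype == "flow" and not list(root.glob("src/*/crews")):
    problems.append("type = 'flow' but no src/<project>/crews/ folder")

# Heuristic only: every crew module should carry the decorator.
if any("@CrewBase" not in p.read_text() for p in root.glob("src/**/crew*.py")):
    problems.append("a crew module may be missing the @CrewBase decorator")

print("\n".join(problems) if problems else "Pre-flight checks passed.")
```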

## Common Setup Mistakes

| Mistake | Symptom | Fix |
|---------|---------|-----|
| Missing `uv.lock` | Build fails during dependency resolution | Run `uv lock` and commit |
| Wrong `type` in pyproject.toml | Build succeeds but runtime fails | Change to correct type |
| Missing `@CrewBase` decorator | "Config not found" errors | Add decorator to all crew classes |
| Files at root instead of `src/` | Entry point not found | Move to `src/project_name/` |
| Missing `run()` or `kickoff()` | Cannot start automation | Add correct entry function |

## Next Steps

Once your project passes all checklist items, you're ready to deploy:

<Card title="Deploy to AMP" icon="rocket" href="/en/enterprise/guides/deploy-to-amp">
  Follow the deployment guide to deploy your Crew or Flow to CrewAI AMP using
  the CLI, web interface, or CI/CD integration.
</Card>
@@ -128,7 +128,7 @@ When deploying a Flow, consider the following:
### CrewAI Enterprise
The easiest way to deploy a Flow is with CrewAI Enterprise. It handles infrastructure, authentication, and monitoring for you.

-To get started, check the [deployment guide](/ko/enterprise/guides/deploy-crew).
+To get started, check the [deployment guide](/ko/enterprise/guides/deploy-to-amp).

```bash
crewai deploy create
@@ -91,7 +91,7 @@ Deploy quickly without Git: upload a ZIP package of your project
## Related Docs

<CardGroup cols={3}>
-  <Card title="Deploy a Crew" href="/ko/enterprise/guides/deploy-crew" icon="rocket">
+  <Card title="Deploy a Crew" href="/ko/enterprise/guides/deploy-to-amp" icon="rocket">
    Deploy a crew via GitHub or ZIP file
  </Card>
  <Card title="Automation Triggers" href="/ko/enterprise/guides/automation-triggers" icon="trigger">
@@ -79,7 +79,7 @@ Crew Studio lets you build automations from scratch with natural language and a visual workflow editor
  <Card title="Build a Crew" href="/ko/enterprise/guides/build-crew" icon="paintbrush">
    Build your crew.
  </Card>
-  <Card title="Deploy a Crew" href="/ko/enterprise/guides/deploy-crew" icon="rocket">
+  <Card title="Deploy a Crew" href="/ko/enterprise/guides/deploy-to-amp" icon="rocket">
    Deploy a crew via GitHub or ZIP file.
  </Card>
  <Card title="Export a React Component" href="/ko/enterprise/guides/react-component-export" icon="download">
@@ -1,305 +0,0 @@
---
title: "Deploy Crew"
description: "Deploying a Crew on CrewAI AMP"
icon: "rocket"
mode: "wide"
---

<Note>
  After creating a crew locally or through Crew Studio, the next step is
  deploying it to the CrewAI AMP platform. This guide covers multiple deployment
  methods to help you choose the best approach for your workflow.
</Note>

## Prerequisites

<CardGroup cols={2}>
  <Card title="Crew Ready for Deployment" icon="users">
    You should have a working crew, either built locally or created through
    Crew Studio.
  </Card>
  <Card title="GitHub Repository" icon="github">
    Your crew code should be in a GitHub repository (for the GitHub integration method).
  </Card>
</CardGroup>

## Option 1: Deploy Using the CrewAI CLI

The CLI provides the fastest way to deploy locally developed crews to the Enterprise platform.

<Steps>
  <Step title="Install the CrewAI CLI">
    If you haven't already, install the CrewAI CLI:

    ```bash
    pip install crewai[tools]
    ```

    <Tip>
      The CLI ships with the core CrewAI package, but the `[tools]` extra installs all deployment dependencies as well.
    </Tip>
  </Step>

  <Step title="Authenticate with the Enterprise Platform">
    First, authenticate the CLI with the CrewAI AMP platform:

    ```bash
    # If you already have a CrewAI AMP account, or want to create one:
    crewai login
    ```

    When you run this command, the CLI will:
    1. Display a URL and a unique device code
    2. Open your browser to the authentication page
    3. Ask you to confirm the device
    4. Complete the authentication process

    After successful authentication, you'll see a confirmation message in your terminal!
  </Step>

  <Step title="Create a Deployment">
    From your project directory, run:

    ```bash
    crewai deploy create
    ```

    This command will:
    1. Detect your GitHub repository information
    2. Identify environment variables in your local `.env` file
    3. Securely transfer those variables to the Enterprise platform
    4. Create a new deployment with a unique identifier

    On success, you'll see a message like:
    ```shell
    Deployment created successfully!
    Name: your_project_name
    Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
    Current Status: Deploy Enqueued
    ```
  </Step>

  <Step title="Monitor Deployment Progress">
    Track the deployment status with:

    ```bash
    crewai deploy status
    ```

    For detailed logs of the build process:

    ```bash
    crewai deploy logs
    ```

    <Tip>
      The first deployment typically takes 10-15 minutes because container images are being built. Subsequent deployments are much faster.
    </Tip>
  </Step>
</Steps>

## Additional CLI Commands

The CrewAI CLI offers several commands for managing your deployments:

```bash
# List all your deployments
crewai deploy list

# Check the status of a deployment
crewai deploy status

# View deployment logs
crewai deploy logs

# Push updates after code changes
crewai deploy push

# Remove a deployment
crewai deploy remove <deployment_id>
```

## Option 2: Deploy Directly via the Web Interface

You can also deploy your crews directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.

<Steps>
  <Step title="Push to GitHub">
    You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/ko/quickstart).
  </Step>

  <Step title="Connect GitHub to CrewAI AMP">
    1. Sign in to [CrewAI AMP](https://app.crewai.com)
    2. Click the "Connect GitHub" button

    <Frame>
      
    </Frame>
  </Step>

  <Step title="Select the Repository">
    After connecting your GitHub account, you can select which repository to deploy:

    <Frame>
      
    </Frame>
  </Step>

  <Step title="Set Environment Variables">
    Before deploying, you need to set environment variables to connect to your LLM provider or other services:

    1. You can add variables individually or in bulk
    2. Enter variables in `KEY=VALUE` format (one per line)

    <Frame>
      
    </Frame>
  </Step>

  <Step title="Deploy Your Crew">
    1. Click the "Deploy" button to start the deployment process
    2. Monitor progress via the progress bar
    3. The first deployment typically takes about 10-15 minutes; subsequent deployments are faster

    <Frame>
      
    </Frame>

    Once deployment completes, you'll see:
    - Your crew's unique URL
    - A Bearer token to protect your crew API
    - A "Delete" button in case you need to remove the deployment
  </Step>
</Steps>

## ⚠️ Environment Variable Security Requirements

<Warning>
  **Important**: CrewAI AMP has security restrictions on environment variable
  names that can cause deployment failures if not followed.
</Warning>

### Blocked Environment Variable Patterns

For security reasons, the following environment variable naming patterns are **automatically filtered** and can cause deployment issues:

**Blocked Patterns:**

- Variables ending with `_TOKEN` (e.g., `MY_API_TOKEN`)
- Variables ending with `_PASSWORD` (e.g., `DB_PASSWORD`)
- Variables ending with `_SECRET` (e.g., `API_SECRET`)
- Variables ending with `_KEY` in certain contexts

**Specific Blocked Variables:**

- `GITHUB_USER`, `GITHUB_TOKEN`
- `AWS_REGION`, `AWS_DEFAULT_REGION`
- Various internal CrewAI system variables

### Allowed Exceptions

Some variables are explicitly allowed despite matching blocked patterns:

- `AZURE_AD_TOKEN`
- `AZURE_OPENAI_AD_TOKEN`
- `ENTERPRISE_ACTION_TOKEN`
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`

### How to Fix Naming Issues

If your deployment fails due to environment variable restrictions:

```bash
# ❌ These names will cause deployment failures
OPENAI_TOKEN=sk-...
DATABASE_PASSWORD=mypassword
API_SECRET=secret123

# ✅ Use these naming patterns instead
OPENAI_API_KEY=sk-...
DATABASE_CREDENTIALS=mypassword
API_CONFIG=secret123
```

### Best Practices

1. **Use standard naming conventions**: `PROVIDER_API_KEY` instead of `PROVIDER_TOKEN`
2. **Test locally first**: Make sure your crew works with the renamed variables
3. **Update your code**: Change every reference to the old variable names
4. **Document changes**: Keep a record of renamed variables for your team

<Tip>
  If a deployment fails with cryptic environment variable errors, first check
  whether your variable names match these patterns.
</Tip>

### Interact with Your Deployed Crew

Once deployment is complete, you can access your crew through:

1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes:

   - `/inputs`: Lists the required input parameters
   - `/kickoff`: Starts an execution with the provided inputs
   - `/status/{kickoff_id}`: Checks the status of an execution

2. **Web interface**: Visit [app.crewai.com](https://app.crewai.com) to see:
   - **Status tab**: Deployment info, API endpoint details, and the auth token
   - **Run tab**: A visual representation of your crew's structure
   - **Executions tab**: History of all executions
   - **Metrics tab**: Performance analytics
   - **Traces tab**: Detailed execution insights

### Triggering Executions

From the Enterprise dashboard you can:

1. Click your crew's name to open its details
2. Select "Trigger Crew" in the management interface
3. Enter the required inputs in the modal that appears
4. Monitor the execution's progress through the pipeline

### Monitoring and Analytics

The Enterprise platform provides comprehensive observability:

- **Execution management**: Track active and completed runs
- **Traces**: Detailed breakdown of each execution
- **Metrics**: Token usage, execution time, cost
- **Timeline view**: Visual representation of the task sequence

### Advanced Features

The Enterprise platform also offers:

- **Environment variable management**: Securely store and manage API keys
- **LLM connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code

<Card
  title="Need Help?"
  icon="headset"
  href="mailto:support@crewai.com"
>
  Contact our support team for assistance with deployment issues or questions
  about the Enterprise platform.
</Card>
438 docs/ko/enterprise/guides/deploy-to-amp.mdx (new file)
@@ -0,0 +1,438 @@
---
title: "Deploy to AMP"
description: "Deploy your Crew or Flow to CrewAI AMP"
icon: "rocket"
mode: "wide"
---

<Note>
  After creating a Crew or Flow locally (or through Crew Studio), the next step is
  deploying it to the CrewAI AMP platform. This guide covers multiple deployment
  methods to help you choose the best approach for your workflow.
</Note>

## Prerequisites

<CardGroup cols={2}>
  <Card title="Project Ready for Deployment" icon="check-circle">
    You should have a working Crew or Flow that runs successfully locally.
    Follow the [preparation guide](/ko/enterprise/guides/prepare-for-deployment) to verify your project structure.
  </Card>
  <Card title="GitHub Repository" icon="github">
    Your code should be in a GitHub repository (for the GitHub integration method).
  </Card>
</CardGroup>

<Info>
  **Crews vs Flows**: Both project types can be deployed as "automations" on CrewAI AMP.
  The deployment process is the same, but the project structures differ.
  See [Prepare for Deployment](/ko/enterprise/guides/prepare-for-deployment) for details.
</Info>

## Option 1: Deploy Using the CrewAI CLI

The CLI provides the fastest way to deploy locally developed Crews or Flows to the AMP platform.
The CLI automatically detects your project type from `pyproject.toml` and builds accordingly.

<Steps>
  <Step title="Install the CrewAI CLI">
    If you haven't already, install the CrewAI CLI:

    ```bash
    pip install crewai[tools]
    ```

    <Tip>
      The CLI ships with the core CrewAI package, but the `[tools]` extra installs all deployment dependencies as well.
    </Tip>
  </Step>

  <Step title="Authenticate with the Enterprise Platform">
    First, authenticate the CLI with the CrewAI AMP platform:

    ```bash
    # If you already have a CrewAI AMP account, or want to create one:
    crewai login
    ```

    When you run this command, the CLI will:
    1. Display a URL and a unique device code
    2. Open your browser to the authentication page
    3. Ask you to confirm the device
    4. Complete the authentication process

    After successful authentication, you'll see a confirmation message in your terminal!
  </Step>

  <Step title="Create a Deployment">
    From your project directory, run:

    ```bash
    crewai deploy create
    ```

    This command will:
    1. Detect your GitHub repository information
    2. Identify environment variables in your local `.env` file
    3. Securely transfer those variables to the Enterprise platform
    4. Create a new deployment with a unique identifier

    On success, you'll see a message like:
    ```shell
    Deployment created successfully!
    Name: your_project_name
    Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
    Current Status: Deploy Enqueued
    ```
  </Step>

  <Step title="Monitor Deployment Progress">
    Track the deployment status with:

    ```bash
    crewai deploy status
    ```

    For detailed logs of the build process:

    ```bash
    crewai deploy logs
    ```

    <Tip>
      The first deployment typically takes 10-15 minutes because container images are being built. Subsequent deployments are much faster.
    </Tip>
  </Step>
</Steps>

## Additional CLI Commands

The CrewAI CLI offers several commands for managing your deployments:

```bash
# List all your deployments
crewai deploy list

# Check the status of a deployment
crewai deploy status

# View deployment logs
crewai deploy logs

# Push updates after code changes
crewai deploy push

# Remove a deployment
crewai deploy remove <deployment_id>
```

## Option 2: Deploy Directly via the Web Interface

You can also deploy your Crew or Flow directly through the CrewAI AMP web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine. The platform automatically detects your project type and handles the build appropriately.

<Steps>
  <Step title="Push to GitHub">
    You need to push your Crew to a GitHub repository. If you haven't created a Crew yet, you can [follow this tutorial](/ko/quickstart).
  </Step>

  <Step title="Connect GitHub to CrewAI AMP">
    1. Sign in to [CrewAI AMP](https://app.crewai.com)
    2. Click the "Connect GitHub" button

    <Frame>
      
    </Frame>
  </Step>

  <Step title="Select the Repository">
    After connecting your GitHub account, you can select which repository to deploy:

    <Frame>
      
    </Frame>
  </Step>

  <Step title="Set Environment Variables">
    Before deploying, you need to set environment variables to connect to your LLM provider or other services:

    1. You can add variables individually or in bulk
    2. Enter variables in `KEY=VALUE` format (one per line)

    <Frame>
      
    </Frame>
  </Step>

  <Step title="Deploy Your Crew">
    1. Click the "Deploy" button to start the deployment process
    2. Monitor progress via the progress bar
    3. The first deployment typically takes about 10-15 minutes; subsequent deployments are faster

    <Frame>
      
    </Frame>

    Once deployment completes, you'll see:
    - Your Crew's unique URL
    - A Bearer token to protect your Crew API
    - A "Delete" button in case you need to remove the deployment
  </Step>
</Steps>

## Option 3: Redeploy via the API (CI/CD Integration)

For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of an existing crew. This is especially useful for GitHub Actions, Jenkins, or other automated workflows.

<Steps>
  <Step title="Create a Personal Access Token">
    Create an API token in your CrewAI AMP account settings:

    1. Go to [app.crewai.com](https://app.crewai.com)
    2. Click **Settings** → **Account** → **Personal Access Token**
    3. Generate a new token and copy it somewhere safe
    4. Store the token as a secret in your CI/CD system
  </Step>

  <Step title="Find the Automation UUID">
    Locate the unique identifier of your deployed crew:

    1. In the CrewAI AMP dashboard, go to **Automations**
    2. Select the existing automation/crew
    3. Click **Additional Details**
    4. Copy the **UUID**; it identifies this specific crew deployment
  </Step>

  <Step title="Trigger a Redeployment via the API">
    Use the deploy API endpoint to trigger a redeployment (a Python equivalent is sketched right after these steps):

    ```bash
    curl -i -X POST \
      -H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
      https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy

    # HTTP/2 200
    # content-type: application/json
    #
    # {
    #   "uuid": "your-automation-uuid",
    #   "status": "Deploy Enqueued",
    #   "public_url": "https://your-crew-deployment.crewai.com",
    #   "token": "your-bearer-token"
    # }
    ```

    <Info>
      For automations originally created from a Git connection, the API automatically pulls the latest changes from the repository before redeploying.
    </Info>
  </Step>

  <Step title="GitHub Actions Integration Example">
    Here is an example GitHub Actions workflow with more elaborate deployment triggers:

    ```yaml
    name: Deploy CrewAI Automation

    on:
      push:
        branches: [ main ]
      pull_request:
        types: [ labeled ]
      release:
        types: [ published ]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        if: |
          (github.event_name == 'push' && github.ref == 'refs/heads/main') ||
          (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
          (github.event_name == 'release')
        steps:
          - name: Trigger CrewAI Redeployment
            run: |
              curl -X POST \
                -H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
                https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
    ```

    <Tip>
      Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add the "deploy" label to trigger the workflow.
    </Tip>
  </Step>
</Steps>
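
If your CI system scripts in Python rather than shell, the same redeploy call can be made with `requests`. This sketch assumes the endpoint and bearer-token scheme shown above, with the token and UUID supplied as environment variables:

```python
# Python equivalent of the curl redeploy call above; values come from CI secrets.
import os

import requests

PAT = os.environ["CREWAI_PAT"]  # personal access token (keep it secret)
AUTOMATION_UUID = os.environ["CREWAI_AUTOMATION_UUID"]

resp = requests.post(
    f"https://app.crewai.com/crewai_plus/api/v1/crews/{AUTOMATION_UUID}/deploy",
    headers={"Authorization": f"Bearer {PAT}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("status"))  # e.g. "Deploy Enqueued"
```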

## Interact with Your Deployed Automation

Once deployment is complete, you can access your crew through:

1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes (see the sketch after this list):

   - `/inputs`: Lists the required input parameters
   - `/kickoff`: Starts an execution with the provided inputs
   - `/status/{kickoff_id}`: Checks the status of an execution

2. **Web interface**: Visit [app.crewai.com](https://app.crewai.com) to see:
   - **Status tab**: Deployment info, API endpoint details, and the auth token
   - **Run tab**: A visual representation of the Crew's structure
   - **Executions tab**: History of all executions
   - **Metrics tab**: Performance analytics
   - **Traces tab**: Detailed execution insights
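
As a rough illustration of the REST routes above, the sketch below kicks off a run and polls its status. The base URL and token come from your deployment's Status tab; the request/response field names (`inputs`, `kickoff_id`, the state values) are assumptions to verify against the `/inputs` route and your deployment's actual responses:

```python
# Hypothetical client for the /kickoff and /status routes described above.
import time

import requests

BASE_URL = "https://your-crew-deployment.crewai.com"  # from the Status tab
TOKEN = "your-bearer-token"                           # from the Status tab
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Start an execution; the payload shape is an assumption, so check /inputs first.
started = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI in Healthcare"}},
    timeout=30,
).json()
kickoff_id = started["kickoff_id"]  # assumed field name

# Poll until the run finishes.
while True:
    status = requests.get(
        f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS, timeout=30
    ).json()
    print(status)
    if status.get("state") in ("SUCCESS", "FAILED"):  # assumed state values
        break
    time.sleep(5)
```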

### Triggering Executions

From the Enterprise dashboard you can:

1. Click your Crew's name to open its details
2. Select "Trigger Crew" in the management interface
3. Enter the required inputs in the modal that appears
4. Monitor the execution's progress through the pipeline

### Monitoring and Analytics

The Enterprise platform provides comprehensive observability:

- **Execution management**: Track active and completed runs
- **Traces**: Detailed breakdown of each execution
- **Metrics**: Token usage, execution time, cost
- **Timeline view**: Visual representation of the task sequence

### Advanced Features

The Enterprise platform also offers:

- **Environment variable management**: Securely store and manage API keys
- **LLM connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code

## Troubleshooting Deployment Failures

If your deployment fails, check these common issues:

### Build Failures

#### Missing uv.lock File

**Symptom**: Build fails early with dependency resolution errors

**Solution**: Generate and commit the lock file:

```bash
uv lock
git add uv.lock
git commit -m "Add uv.lock for deployment"
git push
```

<Warning>
  The `uv.lock` file is required for all deployments. Without it, the platform
  cannot reliably install your dependencies.
</Warning>

#### Wrong Project Structure

**Symptom**: "Could not find entry point" or "Module not found" errors

**Solution**: Verify your project matches the expected structure:

- **Both Crews and Flows**: Must have the entry point at `src/project_name/main.py`
- **Crews**: Use a `run()` function as the entry point
- **Flows**: Use a `kickoff()` function as the entry point

See [Prepare for Deployment](/ko/enterprise/guides/prepare-for-deployment) for detailed structure diagrams.

#### Missing CrewBase Decorator

**Symptom**: "Crew not found", "Config not found", or agent/task configuration errors

**Solution**: Ensure **all** crew classes use the `@CrewBase` decorator:

```python
from crewai.project import CrewBase, agent, crew, task

@CrewBase  # This decorator is REQUIRED
class YourCrew():
    """Your crew description"""

    @agent
    def my_agent(self) -> Agent:
        return Agent(
            config=self.agents_config['my_agent'],  # type: ignore[index]
            verbose=True
        )

    # ... rest of crew definition
```

<Info>
  This applies both to standalone Crews and to crews embedded inside Flow projects.
  Every crew class needs the decorator.
</Info>

#### Incorrect pyproject.toml Type

**Symptom**: Build succeeds but runtime fails, or unexpected behavior

**Solution**: Verify the `[tool.crewai]` section matches your project type:

```toml
# For Crew projects:
[tool.crewai]
type = "crew"

# For Flow projects:
[tool.crewai]
type = "flow"
```

### Runtime Failures

#### LLM Connection Failures

**Symptom**: API key errors, "model not found", or authentication failures

**Solution**:
1. Verify your LLM provider's API key is correctly set in the environment variables
2. Make sure the environment variable names match what your code expects
3. Test locally with the exact same environment variables before deploying

#### Crew Execution Errors

**Symptom**: The Crew starts but fails during execution

**Solution**:
1. Check the execution logs in the AMP dashboard (Traces tab)
2. Verify all tools have their required API keys configured
3. Make sure the agent configurations in `agents.yaml` are valid
4. Check the task configurations in `tasks.yaml` for syntax errors

<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
  Contact our support team for deployment issues or questions about the AMP platform.
</Card>
305 docs/ko/enterprise/guides/prepare-for-deployment.mdx (new file)
@@ -0,0 +1,305 @@
---
title: "Prepare for Deployment"
description: "Ensure your Crew or Flow is ready for deployment to CrewAI AMP"
icon: "clipboard-check"
mode: "wide"
---

<Note>
  Before deploying to CrewAI AMP, it's crucial to verify that your project is correctly structured.
  Both Crews and Flows can be deployed as "automations," but they have different project structures
  and requirements that must be met for a successful deployment.
</Note>

## Understanding Automations

In CrewAI AMP, **automations** is the umbrella term for deployable Agentic AI projects. An automation can be either:

- **A Crew**: A standalone team of AI agents working together on tasks
- **A Flow**: An orchestrated workflow that can combine multiple crews, direct LLM calls, and procedural logic

Understanding which type you're deploying is essential because the project structures and entry points differ.

## Crews vs Flows: Key Differences

<CardGroup cols={2}>
  <Card title="Crew Projects" icon="users">
    Standalone AI agent teams with `crew.py` defining agents and tasks. Best for focused, collaborative tasks.
  </Card>
  <Card title="Flow Projects" icon="diagram-project">
    Orchestrated workflows with embedded crews in a `crews/` folder. Best for complex, multi-stage processes.
  </Card>
</CardGroup>

| Aspect | Crew | Flow |
|--------|------|------|
| **Project structure** | `src/project_name/` with `crew.py` | `src/project_name/` with `crews/` folder |
| **Main logic location** | `src/project_name/crew.py` | `src/project_name/main.py` (Flow class) |
| **Entry point function** | `run()` in `main.py` | `kickoff()` in `main.py` |
| **pyproject.toml type** | `type = "crew"` | `type = "flow"` |
| **CLI create command** | `crewai create crew name` | `crewai create flow name` |
| **Config location** | `src/project_name/config/` | `src/project_name/crews/crew_name/config/` |
| **Can contain other crews** | No | Yes (in `crews/` folder) |

## Project Structure Reference

### Crew Project Structure

When you run `crewai create crew my_crew`, you get this structure:

```
my_crew/
├── .gitignore
├── pyproject.toml        # Must have type = "crew"
├── README.md
├── .env
├── uv.lock               # REQUIRED for deployment
└── src/
    └── my_crew/
        ├── __init__.py
        ├── main.py       # Entry point with run() function
        ├── crew.py       # Crew class with @CrewBase decorator
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml   # Agent definitions
            └── tasks.yaml    # Task definitions
```

<Warning>
  The nested `src/project_name/` structure is critical for Crews.
  Placing files at the wrong level will cause deployment failures.
</Warning>

### Flow Project Structure

When you run `crewai create flow my_flow`, you get this structure:

```
my_flow/
├── .gitignore
├── pyproject.toml        # Must have type = "flow"
├── README.md
├── .env
├── uv.lock               # REQUIRED for deployment
└── src/
    └── my_flow/
        ├── __init__.py
        ├── main.py       # Entry point with kickoff() function + Flow class
        ├── crews/        # Embedded crews folder
        │   └── poem_crew/
        │       ├── __init__.py
        │       ├── poem_crew.py   # Crew with @CrewBase decorator
        │       └── config/
        │           ├── agents.yaml
        │           └── tasks.yaml
        └── tools/
            ├── __init__.py
            └── custom_tool.py
```

<Info>
  Both Crews and Flows use the `src/project_name/` structure.
  The key difference is that Flows have a `crews/` folder for embedded crews,
  while Crews have `crew.py` directly in the project folder.
</Info>

## Pre-Deployment Checklist

Use this checklist to verify your project is ready for deployment.

### 1. Verify pyproject.toml Configuration

Your `pyproject.toml` must include the correct `[tool.crewai]` section:

<Tabs>
  <Tab title="For Crews">
    ```toml
    [tool.crewai]
    type = "crew"
    ```
  </Tab>
  <Tab title="For Flows">
    ```toml
    [tool.crewai]
    type = "flow"
    ```
  </Tab>
</Tabs>

<Warning>
  If the `type` doesn't match your project structure, the build will fail or
  the automation won't run correctly.
</Warning>

### 2. Ensure uv.lock File Exists

CrewAI uses `uv` for dependency management. The `uv.lock` file ensures reproducible builds and is **required** for deployment.

```bash
# Generate or update the lock file
uv lock

# Verify it exists
ls -la uv.lock
```

If the file doesn't exist, run `uv lock` and commit it to your repository:

```bash
uv lock
git add uv.lock
git commit -m "Add uv.lock for deployment"
git push
```

### 3. Validate CrewBase Decorator Usage

**Every crew class must use the `@CrewBase` decorator.** This applies to:

- Standalone crew projects
- Crews embedded inside Flow projects

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List

@CrewBase  # This decorator is REQUIRED
class MyCrew():
    """My crew description"""

    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def my_agent(self) -> Agent:
        return Agent(
            config=self.agents_config['my_agent'],  # type: ignore[index]
            verbose=True
        )

    @task
    def my_task(self) -> Task:
        return Task(
            config=self.tasks_config['my_task']  # type: ignore[index]
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```

<Warning>
  If you forget the `@CrewBase` decorator, your deployment will fail with
  errors about missing agent or task configurations.
</Warning>

### 4. Check Project Entry Points

Both Crews and Flows have their entry point in `src/project_name/main.py`:

<Tabs>
  <Tab title="For Crews">
    The entry point uses a `run()` function:

    ```python
    # src/my_crew/main.py
    from my_crew.crew import MyCrew

    def run():
        """Run the crew."""
        inputs = {'topic': 'AI in Healthcare'}
        result = MyCrew().crew().kickoff(inputs=inputs)
        return result

    if __name__ == "__main__":
        run()
    ```
  </Tab>
  <Tab title="For Flows">
    The entry point uses a `kickoff()` function with a Flow class:

    ```python
    # src/my_flow/main.py
    from crewai.flow import Flow, listen, start
    from my_flow.crews.poem_crew.poem_crew import PoemCrew

    class MyFlow(Flow):
        @start()
        def begin(self):
            # Flow logic here
            result = PoemCrew().crew().kickoff(inputs={...})
            return result

    def kickoff():
        """Run the flow."""
        MyFlow().kickoff()

    if __name__ == "__main__":
        kickoff()
    ```
  </Tab>
</Tabs>

### 5. Prepare Environment Variables

Before deployment, make sure you have:

1. **LLM API keys** ready (OpenAI, Anthropic, Google, etc.)
2. **Tool API keys** if using external tools (Serper, etc.)

<Tip>
  Test your project locally with the same environment variables before deploying
  to catch configuration issues early.
</Tip>

## Quick Validation Commands

Run these commands from your project root to quickly verify your setup:

```bash
# 1. Check project type in pyproject.toml
grep -A2 "\[tool.crewai\]" pyproject.toml

# 2. Verify uv.lock exists
ls -la uv.lock || echo "ERROR: uv.lock missing! Run 'uv lock'"

# 3. Verify src/ structure exists
ls -la src/*/main.py 2>/dev/null || echo "No main.py found in src/"

# 4. For Crews - verify crew.py exists
ls -la src/*/crew.py 2>/dev/null || echo "No crew.py (expected for Crews)"

# 5. For Flows - verify crews/ folder exists
ls -la src/*/crews/ 2>/dev/null || echo "No crews/ folder (expected for Flows)"

# 6. Check for CrewBase usage
grep -r "@CrewBase" . --include="*.py"
```

## Common Setup Mistakes

| Mistake | Symptom | Fix |
|---------|---------|-----|
| Missing `uv.lock` | Build fails during dependency resolution | Run `uv lock` and commit |
| Wrong `type` in pyproject.toml | Build succeeds but runtime fails | Change to correct type |
| Missing `@CrewBase` decorator | "Config not found" errors | Add decorator to all crew classes |
| Files at root instead of `src/` | Entry point not found | Move to `src/project_name/` |
| Missing `run()` or `kickoff()` | Cannot start automation | Add correct entry function |

## Next Steps

Once your project passes all checklist items, you're ready to deploy:

<Card title="Deploy to AMP" icon="rocket" href="/ko/enterprise/guides/deploy-to-amp">
  Follow the deployment guide to deploy your Crew or Flow to CrewAI AMP using
  the CLI, web interface, or CI/CD integration.
</Card>
@@ -79,7 +79,7 @@ CrewAI AMP combines the power of the open-source framework with production deployment,
  <Card
    title="Deploy Crew"
    icon="rocket"
-   href="/ko/enterprise/guides/deploy-crew"
+   href="/ko/enterprise/guides/deploy-to-amp"
  >
    Deploy a Crew
  </Card>
@@ -96,4 +96,4 @@ CrewAI AMP combines the power of the open-source framework with production deployment,
  </Step>
</Steps>

-For detailed instructions, check the [deployment guide](/ko/enterprise/guides/deploy-crew) or click the button below to get started.
+For detailed instructions, check the [deployment guide](/ko/enterprise/guides/deploy-to-amp) or click the button below to get started.
@@ -128,7 +128,7 @@ When deploying your Flow, consider the following:
### CrewAI Enterprise
The easiest way to deploy your Flow is with CrewAI Enterprise. It handles infrastructure, authentication, and monitoring for you.

-Check out the [Deployment Guide](/pt-BR/enterprise/guides/deploy-crew) to get started.
+Check out the [Deployment Guide](/pt-BR/enterprise/guides/deploy-to-amp) to get started.

```bash
crewai deploy create
@@ -91,7 +91,7 @@ After deploying, you can view the automation details and use the **Optio
## Related

<CardGroup cols={3}>
-  <Card title="Deploy a Crew" href="/pt-BR/enterprise/guides/deploy-crew" icon="rocket">
+  <Card title="Deploy a Crew" href="/pt-BR/enterprise/guides/deploy-to-amp" icon="rocket">
    Deploy a Crew via GitHub or ZIP file.
  </Card>
  <Card title="Automation Triggers" href="/pt-BR/enterprise/guides/automation-triggers" icon="trigger">
@@ -79,7 +79,7 @@ After publishing, you can view the automation details and use the *
  <Card title="Build a Crew" href="/pt-BR/enterprise/guides/build-crew" icon="paintbrush">
    Build a Crew.
  </Card>
-  <Card title="Deploy a Crew" href="/pt-BR/enterprise/guides/deploy-crew" icon="rocket">
+  <Card title="Deploy a Crew" href="/pt-BR/enterprise/guides/deploy-to-amp" icon="rocket">
    Deploy a Crew via GitHub or ZIP.
  </Card>
  <Card title="Export a React Component" href="/pt-BR/enterprise/guides/react-component-export" icon="download">
@@ -1,304 +0,0 @@
|
||||
---
|
||||
title: "Deploy Crew"
|
||||
description: "Implantando um Crew na CrewAI AMP"
|
||||
icon: "rocket"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Note>
|
||||
Depois de criar um crew localmente ou pelo Crew Studio, o próximo passo é
|
||||
implantá-lo na plataforma CrewAI AMP. Este guia cobre múltiplos métodos de
|
||||
implantação para ajudá-lo a escolher a melhor abordagem para o seu fluxo de
|
||||
trabalho.
|
||||
</Note>
|
||||
|
||||
## Pré-requisitos
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Crew Pronto para Implantação" icon="users">
|
||||
Você deve ter um crew funcional, criado localmente ou pelo Crew Studio
|
||||
</Card>
|
||||
<Card title="Repositório GitHub" icon="github">
|
||||
O código do seu crew deve estar em um repositório do GitHub (para o método
|
||||
de integração com GitHub)
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## Opção 1: Implantar Usando o CrewAI CLI
|
||||
|
||||
A CLI fornece a maneira mais rápida de implantar crews desenvolvidos localmente na plataforma Enterprise.
|
||||
|
||||
<Steps>
|
||||
<Step title="Instale o CrewAI CLI">
|
||||
Se ainda não tiver, instale o CrewAI CLI:
|
||||
|
||||
```bash
|
||||
pip install crewai[tools]
|
||||
```
|
||||
|
||||
<Tip>
|
||||
A CLI vem com o pacote principal CrewAI, mas o extra `[tools]` garante todas as dependências de implantação.
|
||||
</Tip>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Autentique-se na Plataforma Enterprise">
|
||||
Primeiro, você precisa autenticar sua CLI com a plataforma CrewAI AMP:
|
||||
|
||||
```bash
|
||||
# Se já possui uma conta CrewAI AMP, ou deseja criar uma:
|
||||
crewai login
|
||||
```
|
||||
|
||||
Ao executar qualquer um dos comandos, a CLI irá:
|
||||
1. Exibir uma URL e um código de dispositivo único
|
||||
2. Abrir seu navegador para a página de autenticação
|
||||
3. Solicitar a confirmação do dispositivo
|
||||
4. Completar o processo de autenticação
|
||||
|
||||
Após a autenticação bem-sucedida, você verá uma mensagem de confirmação no terminal!
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Criar uma Implantação">
|
||||
|
||||
No diretório do seu projeto, execute:
|
||||
|
||||
```bash
|
||||
crewai deploy create
|
||||
```
|
||||
|
||||
Este comando irá:
|
||||
1. Detectar informações do seu repositório GitHub
|
||||
2. Identificar variáveis de ambiente no seu arquivo `.env` local
|
||||
3. Transferir essas variáveis com segurança para a plataforma Enterprise
|
||||
4. Criar uma nova implantação com um identificador único
|
||||
|
||||
Com a criação bem-sucedida, você verá uma mensagem como:
|
||||
```shell
|
||||
Deployment created successfully!
|
||||
Name: your_project_name
|
||||
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
|
||||
Current Status: Deploy Enqueued
|
||||
```
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Acompanhe o Progresso da Implantação">
|
||||
|
||||
Acompanhe o status da implantação com:
|
||||
|
||||
```bash
|
||||
crewai deploy status
|
||||
```
|
||||
|
||||
Para ver logs detalhados do processo de build:
|
||||
|
||||
```bash
|
||||
crewai deploy logs
|
||||
```
|
||||
|
||||
<Tip>
|
||||
A primeira implantação normalmente leva de 10 a 15 minutos, pois as imagens dos containers são construídas. As próximas implantações são bem mais rápidas.
|
||||
</Tip>
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
## Comandos Adicionais da CLI
|
||||
|
||||
O CrewAI CLI oferece vários comandos para gerenciar suas implantações:
|
||||
|
||||
```bash
|
||||
# Liste todas as suas implantações
|
||||
crewai deploy list
|
||||
|
||||
# Consulte o status de uma implantação
|
||||
crewai deploy status
|
||||
|
||||
# Veja os logs da implantação
|
||||
crewai deploy logs
|
||||
|
||||
# Envie atualizações após alterações no código
|
||||
crewai deploy push
|
||||
|
||||
# Remova uma implantação
|
||||
crewai deploy remove <deployment_id>
|
||||
```
|
||||
|
||||
## Opção 2: Implantar Diretamente pela Interface Web
|
||||
|
||||
Você também pode implantar seus crews diretamente pela interface web da CrewAI AMP conectando sua conta do GitHub. Esta abordagem não requer utilizar a CLI na sua máquina local.
|
||||
|
||||
<Steps>
|
||||
|
||||
<Step title="Enviar no GitHub">
|
||||
|
||||
Você precisa subir seu crew para um repositório do GitHub. Caso ainda não tenha criado um crew, você pode [seguir este tutorial](/pt-BR/quickstart).
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Conectando o GitHub ao CrewAI AMP">
|
||||
|
||||
1. Faça login em [CrewAI AMP](https://app.crewai.com)
|
||||
2. Clique no botão "Connect GitHub"
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Selecionar o Repositório">
|
||||
|
||||
Após conectar sua conta GitHub, você poderá selecionar qual repositório deseja implantar:
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Definir as Variáveis de Ambiente">
|
||||
|
||||
Antes de implantar, você precisará configurar as variáveis de ambiente para conectar ao seu provedor de LLM ou outros serviços:
|
||||
|
||||
1. Você pode adicionar variáveis individualmente ou em lote
|
||||
2. Digite suas variáveis no formato `KEY=VALUE` (uma por linha)
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Implante Seu Crew">
|
||||
|
||||
1. Clique no botão "Deploy" para iniciar o processo de implantação
|
||||
2. Você pode monitorar o progresso pela barra de progresso
|
||||
3. A primeira implantação geralmente demora de 10 a 15 minutos; as próximas serão mais rápidas
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
Após a conclusão, você verá:
|
||||
- A URL exclusiva do seu crew
|
||||
- Um Bearer token para proteger a API do seu crew
|
||||
- Um botão "Delete" caso precise remover a implantação
|
||||
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## ⚠️ Requisitos de Segurança para Variáveis de Ambiente
|
||||
|
||||
<Warning>
|
||||
**Importante**: A CrewAI AMP possui restrições de segurança sobre os nomes de
|
||||
variáveis de ambiente que podem causar falha na implantação caso não sejam
|
||||
seguidas.
|
||||
</Warning>
|
||||
|
||||
### Padrões de Variáveis de Ambiente Bloqueados
|
||||
|
||||
Por motivos de segurança, os seguintes padrões de nome de variável de ambiente são **automaticamente filtrados** e causarão problemas de implantação:
|
||||
|
||||
**Padrões Bloqueados:**
|
||||
|
||||
- Variáveis terminando em `_TOKEN` (ex: `MY_API_TOKEN`)
|
||||
- Variáveis terminando em `_PASSWORD` (ex: `DB_PASSWORD`)
|
||||
- Variáveis terminando em `_SECRET` (ex: `API_SECRET`)
|
||||
- Variáveis terminando em `_KEY` em certos contextos
|
||||
|
||||
**Variáveis Bloqueadas Específicas:**
|
||||
|
||||
- `GITHUB_USER`, `GITHUB_TOKEN`
|
||||
- `AWS_REGION`, `AWS_DEFAULT_REGION`
|
||||
- Diversas variáveis internas do sistema CrewAI
|
||||
|
||||
### Exceções Permitidas
|
||||
|
||||
Algumas variáveis são explicitamente permitidas mesmo coincidindo com os padrões bloqueados:
|
||||
|
||||
- `AZURE_AD_TOKEN`
|
||||
- `AZURE_OPENAI_AD_TOKEN`
|
||||
- `ENTERPRISE_ACTION_TOKEN`
|
||||
- `CREWAI_ENTEPRISE_TOOLS_TOKEN`
|
||||
|
||||
### Como Corrigir Problemas de Nomeação
|
||||
|
||||
Se sua implantação falhar devido a restrições de variáveis de ambiente:
|
||||
|
||||
```bash
|
||||
# ❌ Estas irão causar falhas na implantação
|
||||
OPENAI_TOKEN=sk-...
|
||||
DATABASE_PASSWORD=mysenha
|
||||
API_SECRET=segredo123
|
||||
|
||||
# ✅ Utilize estes padrões de nomeação
|
||||
OPENAI_API_KEY=sk-...
|
||||
DATABASE_CREDENTIALS=mysenha
|
||||
API_CONFIG=segredo123
|
||||
```
|
||||
|
||||
### Melhores Práticas
|
||||
|
||||
1. **Use convenções padrão de nomenclatura**: `PROVIDER_API_KEY` em vez de `PROVIDER_TOKEN`
2. **Teste localmente primeiro**: Certifique-se de que seu crew funciona com as variáveis renomeadas
3. **Atualize seu código**: Altere todas as referências aos nomes antigos das variáveis (veja o esboço logo abaixo)
4. **Documente as mudanças**: Mantenha registro das variáveis renomeadas para seu time
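
Um esboço simples para localizar e atualizar essas referências; os nomes `OPENAI_TOKEN` e `OPENAI_API_KEY` abaixo são apenas ilustrativos:

```bash
# Localiza referências ao nome antigo no código e no arquivo .env
grep -rn "OPENAI_TOKEN" src/ .env

# Substitui pelo nome novo em todos os arquivos que o mencionam
# (no macOS, use `sed -i ''`; revise com `git diff` antes de fazer commit)
grep -rl "OPENAI_TOKEN" src/ | xargs sed -i 's/OPENAI_TOKEN/OPENAI_API_KEY/g'
```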
<Tip>
|
||||
Se você se deparar com falhas de implantação com erros enigmáticos de
|
||||
variáveis de ambiente, confira primeiro os nomes das variáveis em relação a
|
||||
esses padrões.
|
||||
</Tip>
|
||||
|
||||
### Interaja com Seu Crew Implantado
|
||||
|
||||
Após a implantação, você pode acessar seu crew por meio de:
|
||||
|
||||
1. **REST API**: A plataforma gera um endpoint HTTPS exclusivo com estas rotas principais (exemplo de chamada logo após esta lista):
|
||||
|
||||
- `/inputs`: Lista os parâmetros de entrada requeridos
|
||||
- `/kickoff`: Inicia uma execução com os inputs fornecidos
|
||||
- `/status/{kickoff_id}`: Consulta o status da execução
|
||||
|
||||
2. **Interface Web**: Acesse [app.crewai.com](https://app.crewai.com) para visualizar:
|
||||
- **Aba Status**: Informações da implantação, detalhes do endpoint da API e token de autenticação
|
||||
- **Aba Run**: Visualização da estrutura do seu crew
|
||||
- **Aba Executions**: Histórico de todas as execuções
|
||||
- **Aba Metrics**: Análises de desempenho
|
||||
- **Aba Traces**: Insights detalhados das execuções
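
Por exemplo, para listar os inputs esperados pela sua implantação (a URL e o token abaixo são fictícios; use os valores exibidos na aba Status):

```bash
curl -H "Authorization: Bearer SEU_BEARER_TOKEN" \
  https://sua-automacao.crewai.com/inputs
```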
### Dispare uma Execução
|
||||
|
||||
No dashboard Enterprise, você pode:
|
||||
|
||||
1. Clicar no nome do seu crew para abrir seus detalhes
|
||||
2. Selecionar "Trigger Crew" na interface de gerenciamento
|
||||
3. Inserir os inputs necessários no modal exibido
|
||||
4. Monitorar o progresso à medida que a execução avança pelo pipeline
|
||||
|
||||
### Monitoramento e Análises
|
||||
|
||||
A plataforma Enterprise oferece recursos abrangentes de observabilidade:
|
||||
|
||||
- **Gestão das Execuções**: Acompanhe execuções ativas e concluídas
|
||||
- **Traces**: Quebra detalhada de cada execução
|
||||
- **Métricas**: Uso de tokens, tempos de execução e custos
|
||||
- **Visualização em Linha do Tempo**: Representação visual das sequências de tarefas
|
||||
|
||||
### Funcionalidades Avançadas
|
||||
|
||||
A plataforma Enterprise também oferece:
|
||||
|
||||
- **Gerenciamento de Variáveis de Ambiente**: Armazene e gerencie com segurança as chaves de API
|
||||
- **Conexões com LLM**: Configure integrações com diversos provedores de LLM
|
||||
- **Repositório Custom Tools**: Crie, compartilhe e instale ferramentas
|
||||
- **Crew Studio**: Monte crews via interface de chat sem escrever código
|
||||
|
||||
<Card title="Precisa de Ajuda?" icon="headset" href="mailto:support@crewai.com">
|
||||
Entre em contato com nossa equipe de suporte para ajuda com questões de
|
||||
implantação ou dúvidas sobre a plataforma Enterprise.
|
||||
</Card>
|
||||
439
docs/pt-BR/enterprise/guides/deploy-to-amp.mdx
Normal file
@@ -0,0 +1,439 @@
|
||||
---
|
||||
title: "Deploy para AMP"
|
||||
description: "Implante seu Crew ou Flow no CrewAI AMP"
|
||||
icon: "rocket"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Note>
|
||||
Depois de criar um Crew ou Flow localmente (ou pelo Crew Studio), o próximo passo é
|
||||
implantá-lo na plataforma CrewAI AMP. Este guia cobre múltiplos métodos de
|
||||
implantação para ajudá-lo a escolher a melhor abordagem para o seu fluxo de trabalho.
|
||||
</Note>
|
||||
|
||||
## Pré-requisitos
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Projeto Pronto para Implantação" icon="check-circle">
|
||||
Você deve ter um Crew ou Flow funcionando localmente com sucesso.
|
||||
Siga nosso [guia de preparação](/pt-BR/enterprise/guides/prepare-for-deployment) para verificar a estrutura do seu projeto.
|
||||
</Card>
|
||||
<Card title="Repositório GitHub" icon="github">
|
||||
Seu código deve estar em um repositório do GitHub (para o método de integração com GitHub).
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
<Info>
|
||||
**Crews vs Flows**: Ambos os tipos de projeto podem ser implantados como "automações" no CrewAI AMP.
|
||||
O processo de implantação é o mesmo, mas eles têm estruturas de projeto diferentes.
|
||||
Veja [Preparar para Implantação](/pt-BR/enterprise/guides/prepare-for-deployment) para detalhes.
|
||||
</Info>
|
||||
|
||||
## Opção 1: Implantar Usando o CrewAI CLI
|
||||
|
||||
A CLI fornece a maneira mais rápida de implantar Crews ou Flows desenvolvidos localmente na plataforma AMP.
|
||||
A CLI detecta automaticamente o tipo do seu projeto a partir do `pyproject.toml` e faz o build adequadamente.
|
||||
|
||||
<Steps>
|
||||
<Step title="Instale o CrewAI CLI">
|
||||
Se ainda não tiver, instale o CrewAI CLI:
|
||||
|
||||
```bash
|
||||
pip install crewai[tools]
|
||||
```
|
||||
|
||||
<Tip>
|
||||
A CLI vem com o pacote principal CrewAI, mas o extra `[tools]` garante todas as dependências de implantação.
|
||||
</Tip>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Autentique-se na Plataforma Enterprise">
|
||||
Primeiro, você precisa autenticar sua CLI com a plataforma CrewAI AMP:
|
||||
|
||||
```bash
|
||||
# Se já possui uma conta CrewAI AMP, ou deseja criar uma:
|
||||
crewai login
|
||||
```
|
||||
|
||||
Ao executar o comando, a CLI irá:
|
||||
1. Exibir uma URL e um código de dispositivo único
|
||||
2. Abrir seu navegador para a página de autenticação
|
||||
3. Solicitar a confirmação do dispositivo
|
||||
4. Completar o processo de autenticação
|
||||
|
||||
Após a autenticação bem-sucedida, você verá uma mensagem de confirmação no terminal!
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Criar uma Implantação">
|
||||
|
||||
No diretório do seu projeto, execute:
|
||||
|
||||
```bash
|
||||
crewai deploy create
|
||||
```
|
||||
|
||||
Este comando irá:
|
||||
1. Detectar informações do seu repositório GitHub
|
||||
2. Identificar variáveis de ambiente no seu arquivo `.env` local
|
||||
3. Transferir essas variáveis com segurança para a plataforma Enterprise
|
||||
4. Criar uma nova implantação com um identificador único
|
||||
|
||||
Com a criação bem-sucedida, você verá uma mensagem como:
|
||||
```shell
|
||||
Deployment created successfully!
|
||||
Name: your_project_name
|
||||
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
|
||||
Current Status: Deploy Enqueued
|
||||
```
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Acompanhe o Progresso da Implantação">
|
||||
|
||||
Acompanhe o status da implantação com:
|
||||
|
||||
```bash
|
||||
crewai deploy status
|
||||
```
|
||||
|
||||
Para ver logs detalhados do processo de build:
|
||||
|
||||
```bash
|
||||
crewai deploy logs
|
||||
```
|
||||
|
||||
<Tip>
|
||||
A primeira implantação normalmente leva de 10 a 15 minutos, pois as imagens dos containers são construídas. As próximas implantações são bem mais rápidas.
|
||||
</Tip>
|
||||
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
## Comandos Adicionais da CLI
|
||||
|
||||
O CrewAI CLI oferece vários comandos para gerenciar suas implantações:
|
||||
|
||||
```bash
|
||||
# Liste todas as suas implantações
|
||||
crewai deploy list
|
||||
|
||||
# Consulte o status de uma implantação
|
||||
crewai deploy status
|
||||
|
||||
# Veja os logs da implantação
|
||||
crewai deploy logs
|
||||
|
||||
# Envie atualizações após alterações no código
|
||||
crewai deploy push
|
||||
|
||||
# Remova uma implantação
|
||||
crewai deploy remove <deployment_id>
|
||||
```
|
||||
|
||||
## Opção 2: Implantar Diretamente pela Interface Web
|
||||
|
||||
Você também pode implantar seus Crews ou Flows diretamente pela interface web do CrewAI AMP conectando sua conta do GitHub. Esta abordagem não requer utilizar a CLI na sua máquina local. A plataforma detecta automaticamente o tipo do seu projeto e trata o build adequadamente.
|
||||
|
||||
<Steps>
|
||||
|
||||
<Step title="Enviar para o GitHub">
|
||||
|
||||
Você precisa enviar seu crew para um repositório do GitHub. Caso ainda não tenha criado um crew, você pode [seguir este tutorial](/pt-BR/quickstart).
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Conectando o GitHub ao CrewAI AMP">
|
||||
|
||||
1. Faça login em [CrewAI AMP](https://app.crewai.com)
|
||||
2. Clique no botão "Connect GitHub"
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Selecionar o Repositório">
|
||||
|
||||
Após conectar sua conta GitHub, você poderá selecionar qual repositório deseja implantar:
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Definir as Variáveis de Ambiente">
|
||||
|
||||
Antes de implantar, você precisará configurar as variáveis de ambiente para conectar ao seu provedor de LLM ou outros serviços:
|
||||
|
||||
1. Você pode adicionar variáveis individualmente ou em lote
|
||||
2. Digite suas variáveis no formato `KEY=VALUE` (uma por linha)
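
Por exemplo (nomes e valores fictícios):

```bash
OPENAI_API_KEY=sk-...
SERPER_API_KEY=sua-chave-do-serper
```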
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Implante Seu Crew">
|
||||
|
||||
1. Clique no botão "Deploy" para iniciar o processo de implantação
|
||||
2. Você pode monitorar o progresso pela barra de progresso
|
||||
3. A primeira implantação geralmente demora de 10 a 15 minutos; as próximas serão mais rápidas
|
||||
|
||||
<Frame>
|
||||

|
||||
</Frame>
|
||||
|
||||
Após a conclusão, você verá:
|
||||
- A URL exclusiva do seu crew
|
||||
- Um Bearer token para proteger a API do seu crew
|
||||
- Um botão "Delete" caso precise remover a implantação
|
||||
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Opção 3: Reimplantar Usando API (Integração CI/CD)
|
||||
|
||||
Para implantações automatizadas em pipelines CI/CD, você pode usar a API do CrewAI para acionar reimplantações de crews existentes. Isso é particularmente útil para GitHub Actions, Jenkins ou outros workflows de automação.
|
||||
|
||||
<Steps>
|
||||
<Step title="Obtenha Seu Token de Acesso Pessoal">
|
||||
|
||||
Navegue até as configurações da sua conta CrewAI AMP para gerar um token de API:
|
||||
|
||||
1. Acesse [app.crewai.com](https://app.crewai.com)
|
||||
2. Clique em **Settings** → **Account** → **Personal Access Token**
|
||||
3. Gere um novo token e copie-o com segurança
|
||||
4. Armazene este token como um secret no seu sistema CI/CD
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Encontre o UUID da Sua Automação">
|
||||
|
||||
Localize o identificador único do seu crew implantado:
|
||||
|
||||
1. Acesse **Automations** no seu dashboard CrewAI AMP
|
||||
2. Selecione sua automação/crew existente
|
||||
3. Clique em **Additional Details**
|
||||
4. Copie o **UUID** - este identifica sua implantação específica do crew
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Acione a Reimplantação via API">
|
||||
|
||||
Use o endpoint da API de Deploy para acionar uma reimplantação:
|
||||
|
||||
```bash
|
||||
curl -i -X POST \
|
||||
-H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
|
||||
https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
|
||||
|
||||
# HTTP/2 200
|
||||
# content-type: application/json
|
||||
#
|
||||
# {
|
||||
# "uuid": "your-automation-uuid",
|
||||
# "status": "Deploy Enqueued",
|
||||
# "public_url": "https://your-crew-deployment.crewai.com",
|
||||
# "token": "your-bearer-token"
|
||||
# }
|
||||
```
|
||||
|
||||
<Info>
|
||||
Se sua automação foi criada originalmente conectada ao Git, a API automaticamente puxará as últimas alterações do seu repositório antes de reimplantar.
|
||||
</Info>
|
||||
|
||||
</Step>
|
||||
|
||||
<Step title="Exemplo de Integração com GitHub Actions">
|
||||
|
||||
Aqui está um workflow do GitHub Actions com gatilhos de implantação mais complexos:
|
||||
|
||||
```yaml
|
||||
name: Deploy CrewAI Automation
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [ main ]
|
||||
pull_request:
|
||||
types: [ labeled ]
|
||||
release:
|
||||
types: [ published ]
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
runs-on: ubuntu-latest
|
||||
if: |
|
||||
(github.event_name == 'push' && github.ref == 'refs/heads/main') ||
|
||||
(github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
|
||||
(github.event_name == 'release')
|
||||
steps:
|
||||
- name: Trigger CrewAI Redeployment
|
||||
run: |
|
||||
curl -X POST \
|
||||
-H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
|
||||
https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Adicione `CREWAI_PAT` e `CREWAI_AUTOMATION_UUID` como secrets do repositório. Para implantações de PR, adicione um label "deploy" para acionar o workflow.
|
||||
</Tip>
|
||||
|
||||
</Step>
|
||||
|
||||
</Steps>
|
||||
|
||||
## Interaja com Sua Automação Implantada
|
||||
|
||||
Após a implantação, você pode acessar seu crew através de:
|
||||
|
||||
1. **REST API**: A plataforma gera um endpoint HTTPS exclusivo com estas rotas principais (exemplo de chamada logo após esta lista):
|
||||
|
||||
- `/inputs`: Lista os parâmetros de entrada requeridos
|
||||
- `/kickoff`: Inicia uma execução com os inputs fornecidos
|
||||
- `/status/{kickoff_id}`: Consulta o status da execução
|
||||
|
||||
2. **Interface Web**: Acesse [app.crewai.com](https://app.crewai.com) para visualizar:
|
||||
- **Aba Status**: Informações da implantação, detalhes do endpoint da API e token de autenticação
|
||||
- **Aba Run**: Visualização da estrutura do seu crew
|
||||
- **Aba Executions**: Histórico de todas as execuções
|
||||
- **Aba Metrics**: Análises de desempenho
|
||||
- **Aba Traces**: Insights detalhados das execuções
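
Um exemplo ilustrativo de uso da API (URL, token e corpo da requisição são fictícios; confirme os inputs esperados consultando a rota `/inputs` da sua implantação):

```bash
# Inicia uma execução (o corpo segue o formato {"inputs": {...}})
curl -X POST \
  -H "Authorization: Bearer SEU_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI in Healthcare"}}' \
  https://sua-automacao.crewai.com/kickoff

# Consulta o status usando o kickoff_id retornado pela chamada anterior
curl -H "Authorization: Bearer SEU_BEARER_TOKEN" \
  https://sua-automacao.crewai.com/status/SEU_KICKOFF_ID
```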
### Dispare uma Execução
|
||||
|
||||
No dashboard Enterprise, você pode:
|
||||
|
||||
1. Clicar no nome do seu crew para abrir seus detalhes
|
||||
2. Selecionar "Trigger Crew" na interface de gerenciamento
|
||||
3. Inserir os inputs necessários no modal exibido
|
||||
4. Monitorar o progresso à medida que a execução avança pelo pipeline
|
||||
|
||||
### Monitoramento e Análises
|
||||
|
||||
A plataforma Enterprise oferece recursos abrangentes de observabilidade:
|
||||
|
||||
- **Gestão das Execuções**: Acompanhe execuções ativas e concluídas
|
||||
- **Traces**: Quebra detalhada de cada execução
|
||||
- **Métricas**: Uso de tokens, tempos de execução e custos
|
||||
- **Visualização em Linha do Tempo**: Representação visual das sequências de tarefas
|
||||
|
||||
### Funcionalidades Avançadas
|
||||
|
||||
A plataforma Enterprise também oferece:
|
||||
|
||||
- **Gerenciamento de Variáveis de Ambiente**: Armazene e gerencie com segurança as chaves de API
|
||||
- **Conexões com LLM**: Configure integrações com diversos provedores de LLM
|
||||
- **Repositório Custom Tools**: Crie, compartilhe e instale ferramentas
|
||||
- **Crew Studio**: Monte crews via interface de chat sem escrever código
|
||||
|
||||
## Solução de Problemas em Falhas de Implantação
|
||||
|
||||
Se sua implantação falhar, verifique estes problemas comuns:
|
||||
|
||||
### Falhas de Build
|
||||
|
||||
#### Arquivo uv.lock Ausente
|
||||
|
||||
**Sintoma**: Build falha no início com erros de resolução de dependências
|
||||
|
||||
**Solução**: Gere e faça commit do arquivo lock:
|
||||
|
||||
```bash
|
||||
uv lock
|
||||
git add uv.lock
|
||||
git commit -m "Add uv.lock for deployment"
|
||||
git push
|
||||
```
|
||||
|
||||
<Warning>
|
||||
O arquivo `uv.lock` é obrigatório para todas as implantações. Sem ele, a plataforma
|
||||
não consegue instalar suas dependências de forma confiável.
|
||||
</Warning>
|
||||
|
||||
#### Estrutura de Projeto Incorreta
|
||||
|
||||
**Sintoma**: Erros "Could not find entry point" ou "Module not found"
|
||||
|
||||
**Solução**: Verifique se seu projeto corresponde à estrutura esperada:
|
||||
|
||||
- **Tanto Crews quanto Flows**: Devem ter ponto de entrada em `src/project_name/main.py`
|
||||
- **Crews**: Usam uma função `run()` como ponto de entrada
|
||||
- **Flows**: Usam uma função `kickoff()` como ponto de entrada
|
||||
|
||||
Veja [Preparar para Implantação](/pt-BR/enterprise/guides/prepare-for-deployment) para diagramas de estrutura detalhados.
|
||||
|
||||
#### Decorador CrewBase Ausente
|
||||
|
||||
**Sintoma**: Erros "Crew not found", "Config not found" ou erros de configuração de agent/task
|
||||
|
||||
**Solução**: Certifique-se de que **todas** as classes crew usam o decorador `@CrewBase`:
|
||||
|
||||
```python
|
||||
from crewai.project import CrewBase, agent, crew, task
|
||||
|
||||
@CrewBase # Este decorador é OBRIGATÓRIO
|
||||
class YourCrew():
|
||||
"""Descrição do seu crew"""
|
||||
|
||||
@agent
|
||||
def my_agent(self) -> Agent:
|
||||
return Agent(
|
||||
config=self.agents_config['my_agent'], # type: ignore[index]
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# ... resto da definição do crew
|
||||
```
|
||||
|
||||
<Info>
|
||||
Isso se aplica a Crews independentes E crews embutidos dentro de projetos Flow.
|
||||
Toda classe crew precisa do decorador.
|
||||
</Info>
|
||||
|
||||
#### Tipo Incorreto no pyproject.toml
|
||||
|
||||
**Sintoma**: Build tem sucesso mas falha em runtime, ou comportamento inesperado
|
||||
|
||||
**Solução**: Verifique se a seção `[tool.crewai]` corresponde ao tipo do seu projeto:
|
||||
|
||||
```toml
|
||||
# Para projetos Crew:
|
||||
[tool.crewai]
|
||||
type = "crew"
|
||||
|
||||
# Para projetos Flow:
|
||||
[tool.crewai]
|
||||
type = "flow"
|
||||
```
|
||||
|
||||
### Falhas de Runtime
|
||||
|
||||
#### Falhas de Conexão com LLM
|
||||
|
||||
**Sintoma**: Erros de chave API, "model not found" ou falhas de autenticação
|
||||
|
||||
**Solução**:
|
||||
1. Verifique se a chave API do seu provedor LLM está corretamente definida nas variáveis de ambiente
|
||||
2. Certifique-se de que os nomes das variáveis de ambiente correspondem ao que seu código espera
|
||||
3. Teste localmente com exatamente as mesmas variáveis de ambiente antes de implantar (veja o esboço abaixo)
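
Um esboço para reproduzir localmente o ambiente da implantação (nomes de variáveis apenas ilustrativos):

```bash
# Exporte exatamente as mesmas variáveis configuradas na plataforma
export OPENAI_API_KEY="sk-..."
export SERPER_API_KEY="..."

# Execute o projeto localmente
crewai run            # para Crews
crewai flow kickoff   # para Flows (conforme a versão da CLI)
```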
#### Erros de Execução do Crew
|
||||
|
||||
**Sintoma**: Crew inicia mas falha durante a execução
|
||||
|
||||
**Solução**:
|
||||
1. Verifique os logs de execução no dashboard AMP (aba Traces)
|
||||
2. Verifique se todas as ferramentas têm as chaves API necessárias configuradas
|
||||
3. Certifique-se de que as configurações de agents em `agents.yaml` são válidas
|
||||
4. Verifique se há erros de sintaxe nas configurações de tasks em `tasks.yaml` (veja o exemplo de checagem abaixo)
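
Uma checagem rápida de sintaxe YAML, supondo que o PyYAML esteja disponível no ambiente do projeto (o caminho abaixo é ilustrativo):

```bash
python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1])); print('OK')" \
  src/my_project/config/tasks.yaml
```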
<Card title="Precisa de Ajuda?" icon="headset" href="mailto:support@crewai.com">
|
||||
Entre em contato com nossa equipe de suporte para ajuda com questões de
|
||||
implantação ou dúvidas sobre a plataforma AMP.
|
||||
</Card>
|
||||
305
docs/pt-BR/enterprise/guides/prepare-for-deployment.mdx
Normal file
@@ -0,0 +1,305 @@
|
||||
---
|
||||
title: "Preparar para Implantação"
|
||||
description: "Certifique-se de que seu Crew ou Flow está pronto para implantação no CrewAI AMP"
|
||||
icon: "clipboard-check"
|
||||
mode: "wide"
|
||||
---
|
||||
|
||||
<Note>
|
||||
Antes de implantar no CrewAI AMP, é crucial verificar se seu projeto está estruturado corretamente.
|
||||
Tanto Crews quanto Flows podem ser implantados como "automações", mas eles têm estruturas de projeto
|
||||
e requisitos diferentes que devem ser atendidos para uma implantação bem-sucedida.
|
||||
</Note>
|
||||
|
||||
## Entendendo Automações
|
||||
|
||||
No CrewAI AMP, **automações** é o termo geral para projetos de IA Agêntica implantáveis. Uma automação pode ser:
|
||||
|
||||
- **Um Crew**: Uma equipe independente de agentes de IA trabalhando juntos em tarefas
|
||||
- **Um Flow**: Um workflow orquestrado que pode combinar múltiplos crews, chamadas diretas de LLM e lógica procedural
|
||||
|
||||
Entender qual tipo você está implantando é essencial porque eles têm estruturas de projeto e pontos de entrada diferentes.
|
||||
|
||||
## Crews vs Flows: Principais Diferenças
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Projetos Crew" icon="users">
|
||||
Equipes de agentes de IA independentes com `crew.py` definindo agentes e tarefas. Ideal para tarefas focadas e colaborativas.
|
||||
</Card>
|
||||
<Card title="Projetos Flow" icon="diagram-project">
|
||||
Workflows orquestrados com crews embutidos em uma pasta `crews/`. Ideal para processos complexos de múltiplas etapas.
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
| Aspecto | Crew | Flow |
|
||||
|---------|------|------|
|
||||
| **Estrutura do projeto** | `src/project_name/` com `crew.py` | `src/project_name/` com pasta `crews/` |
|
||||
| **Localização da lógica principal** | `src/project_name/crew.py` | `src/project_name/main.py` (classe Flow) |
|
||||
| **Função de ponto de entrada** | `run()` em `main.py` | `kickoff()` em `main.py` |
|
||||
| **Tipo no pyproject.toml** | `type = "crew"` | `type = "flow"` |
|
||||
| **Comando CLI de criação** | `crewai create crew name` | `crewai create flow name` |
|
||||
| **Localização da configuração** | `src/project_name/config/` | `src/project_name/crews/crew_name/config/` |
|
||||
| **Pode conter outros crews** | Não | Sim (na pasta `crews/`) |
|
||||
|
||||
## Referência de Estrutura de Projeto
|
||||
|
||||
### Estrutura de Projeto Crew
|
||||
|
||||
Quando você executa `crewai create crew my_crew`, você obtém esta estrutura:
|
||||
|
||||
```
|
||||
my_crew/
|
||||
├── .gitignore
|
||||
├── pyproject.toml # Deve ter type = "crew"
|
||||
├── README.md
|
||||
├── .env
|
||||
├── uv.lock # OBRIGATÓRIO para implantação
|
||||
└── src/
|
||||
└── my_crew/
|
||||
├── __init__.py
|
||||
├── main.py # Ponto de entrada com função run()
|
||||
├── crew.py # Classe Crew com decorador @CrewBase
|
||||
├── tools/
|
||||
│ ├── custom_tool.py
|
||||
│ └── __init__.py
|
||||
└── config/
|
||||
├── agents.yaml # Definições de agentes
|
||||
└── tasks.yaml # Definições de tarefas
|
||||
```
|
||||
|
||||
<Warning>
|
||||
A estrutura aninhada `src/project_name/` é crítica para Crews.
|
||||
Colocar arquivos no nível errado causará falhas na implantação.
|
||||
</Warning>
|
||||
|
||||
### Estrutura de Projeto Flow
|
||||
|
||||
Quando você executa `crewai create flow my_flow`, você obtém esta estrutura:
|
||||
|
||||
```
|
||||
my_flow/
|
||||
├── .gitignore
|
||||
├── pyproject.toml # Deve ter type = "flow"
|
||||
├── README.md
|
||||
├── .env
|
||||
├── uv.lock # OBRIGATÓRIO para implantação
|
||||
└── src/
|
||||
└── my_flow/
|
||||
├── __init__.py
|
||||
├── main.py # Ponto de entrada com função kickoff() + classe Flow
|
||||
├── crews/ # Pasta de crews embutidos
|
||||
│ └── poem_crew/
|
||||
│ ├── __init__.py
|
||||
│ ├── poem_crew.py # Crew com decorador @CrewBase
|
||||
│ └── config/
|
||||
│ ├── agents.yaml
|
||||
│ └── tasks.yaml
|
||||
└── tools/
|
||||
├── __init__.py
|
||||
└── custom_tool.py
|
||||
```
|
||||
|
||||
<Info>
|
||||
Tanto Crews quanto Flows usam a estrutura `src/project_name/`.
|
||||
A diferença chave é que Flows têm uma pasta `crews/` para crews embutidos,
|
||||
enquanto Crews têm `crew.py` diretamente na pasta do projeto.
|
||||
</Info>
|
||||
|
||||
## Checklist Pré-Implantação
|
||||
|
||||
Use este checklist para verificar se seu projeto está pronto para implantação.
|
||||
|
||||
### 1. Verificar Configuração do pyproject.toml
|
||||
|
||||
Seu `pyproject.toml` deve incluir a seção `[tool.crewai]` correta:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Para Crews">
|
||||
```toml
|
||||
[tool.crewai]
|
||||
type = "crew"
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Para Flows">
|
||||
```toml
|
||||
[tool.crewai]
|
||||
type = "flow"
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
<Warning>
|
||||
Se o `type` não corresponder à estrutura do seu projeto, o build falhará ou
|
||||
a automação não funcionará corretamente.
|
||||
</Warning>
|
||||
|
||||
### 2. Garantir que o Arquivo uv.lock Existe
|
||||
|
||||
CrewAI usa `uv` para gerenciamento de dependências. O arquivo `uv.lock` garante builds reproduzíveis e é **obrigatório** para implantação.
|
||||
|
||||
```bash
|
||||
# Gerar ou atualizar o arquivo lock
|
||||
uv lock
|
||||
|
||||
# Verificar se existe
|
||||
ls -la uv.lock
|
||||
```
|
||||
|
||||
Se o arquivo não existir, execute `uv lock` e faça commit no seu repositório:
|
||||
|
||||
```bash
|
||||
uv lock
|
||||
git add uv.lock
|
||||
git commit -m "Add uv.lock for deployment"
|
||||
git push
|
||||
```
|
||||
|
||||
### 3. Validar Uso do Decorador CrewBase
|
||||
|
||||
**Toda classe crew deve usar o decorador `@CrewBase`.** Isso se aplica a:
|
||||
|
||||
- Projetos crew independentes
|
||||
- Crews embutidos dentro de projetos Flow
|
||||
|
||||
```python
|
||||
from crewai import Agent, Crew, Process, Task
|
||||
from crewai.project import CrewBase, agent, crew, task
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from typing import List
|
||||
|
||||
@CrewBase # Este decorador é OBRIGATÓRIO
|
||||
class MyCrew():
|
||||
"""Descrição do meu crew"""
|
||||
|
||||
agents: List[BaseAgent]
|
||||
tasks: List[Task]
|
||||
|
||||
@agent
|
||||
def my_agent(self) -> Agent:
|
||||
return Agent(
|
||||
config=self.agents_config['my_agent'], # type: ignore[index]
|
||||
verbose=True
|
||||
)
|
||||
|
||||
@task
|
||||
def my_task(self) -> Task:
|
||||
return Task(
|
||||
config=self.tasks_config['my_task'] # type: ignore[index]
|
||||
)
|
||||
|
||||
@crew
|
||||
def crew(self) -> Crew:
|
||||
return Crew(
|
||||
agents=self.agents,
|
||||
tasks=self.tasks,
|
||||
process=Process.sequential,
|
||||
verbose=True,
|
||||
)
|
||||
```
|
||||
|
||||
<Warning>
|
||||
Se você esquecer o decorador `@CrewBase`, sua implantação falhará com
|
||||
erros sobre configurações de agents ou tasks ausentes.
|
||||
</Warning>
|
||||
|
||||
### 4. Verificar Pontos de Entrada do Projeto
|
||||
|
||||
Tanto Crews quanto Flows têm seu ponto de entrada em `src/project_name/main.py`:
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Para Crews">
|
||||
O ponto de entrada usa uma função `run()`:
|
||||
|
||||
```python
|
||||
# src/my_crew/main.py
|
||||
from my_crew.crew import MyCrew
|
||||
|
||||
def run():
|
||||
"""Executa o crew."""
|
||||
inputs = {'topic': 'AI in Healthcare'}
|
||||
result = MyCrew().crew().kickoff(inputs=inputs)
|
||||
return result
|
||||
|
||||
if __name__ == "__main__":
|
||||
run()
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Para Flows">
|
||||
O ponto de entrada usa uma função `kickoff()` com uma classe Flow:
|
||||
|
||||
```python
|
||||
# src/my_flow/main.py
|
||||
from crewai.flow import Flow, listen, start
|
||||
from my_flow.crews.poem_crew.poem_crew import PoemCrew
|
||||
|
||||
class MyFlow(Flow):
|
||||
@start()
|
||||
def begin(self):
|
||||
# Lógica do Flow aqui
|
||||
result = PoemCrew().crew().kickoff(inputs={...})
|
||||
return result
|
||||
|
||||
def kickoff():
|
||||
"""Executa o flow."""
|
||||
MyFlow().kickoff()
|
||||
|
||||
if __name__ == "__main__":
|
||||
kickoff()
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
### 5. Preparar Variáveis de Ambiente
|
||||
|
||||
Antes da implantação, certifique-se de ter:
|
||||
|
||||
1. **Chaves de API de LLM** prontas (OpenAI, Anthropic, Google, etc.)
|
||||
2. **Chaves de API de ferramentas** se estiver usando ferramentas externas (Serper, etc.); veja o exemplo de `.env` abaixo
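
Um exemplo mínimo de `.env` (nomes e valores fictícios; use as variáveis que o seu código realmente lê):

```bash
# .env de exemplo
MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-...
SERPER_API_KEY=...
```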
<Tip>
|
||||
Teste seu projeto localmente com as mesmas variáveis de ambiente antes de implantar
|
||||
para detectar problemas de configuração antecipadamente.
|
||||
</Tip>
|
||||
|
||||
## Comandos de Validação Rápida
|
||||
|
||||
Execute estes comandos a partir da raiz do seu projeto para verificar rapidamente sua configuração:
|
||||
|
||||
```bash
|
||||
# 1. Verificar tipo do projeto no pyproject.toml
|
||||
grep -A2 "\[tool.crewai\]" pyproject.toml
|
||||
|
||||
# 2. Verificar se uv.lock existe
|
||||
ls -la uv.lock || echo "ERRO: uv.lock ausente! Execute 'uv lock'"
|
||||
|
||||
# 3. Verificar se estrutura src/ existe
|
||||
ls -la src/*/main.py 2>/dev/null || echo "Nenhum main.py encontrado em src/"
|
||||
|
||||
# 4. Para Crews - verificar se crew.py existe
|
||||
ls -la src/*/crew.py 2>/dev/null || echo "Nenhum crew.py (esperado para Crews)"
|
||||
|
||||
# 5. Para Flows - verificar se pasta crews/ existe
|
||||
ls -la src/*/crews/ 2>/dev/null || echo "Nenhuma pasta crews/ (esperado para Flows)"
|
||||
|
||||
# 6. Verificar uso do CrewBase
|
||||
grep -r "@CrewBase" . --include="*.py"
|
||||
```
|
||||
|
||||
## Erros Comuns de Configuração
|
||||
|
||||
| Erro | Sintoma | Correção |
|
||||
|------|---------|----------|
|
||||
| `uv.lock` ausente | Build falha durante resolução de dependências | Execute `uv lock` e faça commit |
|
||||
| `type` errado no pyproject.toml | Build bem-sucedido mas falha em runtime | Altere para o tipo correto |
|
||||
| Decorador `@CrewBase` ausente | Erros "Config not found" | Adicione decorador a todas as classes crew |
|
||||
| Arquivos na raiz ao invés de `src/` | Ponto de entrada não encontrado | Mova para `src/project_name/` |
|
||||
| `run()` ou `kickoff()` ausente | Não é possível iniciar automação | Adicione a função de entrada correta |
|
||||
|
||||
## Próximos Passos
|
||||
|
||||
Uma vez que seu projeto passar por todos os itens do checklist, você está pronto para implantar:
|
||||
|
||||
<Card title="Deploy para AMP" icon="rocket" href="/pt-BR/enterprise/guides/deploy-to-amp">
|
||||
Siga o guia de implantação para implantar seu Crew ou Flow no CrewAI AMP usando
|
||||
a CLI, interface web ou integração CI/CD.
|
||||
</Card>
|
||||
@@ -82,7 +82,7 @@ CrewAI AMP expande o poder do framework open-source com funcionalidades projetad
|
||||
<Card
|
||||
title="Implantar Crew"
|
||||
icon="rocket"
|
||||
href="/pt-BR/enterprise/guides/deploy-crew"
|
||||
href="/pt-BR/enterprise/guides/deploy-to-amp"
|
||||
>
|
||||
Implantar Crew
|
||||
</Card>
|
||||
@@ -92,11 +92,11 @@ CrewAI AMP expande o poder do framework open-source com funcionalidades projetad
|
||||
<Card
|
||||
title="Acesso via API"
|
||||
icon="code"
|
||||
href="/pt-BR/enterprise/guides/deploy-crew"
|
||||
href="/pt-BR/enterprise/guides/kickoff-crew"
|
||||
>
|
||||
Usar a API do Crew
|
||||
</Card>
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
Para instruções detalhadas, consulte nosso [guia de implantação](/pt-BR/enterprise/guides/deploy-crew) ou clique no botão abaixo para começar.
|
||||
Para instruções detalhadas, consulte nosso [guia de implantação](/pt-BR/enterprise/guides/deploy-to-amp) ou clique no botão abaixo para começar.
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from collections.abc import Callable, Sequence
|
||||
from collections.abc import Callable, Coroutine, Sequence
|
||||
import shutil
|
||||
import subprocess
|
||||
import time
|
||||
@@ -34,6 +34,11 @@ from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from crewai.agents.cache.cache_handler import CacheHandler
|
||||
from crewai.agents.crew_agent_executor import CrewAgentExecutor
|
||||
from crewai.events.event_bus import crewai_event_bus
|
||||
from crewai.events.types.agent_events import (
|
||||
LiteAgentExecutionCompletedEvent,
|
||||
LiteAgentExecutionErrorEvent,
|
||||
LiteAgentExecutionStartedEvent,
|
||||
)
|
||||
from crewai.events.types.knowledge_events import (
|
||||
KnowledgeQueryCompletedEvent,
|
||||
KnowledgeQueryFailedEvent,
|
||||
@@ -43,10 +48,10 @@ from crewai.events.types.memory_events import (
|
||||
MemoryRetrievalCompletedEvent,
|
||||
MemoryRetrievalStartedEvent,
|
||||
)
|
||||
from crewai.experimental.crew_agent_executor_flow import CrewAgentExecutorFlow
|
||||
from crewai.experimental.agent_executor import AgentExecutor
|
||||
from crewai.knowledge.knowledge import Knowledge
|
||||
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
|
||||
from crewai.lite_agent import LiteAgent
|
||||
from crewai.lite_agent_output import LiteAgentOutput
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.mcp import (
|
||||
MCPClient,
|
||||
@@ -64,15 +69,18 @@ from crewai.security.fingerprint import Fingerprint
|
||||
from crewai.tools.agent_tools.agent_tools import AgentTools
|
||||
from crewai.utilities.agent_utils import (
|
||||
get_tool_names,
|
||||
is_inside_event_loop,
|
||||
load_agent_from_repository,
|
||||
parse_tools,
|
||||
render_text_description_and_args,
|
||||
)
|
||||
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
|
||||
from crewai.utilities.converter import Converter
|
||||
from crewai.utilities.converter import Converter, ConverterError
|
||||
from crewai.utilities.guardrail import process_guardrail
|
||||
from crewai.utilities.guardrail_types import GuardrailType
|
||||
from crewai.utilities.llm_utils import create_llm
|
||||
from crewai.utilities.prompts import Prompts, StandardPromptResult, SystemPromptResult
|
||||
from crewai.utilities.pydantic_schema_utils import generate_model_description
|
||||
from crewai.utilities.token_counter_callback import TokenCalcHandler
|
||||
from crewai.utilities.training_handler import CrewTrainingHandler
|
||||
|
||||
@@ -89,9 +97,9 @@ if TYPE_CHECKING:
|
||||
from crewai_tools import CodeInterpreterTool
|
||||
|
||||
from crewai.agents.agent_builder.base_agent import PlatformAppOrAction
|
||||
from crewai.lite_agent_output import LiteAgentOutput
|
||||
from crewai.task import Task
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
from crewai.tools.structured_tool import CrewStructuredTool
|
||||
from crewai.utilities.types import LLMMessage
|
||||
|
||||
|
||||
@@ -113,7 +121,7 @@ class Agent(BaseAgent):
|
||||
The agent can also have memory, can operate in verbose mode, and can delegate tasks to other agents.
|
||||
|
||||
Attributes:
|
||||
agent_executor: An instance of the CrewAgentExecutor or CrewAgentExecutorFlow class.
|
||||
agent_executor: An instance of the CrewAgentExecutor or AgentExecutor class.
|
||||
role: The role of the agent.
|
||||
goal: The objective of the agent.
|
||||
backstory: The backstory of the agent.
|
||||
@@ -238,9 +246,9 @@ class Agent(BaseAgent):
|
||||
Can be a single A2AConfig/A2AClientConfig/A2AServerConfig, or a list of any number of A2AConfig/A2AClientConfig with a single A2AServerConfig.
|
||||
""",
|
||||
)
|
||||
executor_class: type[CrewAgentExecutor] | type[CrewAgentExecutorFlow] = Field(
|
||||
executor_class: type[CrewAgentExecutor] | type[AgentExecutor] = Field(
|
||||
default=CrewAgentExecutor,
|
||||
description="Class to use for the agent executor. Defaults to CrewAgentExecutor, can optionally use CrewAgentExecutorFlow.",
|
||||
description="Class to use for the agent executor. Defaults to CrewAgentExecutor, can optionally use AgentExecutor.",
|
||||
)
|
||||
|
||||
@model_validator(mode="before")
|
||||
@@ -1583,26 +1591,25 @@ class Agent(BaseAgent):
|
||||
)
|
||||
return None
|
||||
|
||||
def kickoff(
|
||||
def _prepare_kickoff(
|
||||
self,
|
||||
messages: str | list[LLMMessage],
|
||||
response_format: type[Any] | None = None,
|
||||
) -> LiteAgentOutput:
|
||||
"""
|
||||
Execute the agent with the given messages using a LiteAgent instance.
|
||||
) -> tuple[AgentExecutor, dict[str, str], dict[str, Any], list[CrewStructuredTool]]:
|
||||
"""Prepare common setup for kickoff execution.
|
||||
|
||||
This method is useful when you want to use the Agent configuration but
|
||||
with the simpler and more direct execution flow of LiteAgent.
|
||||
This method handles all the common preparation logic shared between
|
||||
kickoff() and kickoff_async(), including tool processing, prompt building,
|
||||
executor creation, and input formatting.
|
||||
|
||||
Args:
|
||||
messages: Either a string query or a list of message dictionaries.
|
||||
If a string is provided, it will be converted to a user message.
|
||||
If a list is provided, each dict should have 'role' and 'content' keys.
|
||||
response_format: Optional Pydantic model for structured output.
|
||||
|
||||
Returns:
|
||||
LiteAgentOutput: The result of the agent execution.
|
||||
Tuple of (executor, inputs, agent_info, parsed_tools) ready for execution.
|
||||
"""
|
||||
# Process platform apps and MCP tools
|
||||
if self.apps:
|
||||
platform_tools = self.get_platform_tools(self.apps)
|
||||
if platform_tools and self.tools is not None:
|
||||
@@ -1612,25 +1619,359 @@ class Agent(BaseAgent):
|
||||
if mcps and self.tools is not None:
|
||||
self.tools.extend(mcps)
|
||||
|
||||
lite_agent = LiteAgent(
|
||||
id=self.id,
|
||||
role=self.role,
|
||||
goal=self.goal,
|
||||
backstory=self.backstory,
|
||||
llm=self.llm,
|
||||
tools=self.tools or [],
|
||||
max_iterations=self.max_iter,
|
||||
max_execution_time=self.max_execution_time,
|
||||
respect_context_window=self.respect_context_window,
|
||||
verbose=self.verbose,
|
||||
response_format=response_format,
|
||||
# Prepare tools
|
||||
raw_tools: list[BaseTool] = self.tools or []
|
||||
parsed_tools = parse_tools(raw_tools)
|
||||
|
||||
# Build agent_info for backward-compatible event emission
|
||||
agent_info = {
|
||||
"id": self.id,
|
||||
"role": self.role,
|
||||
"goal": self.goal,
|
||||
"backstory": self.backstory,
|
||||
"tools": raw_tools,
|
||||
"verbose": self.verbose,
|
||||
}
|
||||
|
||||
# Build prompt for standalone execution
|
||||
prompt = Prompts(
|
||||
agent=self,
|
||||
has_tools=len(raw_tools) > 0,
|
||||
i18n=self.i18n,
|
||||
original_agent=self,
|
||||
guardrail=self.guardrail,
|
||||
guardrail_max_retries=self.guardrail_max_retries,
|
||||
use_system_prompt=self.use_system_prompt,
|
||||
system_template=self.system_template,
|
||||
prompt_template=self.prompt_template,
|
||||
response_template=self.response_template,
|
||||
).task_execution()
|
||||
|
||||
# Prepare stop words
|
||||
stop_words = [self.i18n.slice("observation")]
|
||||
if self.response_template:
|
||||
stop_words.append(
|
||||
self.response_template.split("{{ .Response }}")[1].strip()
|
||||
)
|
||||
|
||||
# Get RPM limit function
|
||||
rpm_limit_fn = (
|
||||
self._rpm_controller.check_or_wait if self._rpm_controller else None
|
||||
)
|
||||
|
||||
return lite_agent.kickoff(messages)
|
||||
# Create the executor for standalone mode (no crew, no task)
|
||||
executor = AgentExecutor(
|
||||
task=None,
|
||||
crew=None,
|
||||
llm=cast(BaseLLM, self.llm),
|
||||
agent=self,
|
||||
prompt=prompt,
|
||||
max_iter=self.max_iter,
|
||||
tools=parsed_tools,
|
||||
tools_names=get_tool_names(parsed_tools),
|
||||
stop_words=stop_words,
|
||||
tools_description=render_text_description_and_args(parsed_tools),
|
||||
tools_handler=self.tools_handler,
|
||||
original_tools=raw_tools,
|
||||
step_callback=self.step_callback,
|
||||
function_calling_llm=self.function_calling_llm,
|
||||
respect_context_window=self.respect_context_window,
|
||||
request_within_rpm_limit=rpm_limit_fn,
|
||||
callbacks=[TokenCalcHandler(self._token_process)],
|
||||
response_model=response_format,
|
||||
i18n=self.i18n,
|
||||
)
|
||||
|
||||
# Format messages
|
||||
if isinstance(messages, str):
|
||||
formatted_messages = messages
|
||||
else:
|
||||
formatted_messages = "\n".join(
|
||||
str(msg.get("content", "")) for msg in messages if msg.get("content")
|
||||
)
|
||||
|
||||
# Build the input dict for the executor
|
||||
inputs = {
|
||||
"input": formatted_messages,
|
||||
"tool_names": get_tool_names(parsed_tools),
|
||||
"tools": render_text_description_and_args(parsed_tools),
|
||||
}
|
||||
|
||||
return executor, inputs, agent_info, parsed_tools
|
||||
|
||||
def kickoff(
|
||||
self,
|
||||
messages: str | list[LLMMessage],
|
||||
response_format: type[Any] | None = None,
|
||||
) -> LiteAgentOutput | Coroutine[Any, Any, LiteAgentOutput]:
|
||||
"""
|
||||
Execute the agent with the given messages using the AgentExecutor.
|
||||
|
||||
This method provides standalone agent execution without requiring a Crew.
|
||||
It supports tools, response formatting, and guardrails.
|
||||
|
||||
When called from within a Flow (sync or async method), this automatically
|
||||
detects the event loop and returns a coroutine that the Flow framework
|
||||
awaits. Users don't need to handle async explicitly.
|
||||
|
||||
Args:
|
||||
messages: Either a string query or a list of message dictionaries.
|
||||
If a string is provided, it will be converted to a user message.
|
||||
If a list is provided, each dict should have 'role' and 'content' keys.
|
||||
response_format: Optional Pydantic model for structured output.
|
||||
|
||||
Returns:
|
||||
LiteAgentOutput: The result of the agent execution.
|
||||
When inside a Flow, returns a coroutine that resolves to LiteAgentOutput.
|
||||
|
||||
Note:
|
||||
For explicit async usage outside of Flow, use kickoff_async() directly.
|
||||
"""
|
||||
# Magic auto-async: if inside event loop (e.g., inside a Flow),
|
||||
# return coroutine for Flow to await
|
||||
if is_inside_event_loop():
|
||||
return self.kickoff_async(messages, response_format)
|
||||
|
||||
executor, inputs, agent_info, parsed_tools = self._prepare_kickoff(
|
||||
messages, response_format
|
||||
)
|
||||
|
||||
try:
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LiteAgentExecutionStartedEvent(
|
||||
agent_info=agent_info,
|
||||
tools=parsed_tools,
|
||||
messages=messages,
|
||||
),
|
||||
)
|
||||
|
||||
output = self._execute_and_build_output(executor, inputs, response_format)
|
||||
|
||||
if self.guardrail is not None:
|
||||
output = self._process_kickoff_guardrail(
|
||||
output=output,
|
||||
executor=executor,
|
||||
inputs=inputs,
|
||||
response_format=response_format,
|
||||
)
|
||||
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LiteAgentExecutionCompletedEvent(
|
||||
agent_info=agent_info,
|
||||
output=output.raw,
|
||||
),
|
||||
)
|
||||
|
||||
return output
|
||||
|
||||
except Exception as e:
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LiteAgentExecutionErrorEvent(
|
||||
agent_info=agent_info,
|
||||
error=str(e),
|
||||
),
|
||||
)
|
||||
raise
|
||||
|
||||
def _execute_and_build_output(
|
||||
self,
|
||||
executor: AgentExecutor,
|
||||
inputs: dict[str, str],
|
||||
response_format: type[Any] | None = None,
|
||||
) -> LiteAgentOutput:
|
||||
"""Execute the agent and build the output object.
|
||||
|
||||
Args:
|
||||
executor: The executor instance.
|
||||
inputs: Input dictionary for execution.
|
||||
response_format: Optional response format.
|
||||
|
||||
Returns:
|
||||
LiteAgentOutput with raw output, formatted result, and metrics.
|
||||
"""
|
||||
import json
|
||||
|
||||
# Execute the agent (this is called from sync path, so invoke returns dict)
|
||||
result = cast(dict[str, Any], executor.invoke(inputs))
|
||||
raw_output = result.get("output", "")
|
||||
|
||||
# Handle response format conversion
|
||||
formatted_result: BaseModel | None = None
|
||||
if response_format:
|
||||
try:
|
||||
model_schema = generate_model_description(response_format)
|
||||
schema = json.dumps(model_schema, indent=2)
|
||||
instructions = self.i18n.slice("formatted_task_instructions").format(
|
||||
output_format=schema
|
||||
)
|
||||
|
||||
converter = Converter(
|
||||
llm=self.llm,
|
||||
text=raw_output,
|
||||
model=response_format,
|
||||
instructions=instructions,
|
||||
)
|
||||
|
||||
conversion_result = converter.to_pydantic()
|
||||
if isinstance(conversion_result, BaseModel):
|
||||
formatted_result = conversion_result
|
||||
except ConverterError:
|
||||
pass # Keep raw output if conversion fails
|
||||
|
||||
# Get token usage metrics
|
||||
if isinstance(self.llm, BaseLLM):
|
||||
usage_metrics = self.llm.get_token_usage_summary()
|
||||
else:
|
||||
usage_metrics = self._token_process.get_summary()
|
||||
|
||||
return LiteAgentOutput(
|
||||
raw=raw_output,
|
||||
pydantic=formatted_result,
|
||||
agent_role=self.role,
|
||||
usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
|
||||
messages=executor.messages,
|
||||
)
|
||||
|
||||
async def _execute_and_build_output_async(
|
||||
self,
|
||||
executor: AgentExecutor,
|
||||
inputs: dict[str, str],
|
||||
response_format: type[Any] | None = None,
|
||||
) -> LiteAgentOutput:
|
||||
"""Execute the agent asynchronously and build the output object.
|
||||
|
||||
This is the async version of _execute_and_build_output that uses
|
||||
invoke_async() for native async execution within event loops.
|
||||
|
||||
Args:
|
||||
executor: The executor instance.
|
||||
inputs: Input dictionary for execution.
|
||||
response_format: Optional response format.
|
||||
|
||||
Returns:
|
||||
LiteAgentOutput with raw output, formatted result, and metrics.
|
||||
"""
|
||||
import json
|
||||
|
||||
# Execute the agent asynchronously
|
||||
result = await executor.invoke_async(inputs)
|
||||
raw_output = result.get("output", "")
|
||||
|
||||
# Handle response format conversion
|
||||
formatted_result: BaseModel | None = None
|
||||
if response_format:
|
||||
try:
|
||||
model_schema = generate_model_description(response_format)
|
||||
schema = json.dumps(model_schema, indent=2)
|
||||
instructions = self.i18n.slice("formatted_task_instructions").format(
|
||||
output_format=schema
|
||||
)
|
||||
|
||||
converter = Converter(
|
||||
llm=self.llm,
|
||||
text=raw_output,
|
||||
model=response_format,
|
||||
instructions=instructions,
|
||||
)
|
||||
|
||||
conversion_result = converter.to_pydantic()
|
||||
if isinstance(conversion_result, BaseModel):
|
||||
formatted_result = conversion_result
|
||||
except ConverterError:
|
||||
pass # Keep raw output if conversion fails
|
||||
|
||||
# Get token usage metrics
|
||||
if isinstance(self.llm, BaseLLM):
|
||||
usage_metrics = self.llm.get_token_usage_summary()
|
||||
else:
|
||||
usage_metrics = self._token_process.get_summary()
|
||||
|
||||
return LiteAgentOutput(
|
||||
raw=raw_output,
|
||||
pydantic=formatted_result,
|
||||
agent_role=self.role,
|
||||
usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
|
||||
messages=executor.messages,
|
||||
)
|
||||
|
||||
def _process_kickoff_guardrail(
|
||||
self,
|
||||
output: LiteAgentOutput,
|
||||
executor: AgentExecutor,
|
||||
inputs: dict[str, str],
|
||||
response_format: type[Any] | None = None,
|
||||
retry_count: int = 0,
|
||||
) -> LiteAgentOutput:
|
||||
"""Process guardrail for kickoff execution with retry logic.
|
||||
|
||||
Args:
|
||||
output: Current agent output.
|
||||
executor: The executor instance.
|
||||
inputs: Input dictionary for re-execution.
|
||||
response_format: Optional response format.
|
||||
retry_count: Current retry count.
|
||||
|
||||
Returns:
|
||||
Validated/updated output.
|
||||
"""
|
||||
from crewai.utilities.guardrail_types import GuardrailCallable
|
||||
|
||||
# Ensure guardrail is callable
|
||||
guardrail_callable: GuardrailCallable
|
||||
if isinstance(self.guardrail, str):
|
||||
from crewai.tasks.llm_guardrail import LLMGuardrail
|
||||
|
||||
guardrail_callable = cast(
|
||||
GuardrailCallable,
|
||||
LLMGuardrail(description=self.guardrail, llm=cast(BaseLLM, self.llm)),
|
||||
)
|
||||
elif callable(self.guardrail):
|
||||
guardrail_callable = self.guardrail
|
||||
else:
|
||||
# Should not happen if called from kickoff with guardrail check
|
||||
return output
|
||||
|
||||
guardrail_result = process_guardrail(
|
||||
output=output,
|
||||
guardrail=guardrail_callable,
|
||||
retry_count=retry_count,
|
||||
event_source=self,
|
||||
from_agent=self,
|
||||
)
|
||||
|
||||
if not guardrail_result.success:
|
||||
if retry_count >= self.guardrail_max_retries:
|
||||
raise ValueError(
|
||||
f"Agent's guardrail failed validation after {self.guardrail_max_retries} retries. "
|
||||
f"Last error: {guardrail_result.error}"
|
||||
)
|
||||
|
||||
# Add feedback and re-execute
|
||||
executor._append_message_to_state(
|
||||
guardrail_result.error or "Guardrail validation failed",
|
||||
role="user",
|
||||
)
|
||||
|
||||
# Re-execute and build new output
|
||||
output = self._execute_and_build_output(executor, inputs, response_format)
|
||||
|
||||
# Recursively retry guardrail
|
||||
return self._process_kickoff_guardrail(
|
||||
output=output,
|
||||
executor=executor,
|
||||
inputs=inputs,
|
||||
response_format=response_format,
|
||||
retry_count=retry_count + 1,
|
||||
)
|
||||
|
||||
# Apply guardrail result if available
|
||||
if guardrail_result.result is not None:
|
||||
if isinstance(guardrail_result.result, str):
|
||||
output.raw = guardrail_result.result
|
||||
elif isinstance(guardrail_result.result, BaseModel):
|
||||
output.pydantic = guardrail_result.result
|
||||
|
||||
return output
|
||||
|
||||
async def kickoff_async(
|
||||
self,
|
||||
@@ -1638,9 +1979,11 @@ class Agent(BaseAgent):
|
||||
response_format: type[Any] | None = None,
|
||||
) -> LiteAgentOutput:
|
||||
"""
|
||||
Execute the agent asynchronously with the given messages using a LiteAgent instance.
|
||||
Execute the agent asynchronously with the given messages.
|
||||
|
||||
This is the async version of the kickoff method.
|
||||
This is the async version of the kickoff method that uses native async
|
||||
execution. It is designed for use within async contexts, such as when
|
||||
called from within an async Flow method.
|
||||
|
||||
Args:
|
||||
messages: Either a string query or a list of message dictionaries.
|
||||
@@ -1651,21 +1994,48 @@ class Agent(BaseAgent):
|
||||
Returns:
|
||||
LiteAgentOutput: The result of the agent execution.
|
||||
"""
|
||||
lite_agent = LiteAgent(
|
||||
role=self.role,
|
||||
goal=self.goal,
|
||||
backstory=self.backstory,
|
||||
llm=self.llm,
|
||||
tools=self.tools or [],
|
||||
max_iterations=self.max_iter,
|
||||
max_execution_time=self.max_execution_time,
|
||||
respect_context_window=self.respect_context_window,
|
||||
verbose=self.verbose,
|
||||
response_format=response_format,
|
||||
i18n=self.i18n,
|
||||
original_agent=self,
|
||||
guardrail=self.guardrail,
|
||||
guardrail_max_retries=self.guardrail_max_retries,
|
||||
executor, inputs, agent_info, parsed_tools = self._prepare_kickoff(
|
||||
messages, response_format
|
||||
)
|
||||
|
||||
return await lite_agent.kickoff_async(messages)
|
||||
try:
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LiteAgentExecutionStartedEvent(
|
||||
agent_info=agent_info,
|
||||
tools=parsed_tools,
|
||||
messages=messages,
|
||||
),
|
||||
)
|
||||
|
||||
output = await self._execute_and_build_output_async(
|
||||
executor, inputs, response_format
|
||||
)
|
||||
|
||||
if self.guardrail is not None:
|
||||
output = self._process_kickoff_guardrail(
|
||||
output=output,
|
||||
executor=executor,
|
||||
inputs=inputs,
|
||||
response_format=response_format,
|
||||
)
|
||||
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LiteAgentExecutionCompletedEvent(
|
||||
agent_info=agent_info,
|
||||
output=output.raw,
|
||||
),
|
||||
)
|
||||
|
||||
return output
|
||||
|
||||
except Exception as e:
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LiteAgentExecutionErrorEvent(
|
||||
agent_info=agent_info,
|
||||
error=str(e),
|
||||
),
|
||||
)
|
||||
raise
|
||||
|
||||
@@ -21,9 +21,9 @@ if TYPE_CHECKING:
|
||||
|
||||
|
||||
class CrewAgentExecutorMixin:
|
||||
crew: Crew
|
||||
crew: Crew | None
|
||||
agent: Agent
|
||||
task: Task
|
||||
task: Task | None
|
||||
iterations: int
|
||||
max_iter: int
|
||||
messages: list[LLMMessage]
|
||||
|
||||
@@ -0,0 +1,32 @@
|
||||
from crewai.cli.authentication.providers.base_provider import BaseProvider
|
||||
|
||||
|
||||
class KeycloakProvider(BaseProvider):
|
||||
def get_authorize_url(self) -> str:
|
||||
return f"{self._oauth2_base_url()}/realms/{self.settings.extra.get('realm')}/protocol/openid-connect/auth/device"
|
||||
|
||||
def get_token_url(self) -> str:
|
||||
return f"{self._oauth2_base_url()}/realms/{self.settings.extra.get('realm')}/protocol/openid-connect/token"
|
||||
|
||||
def get_jwks_url(self) -> str:
|
||||
return f"{self._oauth2_base_url()}/realms/{self.settings.extra.get('realm')}/protocol/openid-connect/certs"
|
||||
|
||||
def get_issuer(self) -> str:
|
||||
return f"{self._oauth2_base_url()}/realms/{self.settings.extra.get('realm')}"
|
||||
|
||||
def get_audience(self) -> str:
|
||||
return self.settings.audience or "no-audience-provided"
|
||||
|
||||
def get_client_id(self) -> str:
|
||||
if self.settings.client_id is None:
|
||||
raise ValueError(
|
||||
"Client ID is required. Please set it in the configuration."
|
||||
)
|
||||
return self.settings.client_id
|
||||
|
||||
def get_required_fields(self) -> list[str]:
|
||||
return ["realm"]
|
||||
|
||||
def _oauth2_base_url(self) -> str:
|
||||
domain = self.settings.domain.removeprefix("https://").removeprefix("http://")
|
||||
return f"https://{domain}"
|
||||
@@ -1,4 +1,4 @@
from crewai.experimental.crew_agent_executor_flow import CrewAgentExecutorFlow
from crewai.experimental.agent_executor import AgentExecutor, CrewAgentExecutorFlow
from crewai.experimental.evaluation import (
    AgentEvaluationResult,
    AgentEvaluator,
@@ -23,8 +23,9 @@ from crewai.experimental.evaluation import (
__all__ = [
    "AgentEvaluationResult",
    "AgentEvaluator",
    "AgentExecutor",
    "BaseEvaluator",
    "CrewAgentExecutorFlow",
    "CrewAgentExecutorFlow",  # Deprecated alias for AgentExecutor
    "EvaluationScore",
    "EvaluationTraceCallback",
    "ExperimentResult",
@@ -1,6 +1,6 @@
from __future__ import annotations

from collections.abc import Callable
from collections.abc import Callable, Coroutine
import threading
from typing import TYPE_CHECKING, Any, Literal, cast
from uuid import uuid4
@@ -37,6 +37,7 @@ from crewai.utilities.agent_utils import (
    handle_unknown_error,
    has_reached_max_iterations,
    is_context_length_exceeded,
    is_inside_event_loop,
    process_llm_response,
)
from crewai.utilities.constants import TRAINING_DATA_FILE
@@ -73,13 +74,17 @@ class AgentReActState(BaseModel):
    ask_for_human_input: bool = Field(default=False)


class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
    """Flow-based executor matching CrewAgentExecutor interface.
class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
    """Flow-based agent executor for both standalone and crew-bound execution.

    Inherits from:
    - Flow[AgentReActState]: Provides flow orchestration capabilities
    - CrewAgentExecutorMixin: Provides memory methods (short/long/external term)

    This executor can operate in two modes:
    - Standalone mode: When crew and task are None (used by Agent.kickoff())
    - Crew mode: When crew and task are provided (used by Agent.execute_task())

    Note: Multiple instances may be created during agent initialization
    (cache setup, RPM controller setup, etc.) but only the final instance
    should execute tasks via invoke().
|
||||
def __init__(
|
||||
self,
|
||||
llm: BaseLLM,
|
||||
task: Task,
|
||||
crew: Crew,
|
||||
agent: Agent,
|
||||
prompt: SystemPromptResult | StandardPromptResult,
|
||||
max_iter: int,
|
||||
@@ -98,6 +101,8 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
stop_words: list[str],
|
||||
tools_description: str,
|
||||
tools_handler: ToolsHandler,
|
||||
task: Task | None = None,
|
||||
crew: Crew | None = None,
|
||||
step_callback: Any = None,
|
||||
original_tools: list[BaseTool] | None = None,
|
||||
function_calling_llm: BaseLLM | Any | None = None,
|
||||
@@ -111,8 +116,6 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
|
||||
Args:
|
||||
llm: Language model instance.
|
||||
task: Task to execute.
|
||||
crew: Crew instance.
|
||||
agent: Agent to execute.
|
||||
prompt: Prompt templates.
|
||||
max_iter: Maximum iterations.
|
||||
@@ -121,6 +124,8 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
stop_words: Stop word list.
|
||||
tools_description: Tool descriptions.
|
||||
tools_handler: Tool handler instance.
|
||||
task: Optional task to execute (None for standalone agent execution).
|
||||
crew: Optional crew instance (None for standalone agent execution).
|
||||
step_callback: Optional step callback.
|
||||
original_tools: Original tool list.
|
||||
function_calling_llm: Optional function calling LLM.
|
||||
@@ -131,9 +136,9 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
"""
|
||||
self._i18n: I18N = i18n or get_i18n()
|
||||
self.llm = llm
|
||||
self.task = task
|
||||
self.task: Task | None = task
|
||||
self.agent = agent
|
||||
self.crew = crew
|
||||
self.crew: Crew | None = crew
|
||||
self.prompt = prompt
|
||||
self.tools = tools
|
||||
self.tools_names = tools_names
|
||||
@@ -178,7 +183,6 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
else self.stop
|
||||
)
|
||||
)
|
||||
|
||||
self._state = AgentReActState()
|
||||
|
||||
def _ensure_flow_initialized(self) -> None:
|
||||
@@ -264,7 +268,7 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
printer=self._printer,
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
response_model=self.response_model,
|
||||
response_model=None,
|
||||
executor_context=self,
|
||||
)
|
||||
|
||||
@@ -449,9 +453,99 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
|
||||
return "initialized"
|
||||
|
||||
def invoke(self, inputs: dict[str, Any]) -> dict[str, Any]:
|
||||
def invoke(
|
||||
self, inputs: dict[str, Any]
|
||||
) -> dict[str, Any] | Coroutine[Any, Any, dict[str, Any]]:
|
||||
"""Execute agent with given inputs.
|
||||
|
||||
When called from within an existing event loop (e.g., inside a Flow),
|
||||
this method returns a coroutine that should be awaited. The Flow
|
||||
framework handles this automatically.
|
||||
|
||||
Args:
|
||||
inputs: Input dictionary containing prompt variables.
|
||||
|
||||
Returns:
|
||||
Dictionary with agent output, or a coroutine if inside an event loop.
|
||||
"""
|
||||
# Magic auto-async: if inside event loop, return coroutine for Flow to await
|
||||
if is_inside_event_loop():
|
||||
return self.invoke_async(inputs)
|
||||
|
||||
self._ensure_flow_initialized()
|
||||
|
||||
with self._execution_lock:
|
||||
if self._is_executing:
|
||||
raise RuntimeError(
|
||||
"Executor is already running. "
|
||||
"Cannot invoke the same executor instance concurrently."
|
||||
)
|
||||
self._is_executing = True
|
||||
self._has_been_invoked = True
|
||||
|
||||
try:
|
||||
# Reset state for fresh execution
|
||||
self.state.messages.clear()
|
||||
self.state.iterations = 0
|
||||
self.state.current_answer = None
|
||||
self.state.is_finished = False
|
||||
|
||||
if "system" in self.prompt:
|
||||
prompt = cast("SystemPromptResult", self.prompt)
|
||||
system_prompt = self._format_prompt(prompt["system"], inputs)
|
||||
user_prompt = self._format_prompt(prompt["user"], inputs)
|
||||
self.state.messages.append(
|
||||
format_message_for_llm(system_prompt, role="system")
|
||||
)
|
||||
self.state.messages.append(format_message_for_llm(user_prompt))
|
||||
else:
|
||||
user_prompt = self._format_prompt(self.prompt["prompt"], inputs)
|
||||
self.state.messages.append(format_message_for_llm(user_prompt))
|
||||
|
||||
self.state.ask_for_human_input = bool(
|
||||
inputs.get("ask_for_human_input", False)
|
||||
)
|
||||
|
||||
self.kickoff()
|
||||
|
||||
formatted_answer = self.state.current_answer
|
||||
|
||||
if not isinstance(formatted_answer, AgentFinish):
|
||||
raise RuntimeError(
|
||||
"Agent execution ended without reaching a final answer."
|
||||
)
|
||||
|
||||
if self.state.ask_for_human_input:
|
||||
formatted_answer = self._handle_human_feedback(formatted_answer)
|
||||
|
||||
self._create_short_term_memory(formatted_answer)
|
||||
self._create_long_term_memory(formatted_answer)
|
||||
self._create_external_memory(formatted_answer)
|
||||
|
||||
return {"output": formatted_answer.output}
|
||||
|
||||
except AssertionError:
|
||||
fail_text = Text()
|
||||
fail_text.append("❌ ", style="red bold")
|
||||
fail_text.append(
|
||||
"Agent failed to reach a final answer. This is likely a bug - please report it.",
|
||||
style="red",
|
||||
)
|
||||
self._console.print(fail_text)
|
||||
raise
|
||||
except Exception as e:
|
||||
handle_unknown_error(self._printer, e)
|
||||
raise
|
||||
finally:
|
||||
self._is_executing = False
|
||||
|
||||
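invoke() above now detects a running event loop and hands back the coroutine from invoke_async() instead of executing inline, which is what lets Agent.kickoff() stay a plain synchronous call inside a Flow method. A hedged caller-side sketch of both paths (the class below is invented to show the convention; user code normally goes through Agent.kickoff() rather than the executor):

```python
import asyncio


class DualModeRunner:
    """Hypothetical object following the same sync-or-coroutine convention as invoke()."""

    def invoke(self, inputs: dict):
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No loop running: execute synchronously and return the result directly.
            return {"output": inputs["input"].upper()}
        # Inside a loop (e.g. a Flow method): hand back a coroutine for the caller to await.
        return self.invoke_async(inputs)

    async def invoke_async(self, inputs: dict) -> dict:
        await asyncio.sleep(0)
        return {"output": inputs["input"].upper()}


runner = DualModeRunner()
print(runner.invoke({"input": "hello"}))  # plain dict when no event loop is running


async def main() -> None:
    maybe_coro = runner.invoke({"input": "hello"})
    print(await maybe_coro)  # inside a loop the returned coroutine must be awaited


asyncio.run(main())
```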
async def invoke_async(self, inputs: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Execute agent asynchronously with given inputs.
|
||||
|
||||
This method is designed for use within async contexts, such as when
|
||||
the agent is called from within an async Flow method. It uses
|
||||
kickoff_async() directly instead of running in a separate thread.
|
||||
|
||||
Args:
|
||||
inputs: Input dictionary containing prompt variables.
|
||||
|
||||
@@ -492,7 +586,8 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
inputs.get("ask_for_human_input", False)
|
||||
)
|
||||
|
||||
self.kickoff()
|
||||
# Use async kickoff directly since we're already in an async context
|
||||
await self.kickoff_async()
|
||||
|
||||
formatted_answer = self.state.current_answer
|
||||
|
||||
@@ -583,11 +678,14 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
if self.agent is None:
|
||||
raise ValueError("Agent cannot be None")
|
||||
|
||||
if self.task is None:
|
||||
return
|
||||
|
||||
crewai_event_bus.emit(
|
||||
self.agent,
|
||||
AgentLogsStartedEvent(
|
||||
agent_role=self.agent.role,
|
||||
task_description=(self.task.description if self.task else "Not Found"),
|
||||
task_description=self.task.description,
|
||||
verbose=self.agent.verbose
|
||||
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
|
||||
),
|
||||
@@ -621,10 +719,12 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
result: Agent's final output.
|
||||
human_feedback: Optional feedback from human.
|
||||
"""
|
||||
# Early return if no crew (standalone mode)
|
||||
if self.crew is None:
|
||||
return
|
||||
|
||||
agent_id = str(self.agent.id)
|
||||
train_iteration = (
|
||||
getattr(self.crew, "_train_iteration", None) if self.crew else None
|
||||
)
|
||||
train_iteration = getattr(self.crew, "_train_iteration", None)
|
||||
|
||||
if train_iteration is None or not isinstance(train_iteration, int):
|
||||
train_error = Text()
|
||||
@@ -806,3 +906,7 @@ class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
requiring arbitrary_types_allowed=True.
|
||||
"""
|
||||
return core_schema.any_schema()
|
||||
|
||||
|
||||
# Backward compatibility alias (deprecated)
|
||||
CrewAgentExecutorFlow = AgentExecutor
|
||||
@@ -73,6 +73,7 @@ from crewai.flow.utils import (
|
||||
is_simple_flow_condition,
|
||||
)
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.flow.async_feedback.types import PendingFeedbackContext
|
||||
from crewai.flow.human_feedback import HumanFeedbackResult
|
||||
@@ -519,6 +520,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
self._methods: dict[FlowMethodName, FlowMethod[Any, Any]] = {}
|
||||
self._method_execution_counts: dict[FlowMethodName, int] = {}
|
||||
self._pending_and_listeners: dict[PendingListenerKey, set[FlowMethodName]] = {}
|
||||
self._fired_or_listeners: set[FlowMethodName] = (
|
||||
set()
|
||||
) # Track OR listeners that already fired
|
||||
self._method_outputs: list[Any] = [] # list to store all method outputs
|
||||
self._completed_methods: set[FlowMethodName] = (
|
||||
set()
|
||||
@@ -570,7 +574,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
flow_id: str,
|
||||
persistence: FlowPersistence | None = None,
|
||||
**kwargs: Any,
|
||||
) -> "Flow[Any]":
|
||||
) -> Flow[Any]:
|
||||
"""Create a Flow instance from a pending feedback state.
|
||||
|
||||
This classmethod is used to restore a flow that was paused waiting
|
||||
@@ -631,7 +635,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
return instance
|
||||
|
||||
@property
|
||||
def pending_feedback(self) -> "PendingFeedbackContext | None":
|
||||
def pending_feedback(self) -> PendingFeedbackContext | None:
|
||||
"""Get the pending feedback context if this flow is waiting for feedback.
|
||||
|
||||
Returns:
|
||||
@@ -716,9 +720,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
Raises:
|
||||
ValueError: If no pending feedback context exists
|
||||
"""
|
||||
from crewai.flow.human_feedback import HumanFeedbackResult
|
||||
from datetime import datetime
|
||||
|
||||
from crewai.flow.human_feedback import HumanFeedbackResult
|
||||
|
||||
if self._pending_feedback_context is None:
|
||||
raise ValueError(
|
||||
"No pending feedback context. Use from_pending() to restore a paused flow."
|
||||
@@ -1295,6 +1300,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
self._completed_methods.clear()
|
||||
self._method_outputs.clear()
|
||||
self._pending_and_listeners.clear()
|
||||
self._fired_or_listeners.clear()
|
||||
else:
|
||||
# We're restoring from persistence, set the flag
|
||||
self._is_execution_resuming = True
|
||||
@@ -1346,9 +1352,26 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
self._initialize_state(inputs)
|
||||
|
||||
try:
|
||||
# Determine which start methods to execute at kickoff
|
||||
# Conditional start methods (with __trigger_methods__) are only triggered by their conditions
|
||||
# UNLESS there are no unconditional starts (then all starts run as entry points)
|
||||
unconditional_starts = [
|
||||
start_method
|
||||
for start_method in self._start_methods
|
||||
if not getattr(
|
||||
self._methods.get(start_method), "__trigger_methods__", None
|
||||
)
|
||||
]
|
||||
# If there are unconditional starts, only run those at kickoff
|
||||
# If there are NO unconditional starts, run all starts (including conditional ones)
|
||||
starts_to_execute = (
|
||||
unconditional_starts
|
||||
if unconditional_starts
|
||||
else self._start_methods
|
||||
)
|
||||
tasks = [
|
||||
self._execute_start_method(start_method)
|
||||
for start_method in self._start_methods
|
||||
for start_method in starts_to_execute
|
||||
]
|
||||
await asyncio.gather(*tasks)
|
||||
except Exception as e:
|
||||
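The kickoff change above means a start method that also carries trigger conditions is no longer run as an entry point whenever unconditional start methods exist. A hedged sketch of the distinction, with an invented flow shape (it assumes, as the diff describes, that `start()` accepts a trigger condition such as a router result):

```python
from crewai.flow.flow import Flow, listen, router, start


class ExampleFlow(Flow):
    @start()
    def begin(self):
        # Unconditional start: always runs at kickoff().
        return "begin"

    @router(begin)
    def decide(self):
        return "retry"

    @start("retry")
    def begin_again(self):
        # Conditional start: with an unconditional start present, this only runs
        # when the router emits "retry", not as a second entry point at kickoff().
        return "begin_again"

    @listen(begin_again)
    def done(self, value):
        return f"finished after {value}"


print(ExampleFlow().kickoff())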
@@ -1481,6 +1504,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
return
|
||||
# For cyclic flows, clear from completed to allow re-execution
|
||||
self._completed_methods.discard(start_method_name)
|
||||
# Also clear fired OR listeners to allow them to fire again in new cycle
|
||||
self._fired_or_listeners.clear()
|
||||
|
||||
method = self._methods[start_method_name]
|
||||
enhanced_method = self._inject_trigger_payload_for_start_method(method)
|
||||
@@ -1503,11 +1528,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
if self.last_human_feedback is not None
|
||||
else result
|
||||
)
|
||||
tasks = [
|
||||
self._execute_single_listener(listener_name, listener_result)
|
||||
for listener_name in listeners_for_result
|
||||
]
|
||||
await asyncio.gather(*tasks)
|
||||
# Execute listeners sequentially to prevent race conditions on shared state
|
||||
for listener_name in listeners_for_result:
|
||||
await self._execute_single_listener(listener_name, listener_result)
|
||||
else:
|
||||
await self._execute_listeners(start_method_name, result)
|
||||
|
||||
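The switch from asyncio.gather to a plain loop means listeners sharing a trigger now run one after another instead of concurrently, so writes to shared flow state cannot interleave. A short self-contained sketch of the scheduling difference (none of this is crewAI code):

```python
import asyncio

counter = {"value": 0}


async def bump(name: str) -> None:
    before = counter["value"]
    await asyncio.sleep(0)          # yield point where interleaving can occur
    counter["value"] = before + 1
    print(name, counter["value"])


async def concurrent() -> None:
    counter["value"] = 0
    await asyncio.gather(bump("a"), bump("b"))   # both read 0 -> lost update, final value 1


async def sequential() -> None:
    counter["value"] = 0
    for name in ("a", "b"):                      # mirrors the new listener loop
        await bump(name)                         # value reliably reaches 2


asyncio.run(concurrent())
asyncio.run(sequential())
```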
@@ -1573,11 +1596,19 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
if future:
|
||||
self._event_futures.append(future)
|
||||
|
||||
result = (
|
||||
await method(*args, **kwargs)
|
||||
if asyncio.iscoroutinefunction(method)
|
||||
else method(*args, **kwargs)
|
||||
)
|
||||
if asyncio.iscoroutinefunction(method):
|
||||
result = await method(*args, **kwargs)
|
||||
else:
|
||||
# Run sync methods in thread pool for isolation
|
||||
# This allows Agent.kickoff() to work synchronously inside Flow methods
|
||||
import contextvars
|
||||
|
||||
ctx = contextvars.copy_context()
|
||||
result = await asyncio.to_thread(ctx.run, method, *args, **kwargs)
|
||||
|
||||
# Auto-await coroutines returned from sync methods (enables AgentExecutor pattern)
|
||||
if asyncio.iscoroutine(result):
|
||||
result = await result
|
||||
|
||||
self._method_outputs.append(result)
|
||||
self._method_execution_counts[method_name] = (
|
||||
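Two behaviours are introduced in the hunk above: synchronous flow methods are run via asyncio.to_thread with a copied contextvars context so they do not block the loop, and if such a method returns a coroutine (as Agent.kickoff() does inside a Flow) that coroutine is awaited automatically. A standalone sketch of that dispatch, with invented method names:

```python
import asyncio
import contextvars


async def _async_work() -> str:
    await asyncio.sleep(0)
    return "awaited result"


def sync_method():
    # A synchronous flow method that happens to return a coroutine (the auto-async pattern).
    return _async_work()


async def execute(method):
    """Dispatch mirroring the change above; names here are illustrative only."""
    if asyncio.iscoroutinefunction(method):
        result = await method()
    else:
        # Run the sync method off the event loop, preserving context variables.
        ctx = contextvars.copy_context()
        result = await asyncio.to_thread(ctx.run, method)
    # Auto-await coroutines returned from sync methods.
    if asyncio.iscoroutine(result):
        result = await result
    return result


print(asyncio.run(execute(sync_method)))  # -> "awaited result"
```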
@@ -1724,11 +1755,11 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
listener_result = router_result_to_feedback.get(
|
||||
str(current_trigger), result
|
||||
)
|
||||
tasks = [
|
||||
self._execute_single_listener(listener_name, listener_result)
|
||||
for listener_name in listeners_triggered
|
||||
]
|
||||
await asyncio.gather(*tasks)
|
||||
# Execute listeners sequentially to prevent race conditions on shared state
|
||||
for listener_name in listeners_triggered:
|
||||
await self._execute_single_listener(
|
||||
listener_name, listener_result
|
||||
)
|
||||
|
||||
if current_trigger in router_results:
|
||||
# Find start methods triggered by this router result
|
||||
@@ -1745,14 +1776,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
should_trigger = current_trigger in all_methods
|
||||
|
||||
if should_trigger:
|
||||
# Only execute if this is a cycle (method was already completed)
|
||||
# Execute conditional start method triggered by router result
|
||||
if method_name in self._completed_methods:
|
||||
# For router-triggered start methods in cycles, temporarily clear resumption flag
|
||||
# to allow cyclic execution
|
||||
# For cyclic re-execution, temporarily clear resumption flag
|
||||
was_resuming = self._is_execution_resuming
|
||||
self._is_execution_resuming = False
|
||||
await self._execute_start_method(method_name)
|
||||
self._is_execution_resuming = was_resuming
|
||||
else:
|
||||
# First-time execution of conditional start
|
||||
await self._execute_start_method(method_name)
|
||||
|
||||
def _evaluate_condition(
|
||||
self,
|
||||
@@ -1850,8 +1883,21 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
condition_type, methods = condition_data
|
||||
|
||||
if condition_type == OR_CONDITION:
|
||||
if trigger_method in methods:
|
||||
triggered.append(listener_name)
|
||||
# Only trigger multi-source OR listeners (or_(A, B, C)) once - skip if already fired
|
||||
# Simple single-method listeners fire every time their trigger occurs
|
||||
# Routers also fire every time - they're decision points
|
||||
has_multiple_triggers = len(methods) > 1
|
||||
should_check_fired = has_multiple_triggers and not is_router
|
||||
|
||||
if (
|
||||
not should_check_fired
|
||||
or listener_name not in self._fired_or_listeners
|
||||
):
|
||||
if trigger_method in methods:
|
||||
triggered.append(listener_name)
|
||||
# Only track multi-source OR listeners (not single-method or routers)
|
||||
if should_check_fired:
|
||||
self._fired_or_listeners.add(listener_name)
|
||||
elif condition_type == AND_CONDITION:
|
||||
pending_key = PendingListenerKey(listener_name)
|
||||
if pending_key not in self._pending_and_listeners:
|
||||
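With this change, a listener declared over multiple sources with or_() fires only the first time any of its sources completes, while single-trigger listeners and routers keep firing every time. A hedged sketch of the intended semantics (the flow shape is invented for illustration):

```python
from crewai.flow.flow import Flow, listen, or_, start


class FanInFlow(Flow):
    @start()
    def first(self):
        return "first"

    @start()
    def second(self):
        return "second"

    @listen(or_(first, second))
    def merge(self, value):
        # Multi-source OR listener: now expected to run once per kickoff,
        # triggered by whichever of first/second finishes first.
        print("merge fired with", value)
        return value


FanInFlow().kickoff()
```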
@@ -1864,10 +1910,26 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
self._pending_and_listeners.pop(pending_key, None)
|
||||
|
||||
elif is_flow_condition_dict(condition_data):
|
||||
# For complex conditions, check if top-level is OR and track accordingly
|
||||
top_level_type = condition_data.get("type", OR_CONDITION)
|
||||
is_or_based = top_level_type == OR_CONDITION
|
||||
|
||||
# Only track multi-source OR conditions (multiple sub-conditions), not routers
|
||||
sub_conditions = condition_data.get("conditions", [])
|
||||
has_multiple_triggers = is_or_based and len(sub_conditions) > 1
|
||||
should_check_fired = has_multiple_triggers and not is_router
|
||||
|
||||
# Skip compound OR-based listeners that have already fired
|
||||
if should_check_fired and listener_name in self._fired_or_listeners:
|
||||
continue
|
||||
|
||||
if self._evaluate_condition(
|
||||
condition_data, trigger_method, listener_name
|
||||
):
|
||||
triggered.append(listener_name)
|
||||
# Track compound OR-based listeners so they only fire once
|
||||
if should_check_fired:
|
||||
self._fired_or_listeners.add(listener_name)
|
||||
|
||||
return triggered
|
||||
|
||||
@@ -1896,9 +1958,22 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
if self._is_execution_resuming:
|
||||
# During resumption, skip execution but continue listeners
|
||||
await self._execute_listeners(listener_name, None)
|
||||
|
||||
# For routers, also check if any conditional starts they triggered are completed
|
||||
# If so, continue their chains
|
||||
if listener_name in self._routers:
|
||||
for start_method_name in self._start_methods:
|
||||
if (
|
||||
start_method_name in self._listeners
|
||||
and start_method_name in self._completed_methods
|
||||
):
|
||||
# This conditional start was executed, continue its chain
|
||||
await self._execute_start_method(start_method_name)
|
||||
return
|
||||
# For cyclic flows, clear from completed to allow re-execution
|
||||
self._completed_methods.discard(listener_name)
|
||||
# Also clear from fired OR listeners for cyclic flows
|
||||
self._fired_or_listeners.discard(listener_name)
|
||||
|
||||
try:
|
||||
method = self._methods[listener_name]
|
||||
@@ -1931,11 +2006,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
|
||||
if self.last_human_feedback is not None
|
||||
else listener_result
|
||||
)
|
||||
tasks = [
|
||||
self._execute_single_listener(name, feedback_result)
|
||||
for name in listeners_for_result
|
||||
]
|
||||
await asyncio.gather(*tasks)
|
||||
# Execute listeners sequentially to prevent race conditions on shared state
|
||||
for name in listeners_for_result:
|
||||
await self._execute_single_listener(name, feedback_result)
|
||||
|
||||
except Exception as e:
|
||||
# Don't log HumanFeedbackPending as an error - it's expected control flow
|
||||
|
||||
@@ -10,6 +10,7 @@ from typing import (
    get_origin,
)
import uuid
import warnings

from pydantic import (
    UUID4,
@@ -80,6 +81,11 @@ class LiteAgent(FlowTrackable, BaseModel):
    """
    A lightweight agent that can process messages and use tools.

    .. deprecated::
        LiteAgent is deprecated and will be removed in a future version.
        Use ``Agent().kickoff(messages)`` instead, which provides the same
        functionality with additional features like memory and knowledge support.

    This agent is simpler than the full Agent class, focusing on direct execution
    rather than task delegation. It's designed to be used for simple interactions
    where a full crew is not needed.
@@ -164,6 +170,18 @@ class LiteAgent(FlowTrackable, BaseModel):
        default_factory=get_after_llm_call_hooks
    )

    @model_validator(mode="after")
    def emit_deprecation_warning(self) -> Self:
        """Emit deprecation warning for LiteAgent usage."""
        warnings.warn(
            "LiteAgent is deprecated and will be removed in a future version. "
            "Use Agent().kickoff(messages) instead, which provides the same "
            "functionality with additional features like memory and knowledge support.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self

    @model_validator(mode="after")
    def setup_llm(self) -> Self:
        """Set up the LLM and other components after initialization."""
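Given the deprecation above, existing LiteAgent call sites migrate to the full Agent. A hedged before/after sketch (the LiteAgent import path is an assumption for illustration; role, goal, and message text are placeholders):

```python
import warnings

from crewai import LLM, Agent
from crewai.lite_agent import LiteAgent  # assumed import path for the deprecated class

# Before: constructing LiteAgent now emits a DeprecationWarning via the model validator.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    LiteAgent(role="Helper", goal="Answer questions", backstory="Test", llm=LLM(model="gpt-4o-mini"))
assert any(issubclass(w.category, DeprecationWarning) for w in caught)

# After: the recommended replacement keeps the same kickoff-style interface
# and adds memory and knowledge support.
agent = Agent(role="Helper", goal="Answer questions", backstory="Test", llm=LLM(model="gpt-4o-mini"))
result = agent.kickoff("What is 2+2? Reply with just the number.")
print(result.raw)
```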
@@ -1,5 +1,6 @@
from __future__ import annotations

import asyncio
from collections.abc import Callable, Sequence
import json
import re
@@ -54,6 +55,23 @@ console = Console()
_MULTIPLE_NEWLINES: Final[re.Pattern[str]] = re.compile(r"\n+")


def is_inside_event_loop() -> bool:
    """Check if code is currently running inside an asyncio event loop.

    This is used to detect when code is being called from within an async context
    (e.g., inside a Flow). In such cases, callers should return a coroutine
    instead of executing synchronously to avoid nested event loop errors.

    Returns:
        True if inside a running event loop, False otherwise.
    """
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False
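For reference, a tiny sketch of the helper's behaviour at the two call sites it distinguishes, assuming this branch's `crewai.utilities.agent_utils` export:

```python
import asyncio

from crewai.utilities.agent_utils import is_inside_event_loop

print(is_inside_event_loop())  # False: no event loop running at module level


async def main() -> None:
    print(is_inside_event_loop())  # True: called from inside asyncio.run()


asyncio.run(main())
```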
def parse_tools(tools: list[BaseTool]) -> list[CrewStructuredTool]:
    """Parse tools to be used for the task.

@@ -1,4 +1,4 @@
|
||||
"""Unit tests for CrewAgentExecutorFlow.
|
||||
"""Unit tests for AgentExecutor.
|
||||
|
||||
Tests the Flow-based agent executor implementation including state management,
|
||||
flow methods, routing logic, and error handling.
|
||||
@@ -8,9 +8,9 @@ from unittest.mock import Mock, patch
|
||||
|
||||
import pytest
|
||||
|
||||
from crewai.experimental.crew_agent_executor_flow import (
|
||||
from crewai.experimental.agent_executor import (
|
||||
AgentReActState,
|
||||
CrewAgentExecutorFlow,
|
||||
AgentExecutor,
|
||||
)
|
||||
from crewai.agents.parser import AgentAction, AgentFinish
|
||||
|
||||
@@ -43,8 +43,8 @@ class TestAgentReActState:
|
||||
assert state.ask_for_human_input is True
|
||||
|
||||
|
||||
class TestCrewAgentExecutorFlow:
|
||||
"""Test CrewAgentExecutorFlow class."""
|
||||
class TestAgentExecutor:
|
||||
"""Test AgentExecutor class."""
|
||||
|
||||
@pytest.fixture
|
||||
def mock_dependencies(self):
|
||||
@@ -87,8 +87,8 @@ class TestCrewAgentExecutorFlow:
|
||||
}
|
||||
|
||||
def test_executor_initialization(self, mock_dependencies):
|
||||
"""Test CrewAgentExecutorFlow initialization."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
"""Test AgentExecutor initialization."""
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
|
||||
assert executor.llm == mock_dependencies["llm"]
|
||||
assert executor.task == mock_dependencies["task"]
|
||||
@@ -100,9 +100,9 @@ class TestCrewAgentExecutorFlow:
|
||||
def test_initialize_reasoning(self, mock_dependencies):
|
||||
"""Test flow entry point."""
|
||||
with patch.object(
|
||||
CrewAgentExecutorFlow, "_show_start_logs"
|
||||
AgentExecutor, "_show_start_logs"
|
||||
) as mock_show_start:
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
result = executor.initialize_reasoning()
|
||||
|
||||
assert result == "initialized"
|
||||
@@ -110,7 +110,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_check_max_iterations_not_reached(self, mock_dependencies):
|
||||
"""Test routing when iterations < max."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.iterations = 5
|
||||
|
||||
result = executor.check_max_iterations()
|
||||
@@ -118,7 +118,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_check_max_iterations_reached(self, mock_dependencies):
|
||||
"""Test routing when iterations >= max."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.iterations = 10
|
||||
|
||||
result = executor.check_max_iterations()
|
||||
@@ -126,7 +126,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_route_by_answer_type_action(self, mock_dependencies):
|
||||
"""Test routing for AgentAction."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.current_answer = AgentAction(
|
||||
thought="thinking", tool="search", tool_input="query", text="action text"
|
||||
)
|
||||
@@ -136,7 +136,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_route_by_answer_type_finish(self, mock_dependencies):
|
||||
"""Test routing for AgentFinish."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.current_answer = AgentFinish(
|
||||
thought="final thoughts", output="Final answer", text="complete"
|
||||
)
|
||||
@@ -146,7 +146,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_continue_iteration(self, mock_dependencies):
|
||||
"""Test iteration continuation."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
|
||||
result = executor.continue_iteration()
|
||||
|
||||
@@ -154,8 +154,8 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_finalize_success(self, mock_dependencies):
|
||||
"""Test finalize with valid AgentFinish."""
|
||||
with patch.object(CrewAgentExecutorFlow, "_show_logs") as mock_show_logs:
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
with patch.object(AgentExecutor, "_show_logs") as mock_show_logs:
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.current_answer = AgentFinish(
|
||||
thought="final thinking", output="Done", text="complete"
|
||||
)
|
||||
@@ -168,7 +168,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_finalize_failure(self, mock_dependencies):
|
||||
"""Test finalize skips when given AgentAction instead of AgentFinish."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.current_answer = AgentAction(
|
||||
thought="thinking", tool="search", tool_input="query", text="action text"
|
||||
)
|
||||
@@ -181,7 +181,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_format_prompt(self, mock_dependencies):
|
||||
"""Test prompt formatting."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
inputs = {"input": "test input", "tool_names": "tool1, tool2", "tools": "desc"}
|
||||
|
||||
result = executor._format_prompt("Prompt {input} {tool_names} {tools}", inputs)
|
||||
@@ -192,18 +192,18 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
def test_is_training_mode_false(self, mock_dependencies):
|
||||
"""Test training mode detection when not in training."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
assert executor._is_training_mode() is False
|
||||
|
||||
def test_is_training_mode_true(self, mock_dependencies):
|
||||
"""Test training mode detection when in training."""
|
||||
mock_dependencies["crew"]._train = True
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
assert executor._is_training_mode() is True
|
||||
|
||||
def test_append_message_to_state(self, mock_dependencies):
|
||||
"""Test message appending to state."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
initial_count = len(executor.state.messages)
|
||||
|
||||
executor._append_message_to_state("test message")
|
||||
@@ -216,7 +216,7 @@ class TestCrewAgentExecutorFlow:
|
||||
callback = Mock()
|
||||
mock_dependencies["step_callback"] = callback
|
||||
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
answer = AgentFinish(thought="thinking", output="test", text="final")
|
||||
|
||||
executor._invoke_step_callback(answer)
|
||||
@@ -226,14 +226,14 @@ class TestCrewAgentExecutorFlow:
|
||||
def test_invoke_step_callback_none(self, mock_dependencies):
|
||||
"""Test step callback when none provided."""
|
||||
mock_dependencies["step_callback"] = None
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
|
||||
# Should not raise error
|
||||
executor._invoke_step_callback(
|
||||
AgentFinish(thought="thinking", output="test", text="final")
|
||||
)
|
||||
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.handle_output_parser_exception")
|
||||
@patch("crewai.experimental.agent_executor.handle_output_parser_exception")
|
||||
def test_recover_from_parser_error(
|
||||
self, mock_handle_exception, mock_dependencies
|
||||
):
|
||||
@@ -242,7 +242,7 @@ class TestCrewAgentExecutorFlow:
|
||||
|
||||
mock_handle_exception.return_value = None
|
||||
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor._last_parser_error = OutputParserError("test error")
|
||||
initial_iterations = executor.state.iterations
|
||||
|
||||
@@ -252,12 +252,12 @@ class TestCrewAgentExecutorFlow:
|
||||
assert executor.state.iterations == initial_iterations + 1
|
||||
mock_handle_exception.assert_called_once()
|
||||
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.handle_context_length")
|
||||
@patch("crewai.experimental.agent_executor.handle_context_length")
|
||||
def test_recover_from_context_length(
|
||||
self, mock_handle_context, mock_dependencies
|
||||
):
|
||||
"""Test recovery from context length error."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor._last_context_error = Exception("context too long")
|
||||
initial_iterations = executor.state.iterations
|
||||
|
||||
@@ -270,16 +270,16 @@ class TestCrewAgentExecutorFlow:
|
||||
def test_use_stop_words_property(self, mock_dependencies):
|
||||
"""Test use_stop_words property."""
|
||||
mock_dependencies["llm"].supports_stop_words.return_value = True
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
assert executor.use_stop_words is True
|
||||
|
||||
mock_dependencies["llm"].supports_stop_words.return_value = False
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
assert executor.use_stop_words is False
|
||||
|
||||
def test_compatibility_properties(self, mock_dependencies):
|
||||
"""Test compatibility properties for mixin."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.messages = [{"role": "user", "content": "test"}]
|
||||
executor.state.iterations = 5
|
||||
|
||||
@@ -321,8 +321,8 @@ class TestFlowErrorHandling:
|
||||
"tools_handler": Mock(),
|
||||
}
|
||||
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.get_llm_response")
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.enforce_rpm_limit")
|
||||
@patch("crewai.experimental.agent_executor.get_llm_response")
|
||||
@patch("crewai.experimental.agent_executor.enforce_rpm_limit")
|
||||
def test_call_llm_parser_error(
|
||||
self, mock_enforce_rpm, mock_get_llm, mock_dependencies
|
||||
):
|
||||
@@ -332,15 +332,15 @@ class TestFlowErrorHandling:
|
||||
mock_enforce_rpm.return_value = None
|
||||
mock_get_llm.side_effect = OutputParserError("parse failed")
|
||||
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
result = executor.call_llm_and_parse()
|
||||
|
||||
assert result == "parser_error"
|
||||
assert executor._last_parser_error is not None
|
||||
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.get_llm_response")
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.enforce_rpm_limit")
|
||||
@patch("crewai.experimental.crew_agent_executor_flow.is_context_length_exceeded")
|
||||
@patch("crewai.experimental.agent_executor.get_llm_response")
|
||||
@patch("crewai.experimental.agent_executor.enforce_rpm_limit")
|
||||
@patch("crewai.experimental.agent_executor.is_context_length_exceeded")
|
||||
def test_call_llm_context_error(
|
||||
self,
|
||||
mock_is_context_exceeded,
|
||||
@@ -353,7 +353,7 @@ class TestFlowErrorHandling:
|
||||
mock_get_llm.side_effect = Exception("context length")
|
||||
mock_is_context_exceeded.return_value = True
|
||||
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
result = executor.call_llm_and_parse()
|
||||
|
||||
assert result == "context_error"
|
||||
@@ -397,10 +397,10 @@ class TestFlowInvoke:
|
||||
"tools_handler": Mock(),
|
||||
}
|
||||
|
||||
@patch.object(CrewAgentExecutorFlow, "kickoff")
|
||||
@patch.object(CrewAgentExecutorFlow, "_create_short_term_memory")
|
||||
@patch.object(CrewAgentExecutorFlow, "_create_long_term_memory")
|
||||
@patch.object(CrewAgentExecutorFlow, "_create_external_memory")
|
||||
@patch.object(AgentExecutor, "kickoff")
|
||||
@patch.object(AgentExecutor, "_create_short_term_memory")
|
||||
@patch.object(AgentExecutor, "_create_long_term_memory")
|
||||
@patch.object(AgentExecutor, "_create_external_memory")
|
||||
def test_invoke_success(
|
||||
self,
|
||||
mock_external_memory,
|
||||
@@ -410,7 +410,7 @@ class TestFlowInvoke:
|
||||
mock_dependencies,
|
||||
):
|
||||
"""Test successful invoke without human feedback."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
|
||||
# Mock kickoff to set the final answer in state
|
||||
def mock_kickoff_side_effect():
|
||||
@@ -429,10 +429,10 @@ class TestFlowInvoke:
|
||||
mock_long_term_memory.assert_called_once()
|
||||
mock_external_memory.assert_called_once()
|
||||
|
||||
@patch.object(CrewAgentExecutorFlow, "kickoff")
|
||||
@patch.object(AgentExecutor, "kickoff")
|
||||
def test_invoke_failure_no_agent_finish(self, mock_kickoff, mock_dependencies):
|
||||
"""Test invoke fails without AgentFinish."""
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
executor.state.current_answer = AgentAction(
|
||||
thought="thinking", tool="test", tool_input="test", text="action text"
|
||||
)
|
||||
@@ -442,10 +442,10 @@ class TestFlowInvoke:
|
||||
with pytest.raises(RuntimeError, match="without reaching a final answer"):
|
||||
executor.invoke(inputs)
|
||||
|
||||
@patch.object(CrewAgentExecutorFlow, "kickoff")
|
||||
@patch.object(CrewAgentExecutorFlow, "_create_short_term_memory")
|
||||
@patch.object(CrewAgentExecutorFlow, "_create_long_term_memory")
|
||||
@patch.object(CrewAgentExecutorFlow, "_create_external_memory")
|
||||
@patch.object(AgentExecutor, "kickoff")
|
||||
@patch.object(AgentExecutor, "_create_short_term_memory")
|
||||
@patch.object(AgentExecutor, "_create_long_term_memory")
|
||||
@patch.object(AgentExecutor, "_create_external_memory")
|
||||
def test_invoke_with_system_prompt(
|
||||
self,
|
||||
mock_external_memory,
|
||||
@@ -459,7 +459,7 @@ class TestFlowInvoke:
|
||||
"system": "System: {input}",
|
||||
"user": "User: {input} {tool_names} {tools}",
|
||||
}
|
||||
executor = CrewAgentExecutorFlow(**mock_dependencies)
|
||||
executor = AgentExecutor(**mock_dependencies)
|
||||
|
||||
def mock_kickoff_side_effect():
|
||||
executor.state.current_answer = AgentFinish(
|
||||
@@ -72,62 +72,53 @@ class ResearchResult(BaseModel):
|
||||
|
||||
@pytest.mark.vcr()
|
||||
@pytest.mark.parametrize("verbose", [True, False])
|
||||
def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose):
|
||||
"""Test that LiteAgent is created with the correct parameters when Agent.kickoff() is called."""
|
||||
def test_agent_kickoff_preserves_parameters(verbose):
|
||||
"""Test that Agent.kickoff() uses the correct parameters from the Agent."""
|
||||
# Create a test agent with specific parameters
|
||||
llm = LLM(model="gpt-4o-mini")
|
||||
mock_llm = Mock(spec=LLM)
|
||||
mock_llm.call.return_value = "Final Answer: Test response"
|
||||
mock_llm.stop = []
|
||||
|
||||
from crewai.types.usage_metrics import UsageMetrics
|
||||
|
||||
mock_usage_metrics = UsageMetrics(
|
||||
total_tokens=100,
|
||||
prompt_tokens=50,
|
||||
completion_tokens=50,
|
||||
cached_prompt_tokens=0,
|
||||
successful_requests=1,
|
||||
)
|
||||
mock_llm.get_token_usage_summary.return_value = mock_usage_metrics
|
||||
|
||||
custom_tools = [WebSearchTool(), CalculatorTool()]
|
||||
max_iter = 10
|
||||
max_execution_time = 300
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test Goal",
|
||||
backstory="Test Backstory",
|
||||
llm=llm,
|
||||
llm=mock_llm,
|
||||
tools=custom_tools,
|
||||
max_iter=max_iter,
|
||||
max_execution_time=max_execution_time,
|
||||
verbose=verbose,
|
||||
)
|
||||
|
||||
# Create a mock to capture the created LiteAgent
|
||||
created_lite_agent = None
|
||||
original_lite_agent = LiteAgent
|
||||
# Call kickoff and verify it works
|
||||
result = agent.kickoff("Test query")
|
||||
|
||||
# Define a mock LiteAgent class that captures its arguments
|
||||
class MockLiteAgent(original_lite_agent):
|
||||
def __init__(self, **kwargs):
|
||||
nonlocal created_lite_agent
|
||||
created_lite_agent = kwargs
|
||||
super().__init__(**kwargs)
|
||||
# Verify the agent was configured correctly
|
||||
assert agent.role == "Test Agent"
|
||||
assert agent.goal == "Test Goal"
|
||||
assert agent.backstory == "Test Backstory"
|
||||
assert len(agent.tools) == 2
|
||||
assert isinstance(agent.tools[0], WebSearchTool)
|
||||
assert isinstance(agent.tools[1], CalculatorTool)
|
||||
assert agent.max_iter == max_iter
|
||||
assert agent.verbose == verbose
|
||||
|
||||
# Patch the LiteAgent class
|
||||
monkeypatch.setattr("crewai.agent.core.LiteAgent", MockLiteAgent)
|
||||
|
||||
# Call kickoff to create the LiteAgent
|
||||
agent.kickoff("Test query")
|
||||
|
||||
# Verify all parameters were passed correctly
|
||||
assert created_lite_agent is not None
|
||||
assert created_lite_agent["role"] == "Test Agent"
|
||||
assert created_lite_agent["goal"] == "Test Goal"
|
||||
assert created_lite_agent["backstory"] == "Test Backstory"
|
||||
assert created_lite_agent["llm"] == llm
|
||||
assert len(created_lite_agent["tools"]) == 2
|
||||
assert isinstance(created_lite_agent["tools"][0], WebSearchTool)
|
||||
assert isinstance(created_lite_agent["tools"][1], CalculatorTool)
|
||||
assert created_lite_agent["max_iterations"] == max_iter
|
||||
assert created_lite_agent["max_execution_time"] == max_execution_time
|
||||
assert created_lite_agent["verbose"] == verbose
|
||||
assert created_lite_agent["response_format"] is None
|
||||
|
||||
# Test with a response_format
|
||||
class TestResponse(BaseModel):
|
||||
test_field: str
|
||||
|
||||
agent.kickoff("Test query", response_format=TestResponse)
|
||||
assert created_lite_agent["response_format"] == TestResponse
|
||||
# Verify kickoff returned a result
|
||||
assert result is not None
|
||||
assert result.raw is not None
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
@@ -310,7 +301,8 @@ def verify_agent_parent_flow(result, agent, flow):
|
||||
|
||||
|
||||
def test_sets_parent_flow_when_inside_flow():
|
||||
captured_agent = None
|
||||
"""Test that an Agent can be created and executed inside a Flow context."""
|
||||
captured_event = None
|
||||
|
||||
mock_llm = Mock(spec=LLM)
|
||||
mock_llm.call.return_value = "Test response"
|
||||
@@ -343,15 +335,17 @@ def test_sets_parent_flow_when_inside_flow():
|
||||
event_received = threading.Event()
|
||||
|
||||
@crewai_event_bus.on(LiteAgentExecutionStartedEvent)
|
||||
def capture_agent(source, event):
|
||||
nonlocal captured_agent
|
||||
captured_agent = source
|
||||
def capture_event(source, event):
|
||||
nonlocal captured_event
|
||||
captured_event = event
|
||||
event_received.set()
|
||||
|
||||
flow.kickoff()
|
||||
result = flow.kickoff()
|
||||
|
||||
assert event_received.wait(timeout=5), "Timeout waiting for agent execution event"
|
||||
assert captured_agent.parent_flow is flow
|
||||
assert captured_event is not None
|
||||
assert captured_event.agent_info["role"] == "Test Agent"
|
||||
assert result is not None
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
@@ -373,16 +367,14 @@ def test_guardrail_is_called_using_string():
|
||||
|
||||
@crewai_event_bus.on(LLMGuardrailStartedEvent)
|
||||
def capture_guardrail_started(source, event):
|
||||
assert isinstance(source, LiteAgent)
|
||||
assert source.original_agent == agent
|
||||
assert isinstance(source, Agent)
|
||||
with condition:
|
||||
guardrail_events["started"].append(event)
|
||||
condition.notify()
|
||||
|
||||
@crewai_event_bus.on(LLMGuardrailCompletedEvent)
|
||||
def capture_guardrail_completed(source, event):
|
||||
assert isinstance(source, LiteAgent)
|
||||
assert source.original_agent == agent
|
||||
assert isinstance(source, Agent)
|
||||
with condition:
|
||||
guardrail_events["completed"].append(event)
|
||||
condition.notify()
|
||||
@@ -683,3 +675,151 @@ def test_agent_kickoff_with_mcp_tools(mock_get_mcp_tools):
|
||||
|
||||
# Verify MCP tools were retrieved
|
||||
mock_get_mcp_tools.assert_called_once_with("https://mcp.exa.ai/mcp?api_key=test_exa_key&profile=research")
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Tests for LiteAgent inside Flow (magic auto-async pattern)
|
||||
# ============================================================================
|
||||
|
||||
from crewai.flow.flow import listen
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
def test_lite_agent_inside_flow_sync():
|
||||
"""Test that LiteAgent.kickoff() works magically inside a Flow.
|
||||
|
||||
This tests the "magic auto-async" pattern where calling agent.kickoff()
|
||||
from within a Flow automatically detects the event loop and returns a
|
||||
coroutine that the Flow framework awaits. Users don't need to use async/await.
|
||||
"""
|
||||
# Track execution
|
||||
execution_log = []
|
||||
|
||||
class TestFlow(Flow):
|
||||
@start()
|
||||
def run_agent(self):
|
||||
execution_log.append("flow_started")
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Answer questions",
|
||||
backstory="A helpful test assistant",
|
||||
llm=LLM(model="gpt-4o-mini"),
|
||||
verbose=False,
|
||||
)
|
||||
# Magic: just call kickoff() normally - it auto-detects Flow context
|
||||
result = agent.kickoff(messages="What is 2+2? Reply with just the number.")
|
||||
execution_log.append("agent_completed")
|
||||
return result
|
||||
|
||||
flow = TestFlow()
|
||||
result = flow.kickoff()
|
||||
|
||||
# Verify the flow executed successfully
|
||||
assert "flow_started" in execution_log
|
||||
assert "agent_completed" in execution_log
|
||||
assert result is not None
|
||||
assert isinstance(result, LiteAgentOutput)
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
def test_lite_agent_inside_flow_with_tools():
|
||||
"""Test that LiteAgent with tools works correctly inside a Flow."""
|
||||
class TestFlow(Flow):
|
||||
@start()
|
||||
def run_agent_with_tools(self):
|
||||
agent = Agent(
|
||||
role="Calculator Agent",
|
||||
goal="Perform calculations",
|
||||
backstory="A math expert",
|
||||
llm=LLM(model="gpt-4o-mini"),
|
||||
tools=[CalculatorTool()],
|
||||
verbose=False,
|
||||
)
|
||||
result = agent.kickoff(messages="Calculate 10 * 5")
|
||||
return result
|
||||
|
||||
flow = TestFlow()
|
||||
result = flow.kickoff()
|
||||
|
||||
assert result is not None
|
||||
assert isinstance(result, LiteAgentOutput)
|
||||
assert result.raw is not None
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
def test_multiple_agents_in_same_flow():
|
||||
"""Test that multiple LiteAgents can run sequentially in the same Flow."""
|
||||
class MultiAgentFlow(Flow):
|
||||
@start()
|
||||
def first_step(self):
|
||||
agent1 = Agent(
|
||||
role="First Agent",
|
||||
goal="Greet users",
|
||||
backstory="A friendly greeter",
|
||||
llm=LLM(model="gpt-4o-mini"),
|
||||
verbose=False,
|
||||
)
|
||||
return agent1.kickoff(messages="Say hello")
|
||||
|
||||
@listen(first_step)
|
||||
def second_step(self, first_result):
|
||||
agent2 = Agent(
|
||||
role="Second Agent",
|
||||
goal="Say goodbye",
|
||||
backstory="A polite farewell agent",
|
||||
llm=LLM(model="gpt-4o-mini"),
|
||||
verbose=False,
|
||||
)
|
||||
return agent2.kickoff(messages="Say goodbye")
|
||||
|
||||
flow = MultiAgentFlow()
|
||||
result = flow.kickoff()
|
||||
|
||||
assert result is not None
|
||||
assert isinstance(result, LiteAgentOutput)
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
def test_lite_agent_kickoff_async_inside_flow():
|
||||
"""Test that Agent.kickoff_async() works correctly from async Flow methods."""
|
||||
class AsyncAgentFlow(Flow):
|
||||
@start()
|
||||
async def async_agent_step(self):
|
||||
agent = Agent(
|
||||
role="Async Test Agent",
|
||||
goal="Answer questions asynchronously",
|
||||
backstory="An async helper",
|
||||
llm=LLM(model="gpt-4o-mini"),
|
||||
verbose=False,
|
||||
)
|
||||
result = await agent.kickoff_async(messages="What is 3+3?")
|
||||
return result
|
||||
|
||||
flow = AsyncAgentFlow()
|
||||
result = flow.kickoff()
|
||||
|
||||
assert result is not None
|
||||
assert isinstance(result, LiteAgentOutput)
|
||||
|
||||
|
||||
@pytest.mark.vcr()
|
||||
def test_lite_agent_standalone_still_works():
|
||||
"""Test that LiteAgent.kickoff() still works normally outside of a Flow.
|
||||
|
||||
This verifies that the magic auto-async pattern doesn't break standalone usage
|
||||
where there's no event loop running.
|
||||
"""
|
||||
agent = Agent(
|
||||
role="Standalone Agent",
|
||||
goal="Answer questions",
|
||||
backstory="A helpful assistant",
|
||||
llm=LLM(model="gpt-4o-mini"),
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
# This should work normally - no Flow, no event loop
|
||||
result = agent.kickoff(messages="What is 5+5? Reply with just the number.")
|
||||
|
||||
assert result is not None
|
||||
assert isinstance(result, LiteAgentOutput)
|
||||
assert result.raw is not None
|
||||
|
||||
@@ -0,0 +1,119 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. A helpful
|
||||
test assistant\nYour personal goal is: Answer questions\nTo give my best complete
|
||||
final answer to the task respond using the exact following format:\n\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\n\nI MUST use
|
||||
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
|
||||
What is 2+2? Reply with just the number.\n\nBegin! This is VERY important to
|
||||
you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '673'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-Cy7b0HjL79y39EkUcMLrRhPFe3XGj\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768444914,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
|
||||
Answer: 4\",\n \"refusal\": null,\n \"annotations\": []\n },\n
|
||||
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
|
||||
\ \"usage\": {\n \"prompt_tokens\": 136,\n \"completion_tokens\": 13,\n
|
||||
\ \"total_tokens\": 149,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_8bbc38b4db\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 15 Jan 2026 02:41:55 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '857'
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '341'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '358'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,255 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Calculator Agent. A math
|
||||
expert\nYour personal goal is: Perform calculations\nYou ONLY have access to
|
||||
the following tools, and should NEVER make up tools that are not listed here:\n\nTool
|
||||
Name: calculate\nTool Arguments: {\n \"properties\": {\n \"expression\":
|
||||
{\n \"title\": \"Expression\",\n \"type\": \"string\"\n }\n },\n \"required\":
|
||||
[\n \"expression\"\n ],\n \"title\": \"CalculatorToolSchema\",\n \"type\":
|
||||
\"object\",\n \"additionalProperties\": false\n}\nTool Description: Calculate
|
||||
the result of a mathematical expression.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [calculate], just the name, exactly as
|
||||
it''s written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Calculate 10 * 5\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1403'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-Cy7avghVPSpszLmlbHpwDQlWDoD6O\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768444909,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I need to calculate the expression
|
||||
10 * 5.\\nAction: calculate\\nAction Input: {\\\"expression\\\":\\\"10 * 5\\\"}\\nObservation:
|
||||
50\",\n \"refusal\": null,\n \"annotations\": []\n },\n
|
||||
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
|
||||
\ \"usage\": {\n \"prompt_tokens\": 291,\n \"completion_tokens\": 33,\n
|
||||
\ \"total_tokens\": 324,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 15 Jan 2026 02:41:49 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '939'
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '579'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '598'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Calculator Agent. A math
|
||||
expert\nYour personal goal is: Perform calculations\nYou ONLY have access to
|
||||
the following tools, and should NEVER make up tools that are not listed here:\n\nTool
|
||||
Name: calculate\nTool Arguments: {\n \"properties\": {\n \"expression\":
|
||||
{\n \"title\": \"Expression\",\n \"type\": \"string\"\n }\n },\n \"required\":
|
||||
[\n \"expression\"\n ],\n \"title\": \"CalculatorToolSchema\",\n \"type\":
|
||||
\"object\",\n \"additionalProperties\": false\n}\nTool Description: Calculate
|
||||
the result of a mathematical expression.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [calculate], just the name, exactly as
|
||||
it''s written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Calculate 10 * 5\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},{"role":"assistant","content":"Thought:
|
||||
I need to calculate the expression 10 * 5.\nAction: calculate\nAction Input:
|
||||
{\"expression\":\"10 * 5\"}\nObservation: The result of 10 * 5 is 50"}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1591'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-Cy7avDhDZCLvv8v2dh8ZQRrLdci6A\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768444909,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I now know the final answer.\\nFinal
|
||||
Answer: 50\",\n \"refusal\": null,\n \"annotations\": []\n },\n
|
||||
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
|
||||
\ \"usage\": {\n \"prompt_tokens\": 337,\n \"completion_tokens\": 14,\n
|
||||
\ \"total_tokens\": 351,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 15 Jan 2026 02:41:50 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '864'
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '429'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '457'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
@@ -0,0 +1,119 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Async Test Agent. An async
helper\nYour personal goal is: Answer questions asynchronously\nTo give my best
complete final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
What is 3+3?\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '657'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-Cy7atOGxtc4y3oYNI62WiQ0Vogsdv\",\n \"object\":
\"chat.completion\",\n \"created\": 1768444907,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: The sum of 3 + 3 is 6. Therefore, the outcome is that if you add three
and three together, you will arrive at the total of six.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
131,\n \"completion_tokens\": 46,\n \"total_tokens\": 177,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 02:41:48 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '983'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '944'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1192'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,119 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Standalone Agent. A helpful
assistant\nYour personal goal is: Answer questions\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
What is 5+5? Reply with just the number.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '674'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-Cy7azhPwUHQ0p5tdhxSAmLPoE8UgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1768444913,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: 10\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 136,\n \"completion_tokens\": 13,\n
\ \"total_tokens\": 149,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 02:41:54 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '858'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '455'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '583'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,239 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are First Agent. A friendly
|
||||
greeter\nYour personal goal is: Greet users\nTo give my best complete final
|
||||
answer to the task respond using the exact following format:\n\nThought: I now
|
||||
can give a great answer\nFinal Answer: Your final answer must be the great and
|
||||
the most complete as possible, it must be outcome described.\n\nI MUST use these
|
||||
formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task: Say
|
||||
hello\n\nBegin! This is VERY important to you, use the tools available and give
|
||||
your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '632'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-CyRKzgODZ9yn3F9OkaXsscLk2Ln3N\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768520801,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
|
||||
Answer: Hello! Welcome! I'm so glad to see you here. If you need any assistance
|
||||
or have any questions, feel free to ask. Have a wonderful day!\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
127,\n \"completion_tokens\": 43,\n \"total_tokens\": 170,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 15 Jan 2026 23:46:42 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '990'
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '880'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1160'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Second Agent. A polite
|
||||
farewell agent\nYour personal goal is: Say goodbye\nTo give my best complete
|
||||
final answer to the task respond using the exact following format:\n\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\n\nI MUST use
|
||||
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
|
||||
Say goodbye\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '640'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-CyRL1Ua2PkK5xXPp3KeF0AnGAk3JP\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768520803,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
|
||||
Answer: As we reach the end of our conversation, I want to express my gratitude
|
||||
for the time we've shared. It's been a pleasure assisting you, and I hope
|
||||
you found our interaction helpful and enjoyable. Remember, whenever you need
|
||||
assistance, I'm just a message away. Wishing you all the best in your future
|
||||
endeavors. Goodbye and take care!\",\n \"refusal\": null,\n \"annotations\":
|
||||
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 126,\n \"completion_tokens\":
|
||||
79,\n \"total_tokens\": 205,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Thu, 15 Jan 2026 23:46:44 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '1189'
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '1363'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1605'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
File diff suppressed because one or more lines are too long
@@ -1,456 +1,528 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"trace_id": "00000000-0000-0000-0000-000000000000", "execution_type": "crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null, "crew_name": "Unknown Crew", "flow_name": null, "crewai_version": "1.3.0", "privacy_level": "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count": 0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-11-05T22:19:56.074812+00:00"}}'
|
||||
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"You are Guardrail Agent.
|
||||
You are a expert at validating the output of a task. By providing effective
|
||||
feedback if the output is not valid.\\nYour personal goal is: Validate the output
|
||||
of the task\\nTo give my best complete final answer to the task respond using
|
||||
the exact following format:\\n\\nThought: I now can give a great answer\\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described.\\n\\nI MUST use these formats, my job depends
|
||||
on it!\"},{\"role\":\"user\",\"content\":\"\\nCurrent Task: \\n Ensure
|
||||
the following task result complies with the given guardrail.\\n\\n Task
|
||||
result:\\n \\n Lorem Ipsum is simply dummy text of the printing
|
||||
and typesetting industry. Lorem Ipsum has been the industry's standard dummy
|
||||
text ever\\n \\n\\n Guardrail:\\n Ensure the result has
|
||||
less than 10 words\\n\\n Your task:\\n - Confirm if the Task result
|
||||
complies with the guardrail.\\n - If not, provide clear feedback explaining
|
||||
what is wrong (e.g., by how much it violates the rule, or what specific part
|
||||
fails).\\n - Focus only on identifying issues \u2014 do not propose corrections.\\n
|
||||
\ - If the Task result complies with the guardrail, saying that is valid\\n
|
||||
\ \\n\\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\\n\\nThought:\"}],\"model\":\"gpt-4o\"}"
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate, zstd
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '434'
|
||||
Content-Type:
|
||||
- application/json
|
||||
User-Agent:
|
||||
- CrewAI-CLI/1.3.0
|
||||
X-Crewai-Version:
|
||||
- 1.3.0
|
||||
method: POST
|
||||
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/batches
|
||||
response:
|
||||
body:
|
||||
string: '{"error":"bad_credentials","message":"Bad credentials"}'
|
||||
headers:
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '55'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Wed, 05 Nov 2025 22:19:56 GMT
|
||||
cache-control:
|
||||
- no-store
|
||||
content-security-policy:
|
||||
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self'' ''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/ https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net https://js.hscollectedforms.net
|
||||
https://js.usemessages.com https://snap.licdn.com https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data: *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com; connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/* https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com https://api.hubspot.com
|
||||
https://forms.hscollectedforms.net https://api.hubapi.com https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509 https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self'' *.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com https://drive.google.com https://slides.google.com https://accounts.google.com https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/ https://www.youtube.com https://share.descript.com'
|
||||
expires:
|
||||
- '0'
|
||||
permissions-policy:
|
||||
- camera=(), microphone=(self), geolocation=()
|
||||
pragma:
|
||||
- no-cache
|
||||
referrer-policy:
|
||||
- strict-origin-when-cross-origin
|
||||
strict-transport-security:
|
||||
- max-age=63072000; includeSubDomains
|
||||
vary:
|
||||
- Accept
|
||||
x-content-type-options:
|
||||
- nosniff
|
||||
x-frame-options:
|
||||
- SAMEORIGIN
|
||||
x-permitted-cross-domain-policies:
|
||||
- none
|
||||
x-request-id:
|
||||
- 230c6cb5-92c7-448d-8c94-e5548a9f4259
|
||||
x-runtime:
|
||||
- '0.073220'
|
||||
x-xss-protection:
|
||||
- 1; mode=block
|
||||
status:
|
||||
code: 401
|
||||
message: Unauthorized
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Guardrail Agent. You are a expert at validating the output of a task. By providing effective feedback if the output is not valid.\nYour personal goal is: Validate the output of the task\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!Ensure your final answer strictly adheres to the following OpenAPI schema: {\n \"type\": \"json_schema\",\n \"json_schema\": {\n \"name\": \"LLMGuardrailResult\",\n \"strict\": true,\n \"schema\": {\n \"properties\": {\n \"valid\": {\n \"description\": \"Whether the task output complies with the guardrail\",\n \"title\": \"Valid\",\n \"type\": \"boolean\"\n },\n \"feedback\": {\n \"anyOf\":
|
||||
[\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"A feedback about the task output if it is not valid\",\n \"title\": \"Feedback\"\n }\n },\n \"required\": [\n \"valid\",\n \"feedback\"\n ],\n \"title\": \"LLMGuardrailResult\",\n \"type\": \"object\",\n \"additionalProperties\": false\n }\n }\n}\n\nDo not include the OpenAPI schema in the final output. Ensure the final output does not include any code block markers like ```json or ```python."},{"role":"user","content":"\n Ensure the following task result complies with the given guardrail.\n\n Task result:\n \n Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever\n \n\n Guardrail:\n Ensure
|
||||
the result has less than 10 words\n\n Your task:\n - Confirm if the Task result complies with the guardrail.\n - If not, provide clear feedback explaining what is wrong (e.g., by how much it violates the rule, or what specific part fails).\n - Focus only on identifying issues — do not propose corrections.\n - If the Task result complies with the guardrail, saying that is valid\n "}],"model":"gpt-4o"}'
|
||||
headers:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2452'
|
||||
- '1467'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-CYg96Riy2RJRxnBHvoROukymP9wvs\",\n \"object\": \"chat.completion\",\n \"created\": 1762381196,\n \"model\": \"gpt-4o-2024-08-06\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Thought: I need to check if the task result meets the requirement of having less than 10 words.\\n\\nFinal Answer: {\\n \\\"valid\\\": false,\\n \\\"feedback\\\": \\\"The task result contains more than 10 words, violating the guardrail. The text provided contains about 21 words.\\\"\\n}\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 489,\n \"completion_tokens\": 61,\n \"total_tokens\": 550,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\"\
|
||||
: 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_cbf1785567\"\n}\n"
|
||||
string: "{\n \"id\": \"chatcmpl-Cy7yHRYTZi8yzRbcODnKr92keLKCb\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768446357,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"The task result provided has more than
|
||||
10 words. I will count the words to verify this.\\n\\nThe task result is the
|
||||
following text:\\n\\\"Lorem Ipsum is simply dummy text of the printing and
|
||||
typesetting industry. Lorem Ipsum has been the industry's standard dummy text
|
||||
ever\\\"\\n\\nCounting the words:\\n\\n1. Lorem \\n2. Ipsum \\n3. is \\n4.
|
||||
simply \\n5. dummy \\n6. text \\n7. of \\n8. the \\n9. printing \\n10. and
|
||||
\\n11. typesetting \\n12. industry. \\n13. Lorem \\n14. Ipsum \\n15. has \\n16.
|
||||
been \\n17. the \\n18. industry's \\n19. standard \\n20. dummy \\n21. text
|
||||
\\n22. ever\\n\\nThe total word count is 22.\\n\\nThought: I now can give
|
||||
a great answer\\nFinal Answer: The task result does not comply with the guardrail.
|
||||
It contains 22 words, which exceeds the limit of 10 words.\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
285,\n \"completion_tokens\": 195,\n \"total_tokens\": 480,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_deacdd5f6f\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- REDACTED-RAY
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 05 Nov 2025 22:19:58 GMT
|
||||
- Thu, 15 Jan 2026 03:05:59 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=REDACTED; path=/; expires=Wed, 05-Nov-25 22:49:58 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
- _cfuvid=REDACTED; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '1557'
|
||||
openai-organization:
|
||||
- user-hortuttj2f3qtmxyik2zxf4q
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '2201'
|
||||
- '2130'
|
||||
openai-project:
|
||||
- proj_fL4UBWR1CMpAAdgzaSKqsVvA
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '2401'
|
||||
- '2147'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- '500'
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000'
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- '499'
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29439'
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- 120ms
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1.122s
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- req_REDACTED
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"Ensure your final answer strictly adheres to the following OpenAPI schema: {\n \"type\": \"json_schema\",\n \"json_schema\": {\n \"name\": \"LLMGuardrailResult\",\n \"strict\": true,\n \"schema\": {\n \"properties\": {\n \"valid\": {\n \"description\": \"Whether the task output complies with the guardrail\",\n \"title\": \"Valid\",\n \"type\": \"boolean\"\n },\n \"feedback\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"A feedback about the task output if it is not valid\",\n \"title\": \"Feedback\"\n }\n },\n \"required\": [\n \"valid\",\n \"feedback\"\n ],\n \"title\": \"LLMGuardrailResult\",\n \"type\": \"object\",\n \"additionalProperties\":
|
||||
false\n }\n }\n}\n\nDo not include the OpenAPI schema in the final output. Ensure the final output does not include any code block markers like ```json or ```python."},{"role":"user","content":"{\n \"valid\": false,\n \"feedback\": \"The task result contains more than 10 words, violating the guardrail. The text provided contains about 21 words.\"\n}"}],"model":"gpt-4o","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"valid":{"description":"Whether the task output complies with the guardrail","title":"Valid","type":"boolean"},"feedback":{"anyOf":[{"type":"string"},{"type":"null"}],"description":"A feedback about the task output if it is not valid","title":"Feedback"}},"required":["valid","feedback"],"title":"LLMGuardrailResult","type":"object","additionalProperties":false},"name":"LLMGuardrailResult","strict":true}},"stream":false}'
|
||||
body: '{"messages":[{"role":"system","content":"Ensure your final answer strictly
|
||||
adheres to the following OpenAPI schema: {\n \"type\": \"json_schema\",\n \"json_schema\":
|
||||
{\n \"name\": \"LLMGuardrailResult\",\n \"strict\": true,\n \"schema\":
|
||||
{\n \"properties\": {\n \"valid\": {\n \"description\":
|
||||
\"Whether the task output complies with the guardrail\",\n \"title\":
|
||||
\"Valid\",\n \"type\": \"boolean\"\n },\n \"feedback\":
|
||||
{\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\":
|
||||
\"null\"\n }\n ],\n \"default\": null,\n \"description\":
|
||||
\"A feedback about the task output if it is not valid\",\n \"title\":
|
||||
\"Feedback\"\n }\n },\n \"required\": [\n \"valid\",\n \"feedback\"\n ],\n \"title\":
|
||||
\"LLMGuardrailResult\",\n \"type\": \"object\",\n \"additionalProperties\":
|
||||
false\n }\n }\n}\n\nDo not include the OpenAPI schema in the final output.
|
||||
Ensure the final output does not include any code block markers like ```json
|
||||
or ```python."},{"role":"user","content":"The task result does not comply with
|
||||
the guardrail. It contains 22 words, which exceeds the limit of 10 words."}],"model":"gpt-4o","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"valid":{"description":"Whether
|
||||
the task output complies with the guardrail","title":"Valid","type":"boolean"},"feedback":{"anyOf":[{"type":"string"},{"type":"null"}],"description":"A
|
||||
feedback about the task output if it is not valid","title":"Feedback"}},"required":["valid","feedback"],"title":"LLMGuardrailResult","type":"object","additionalProperties":false},"name":"LLMGuardrailResult","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1884'
|
||||
- '1835'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=REDACTED; _cfuvid=REDACTED
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- chat.completions.parse
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-CYg98QlZ8NTrQ69676MpXXyCoZJT8\",\n \"object\": \"chat.completion\",\n \"created\": 1762381198,\n \"model\": \"gpt-4o-2024-08-06\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"{\\\"valid\\\":false,\\\"feedback\\\":\\\"The task result contains more than 10 words, violating the guardrail. The text provided contains about 21 words.\\\"}\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 374,\n \"completion_tokens\": 32,\n \"total_tokens\": 406,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n\
|
||||
\ \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_cbf1785567\"\n}\n"
|
||||
string: "{\n \"id\": \"chatcmpl-Cy7yJiPCk4fXuogyT5e8XeGRLCSf8\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768446359,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"valid\\\":false,\\\"feedback\\\":\\\"The
|
||||
task output exceeds the word limit of 10 words by containing 22 words.\\\"}\",\n
|
||||
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
|
||||
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
363,\n \"completion_tokens\": 25,\n \"total_tokens\": 388,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_a0e9480a2f\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- REDACTED-RAY
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Wed, 05 Nov 2025 22:19:59 GMT
|
||||
- Thu, 15 Jan 2026 03:05:59 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
content-length:
|
||||
- '913'
|
||||
openai-organization:
|
||||
- user-hortuttj2f3qtmxyik2zxf4q
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '419'
|
||||
- '488'
|
||||
openai-project:
|
||||
- proj_fL4UBWR1CMpAAdgzaSKqsVvA
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '432'
|
||||
- '507'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- '500'
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000'
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- '499'
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29702'
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- 120ms
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- 596ms
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- req_REDACTED
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Guardrail Agent. You are a expert at validating the output of a task. By providing effective feedback if the output is not valid.\nYour personal goal is: Validate the output of the task\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!Ensure your final answer strictly adheres to the following OpenAPI schema: {\n \"type\": \"json_schema\",\n \"json_schema\": {\n \"name\": \"LLMGuardrailResult\",\n \"strict\": true,\n \"schema\": {\n \"properties\": {\n \"valid\": {\n \"description\": \"Whether the task output complies with the guardrail\",\n \"title\": \"Valid\",\n \"type\": \"boolean\"\n },\n \"feedback\": {\n \"anyOf\":
|
||||
[\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"A feedback about the task output if it is not valid\",\n \"title\": \"Feedback\"\n }\n },\n \"required\": [\n \"valid\",\n \"feedback\"\n ],\n \"title\": \"LLMGuardrailResult\",\n \"type\": \"object\",\n \"additionalProperties\": false\n }\n }\n}\n\nDo not include the OpenAPI schema in the final output. Ensure the final output does not include any code block markers like ```json or ```python."},{"role":"user","content":"\n Ensure the following task result complies with the given guardrail.\n\n Task result:\n \n Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry''s standard dummy text ever\n \n\n Guardrail:\n Ensure
|
||||
the result has less than 500 words\n\n Your task:\n - Confirm if the Task result complies with the guardrail.\n - If not, provide clear feedback explaining what is wrong (e.g., by how much it violates the rule, or what specific part fails).\n - Focus only on identifying issues — do not propose corrections.\n - If the Task result complies with the guardrail, saying that is valid\n "}],"model":"gpt-4o"}'
|
||||
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"You are Guardrail Agent.
|
||||
You are a expert at validating the output of a task. By providing effective
|
||||
feedback if the output is not valid.\\nYour personal goal is: Validate the output
|
||||
of the task\\nTo give my best complete final answer to the task respond using
|
||||
the exact following format:\\n\\nThought: I now can give a great answer\\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described.\\n\\nI MUST use these formats, my job depends
|
||||
on it!\"},{\"role\":\"user\",\"content\":\"\\nCurrent Task: \\n Ensure
|
||||
the following task result complies with the given guardrail.\\n\\n Task
|
||||
result:\\n \\n Lorem Ipsum is simply dummy text of the printing
|
||||
and typesetting industry. Lorem Ipsum has been the industry's standard dummy
|
||||
text ever\\n \\n\\n Guardrail:\\n Ensure the result has
|
||||
less than 500 words\\n\\n Your task:\\n - Confirm if the Task
|
||||
result complies with the guardrail.\\n - If not, provide clear feedback
|
||||
explaining what is wrong (e.g., by how much it violates the rule, or what specific
|
||||
part fails).\\n - Focus only on identifying issues \u2014 do not propose
|
||||
corrections.\\n - If the Task result complies with the guardrail, saying
|
||||
that is valid\\n \\n\\nBegin! This is VERY important to you, use the
|
||||
tools available and give your best Final Answer, your job depends on it!\\n\\nThought:\"}],\"model\":\"gpt-4o\"}"
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2453'
|
||||
- '1468'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-CYgBMV6fu7EvV2BqzMdJaKyLAg1WW\",\n \"object\": \"chat.completion\",\n \"created\": 1762381336,\n \"model\": \"gpt-4o-2024-08-06\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal Answer: {\\\"valid\\\": true, \\\"feedback\\\": null}\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 489,\n \"completion_tokens\": 23,\n \"total_tokens\": 512,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\"\
|
||||
: \"fp_cbf1785567\"\n}\n"
|
||||
string: "{\n \"id\": \"chatcmpl-Cy7yKa0rmi2YoTLpyXt9hjeLt2rTI\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1768446360,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"First, I'll count the number of words
|
||||
in the Task result to ensure it complies with the guardrail. \\n\\nThe Task
|
||||
result is: \\\"Lorem Ipsum is simply dummy text of the printing and typesetting
|
||||
industry. Lorem Ipsum has been the industry's standard dummy text ever.\\\"\\n\\nBy
|
||||
counting the words: \\n1. Lorem\\n2. Ipsum\\n3. is\\n4. simply\\n5. dummy\\n6.
|
||||
text\\n7. of\\n8. the\\n9. printing\\n10. and\\n11. typesetting\\n12. industry\\n13.
|
||||
Lorem\\n14. Ipsum\\n15. has\\n16. been\\n17. the\\n18. industry's\\n19. standard\\n20.
|
||||
dummy\\n21. text\\n22. ever\\n\\nThere are 22 words total in the Task result.\\n\\nI
|
||||
need to verify if the count of 22 words is less than the guardrail limit of
|
||||
500 words.\\n\\nThought: I now can give a great answer\\nFinal Answer: The
|
||||
Task result complies with the guardrail as it contains 22 words, which is
|
||||
less than the 500-word limit. Therefore, the output is valid.\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
285,\n \"completion_tokens\": 227,\n \"total_tokens\": 512,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_deacdd5f6f\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- REDACTED-RAY
|
||||
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Wed, 05 Nov 2025 22:22:16 GMT
      - Thu, 15 Jan 2026 03:06:02 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=REDACTED; path=/; expires=Wed, 05-Nov-25 22:52:16 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=REDACTED; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - SET-COOKIE-XXX
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - X-Request-ID
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      content-length:
      - '1668'
      openai-organization:
      - user-hortuttj2f3qtmxyik2zxf4q
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '327'
      - '2502'
      openai-project:
      - proj_fL4UBWR1CMpAAdgzaSKqsVvA
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '372'
      - '2522'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - '500'
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - '30000'
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - '499'
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - '29438'
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - 120ms
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - 1.124s
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - req_REDACTED
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"Ensure your final answer strictly adheres to the following OpenAPI schema: {\n \"type\": \"json_schema\",\n \"json_schema\": {\n \"name\": \"LLMGuardrailResult\",\n \"strict\": true,\n \"schema\": {\n \"properties\": {\n \"valid\": {\n \"description\": \"Whether the task output complies with the guardrail\",\n \"title\": \"Valid\",\n \"type\": \"boolean\"\n },\n \"feedback\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"A feedback about the task output if it is not valid\",\n \"title\": \"Feedback\"\n }\n },\n \"required\": [\n \"valid\",\n \"feedback\"\n ],\n \"title\": \"LLMGuardrailResult\",\n \"type\": \"object\",\n \"additionalProperties\":
      false\n }\n }\n}\n\nDo not include the OpenAPI schema in the final output. Ensure the final output does not include any code block markers like ```json or ```python."},{"role":"user","content":"{\"valid\": true, \"feedback\": null}"}],"model":"gpt-4o","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"valid":{"description":"Whether the task output complies with the guardrail","title":"Valid","type":"boolean"},"feedback":{"anyOf":[{"type":"string"},{"type":"null"}],"description":"A feedback about the task output if it is not valid","title":"Feedback"}},"required":["valid","feedback"],"title":"LLMGuardrailResult","type":"object","additionalProperties":false},"name":"LLMGuardrailResult","strict":true}},"stream":false}'
    body: '{"messages":[{"role":"system","content":"Ensure your final answer strictly
      adheres to the following OpenAPI schema: {\n \"type\": \"json_schema\",\n \"json_schema\":
      {\n \"name\": \"LLMGuardrailResult\",\n \"strict\": true,\n \"schema\":
      {\n \"properties\": {\n \"valid\": {\n \"description\":
      \"Whether the task output complies with the guardrail\",\n \"title\":
      \"Valid\",\n \"type\": \"boolean\"\n },\n \"feedback\":
      {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\":
      \"null\"\n }\n ],\n \"default\": null,\n \"description\":
      \"A feedback about the task output if it is not valid\",\n \"title\":
      \"Feedback\"\n }\n },\n \"required\": [\n \"valid\",\n \"feedback\"\n ],\n \"title\":
      \"LLMGuardrailResult\",\n \"type\": \"object\",\n \"additionalProperties\":
      false\n }\n }\n}\n\nDo not include the OpenAPI schema in the final output.
      Ensure the final output does not include any code block markers like ```json
      or ```python."},{"role":"user","content":"The Task result complies with the
      guardrail as it contains 22 words, which is less than the 500-word limit. Therefore,
      the output is valid."}],"model":"gpt-4o","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"valid":{"description":"Whether
      the task output complies with the guardrail","title":"Valid","type":"boolean"},"feedback":{"anyOf":[{"type":"string"},{"type":"null"}],"description":"A
      feedback about the task output if it is not valid","title":"Feedback"}},"required":["valid","feedback"],"title":"LLMGuardrailResult","type":"object","additionalProperties":false},"name":"LLMGuardrailResult","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1762'
      - '1864'
      content-type:
      - application/json
      cookie:
      - __cf_bm=REDACTED; _cfuvid=REDACTED
      - COOKIE-XXX
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.109.1
      x-stainless-arch:
      - arm64
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - chat.completions.parse
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.109.1
      - 1.83.0
      x-stainless-read-timeout:
      - '600'
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.9
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n  \"id\": \"chatcmpl-CYgBMU20R45qGGaLN6vNAmW1NR4R6\",\n  \"object\": \"chat.completion\",\n  \"created\": 1762381336,\n  \"model\": \"gpt-4o-2024-08-06\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"content\": \"{\\\"valid\\\":true,\\\"feedback\\\":null}\",\n        \"refusal\": null,\n        \"annotations\": []\n      },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 347,\n    \"completion_tokens\": 9,\n    \"total_tokens\": 356,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\": 0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\": {\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\": 0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\": \"default\",\n  \"system_fingerprint\": \"fp_cbf1785567\"\n}\n"
      string: "{\n \"id\": \"chatcmpl-Cy7yMAjNYSCz2foZPEcSVCuapzF8y\",\n \"object\":
      \"chat.completion\",\n \"created\": 1768446362,\n \"model\": \"gpt-4o-2024-08-06\",\n
      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
      \"assistant\",\n \"content\": \"{\\\"valid\\\":true,\\\"feedback\\\":null}\",\n
      \ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
      null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
      369,\n \"completion_tokens\": 9,\n \"total_tokens\": 378,\n \"prompt_tokens_details\":
      {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
      {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
      \"default\",\n \"system_fingerprint\": \"fp_a0e9480a2f\"\n}\n"
    headers:
      CF-RAY:
      - REDACTED-RAY
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Wed, 05 Nov 2025 22:22:17 GMT
      - Thu, 15 Jan 2026 03:06:03 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - X-Request-ID
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      content-length:
      - '837'
      openai-organization:
      - user-hortuttj2f3qtmxyik2zxf4q
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '1081'
      - '413'
      openai-project:
      - proj_fL4UBWR1CMpAAdgzaSKqsVvA
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '1241'
      - '650'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - '500'
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - '30000'
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - '499'
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - '29478'
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - 120ms
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - 1.042s
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - req_REDACTED
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
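The cassette hunk above replaces volatile or sensitive response headers (cookies, org and project IDs, rate-limit counters, request IDs) with stable -XXX placeholders, which keeps the recording deterministic and free of secrets. Below is a minimal sketch of how such scrubbing is typically wired up with vcrpy; the hook body, the header list, and the placeholder strings are illustrative assumptions, not the repository's actual test configuration.

# A sketch of response-header scrubbing with vcrpy; header names and
# placeholders are assumptions for illustration only.
import vcr

SCRUBBED = {
    "Set-Cookie": "SET-COOKIE-XXX",
    "openai-organization": "OPENAI-ORG-XXX",
    "openai-project": "OPENAI-PROJECT-XXX",
    "x-request-id": "X-REQUEST-ID-XXX",
}

def scrub_response(response):
    # Cassette responses store headers as {name: [values]}; replace any
    # sensitive header with a stable placeholder before it is written out.
    headers = response.get("headers", {})
    for name, placeholder in SCRUBBED.items():
        if name in headers:
            headers[name] = [placeholder]
    return response

recorder = vcr.VCR(
    before_record_response=scrub_response,
    # filter_headers handles the request side, e.g. the bearer token.
    filter_headers=[("authorization", "AUTHORIZATION-XXX")],
)

A test would then record through this recorder, e.g. with recorder.use_cassette("guardrail.yaml"):, and every re-recording produces the same redacted header values seen in the diff.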
lib/crewai/tests/cli/authentication/providers/test_keycloak.py (new file, 138 lines)
@@ -0,0 +1,138 @@
import pytest

from crewai.cli.authentication.main import Oauth2Settings
from crewai.cli.authentication.providers.keycloak import KeycloakProvider


class TestKeycloakProvider:
    @pytest.fixture(autouse=True)
    def setup_method(self):
        self.valid_settings = Oauth2Settings(
            provider="keycloak",
            domain="keycloak.example.com",
            client_id="test-client-id",
            audience="test-audience",
            extra={
                "realm": "test-realm"
            }
        )
        self.provider = KeycloakProvider(self.valid_settings)

    def test_initialization_with_valid_settings(self):
        provider = KeycloakProvider(self.valid_settings)
        assert provider.settings == self.valid_settings
        assert provider.settings.provider == "keycloak"
        assert provider.settings.domain == "keycloak.example.com"
        assert provider.settings.client_id == "test-client-id"
        assert provider.settings.audience == "test-audience"
        assert provider.settings.extra.get("realm") == "test-realm"

    def test_get_authorize_url(self):
        expected_url = "https://keycloak.example.com/realms/test-realm/protocol/openid-connect/auth/device"
        assert self.provider.get_authorize_url() == expected_url

    def test_get_authorize_url_with_different_domain(self):
        settings = Oauth2Settings(
            provider="keycloak",
            domain="auth.company.com",
            client_id="test-client",
            audience="test-audience",
            extra={
                "realm": "my-realm"
            }
        )
        provider = KeycloakProvider(settings)
        expected_url = "https://auth.company.com/realms/my-realm/protocol/openid-connect/auth/device"
        assert provider.get_authorize_url() == expected_url

    def test_get_token_url(self):
        expected_url = "https://keycloak.example.com/realms/test-realm/protocol/openid-connect/token"
        assert self.provider.get_token_url() == expected_url

    def test_get_token_url_with_different_domain(self):
        settings = Oauth2Settings(
            provider="keycloak",
            domain="sso.enterprise.com",
            client_id="test-client",
            audience="test-audience",
            extra={
                "realm": "enterprise-realm"
            }
        )
        provider = KeycloakProvider(settings)
        expected_url = "https://sso.enterprise.com/realms/enterprise-realm/protocol/openid-connect/token"
        assert provider.get_token_url() == expected_url

    def test_get_jwks_url(self):
        expected_url = "https://keycloak.example.com/realms/test-realm/protocol/openid-connect/certs"
        assert self.provider.get_jwks_url() == expected_url

    def test_get_jwks_url_with_different_domain(self):
        settings = Oauth2Settings(
            provider="keycloak",
            domain="identity.org",
            client_id="test-client",
            audience="test-audience",
            extra={
                "realm": "org-realm"
            }
        )
        provider = KeycloakProvider(settings)
        expected_url = "https://identity.org/realms/org-realm/protocol/openid-connect/certs"
        assert provider.get_jwks_url() == expected_url

    def test_get_issuer(self):
        expected_issuer = "https://keycloak.example.com/realms/test-realm"
        assert self.provider.get_issuer() == expected_issuer

    def test_get_issuer_with_different_domain(self):
        settings = Oauth2Settings(
            provider="keycloak",
            domain="login.myapp.io",
            client_id="test-client",
            audience="test-audience",
            extra={
                "realm": "app-realm"
            }
        )
        provider = KeycloakProvider(settings)
        expected_issuer = "https://login.myapp.io/realms/app-realm"
        assert provider.get_issuer() == expected_issuer

    def test_get_audience(self):
        assert self.provider.get_audience() == "test-audience"

    def test_get_client_id(self):
        assert self.provider.get_client_id() == "test-client-id"

    def test_get_required_fields(self):
        assert self.provider.get_required_fields() == ["realm"]

    def test_oauth2_base_url(self):
        assert self.provider._oauth2_base_url() == "https://keycloak.example.com"

    def test_oauth2_base_url_strips_https_prefix(self):
        settings = Oauth2Settings(
            provider="keycloak",
            domain="https://keycloak.example.com",
            client_id="test-client-id",
            audience="test-audience",
            extra={
                "realm": "test-realm"
            }
        )
        provider = KeycloakProvider(settings)
        assert provider._oauth2_base_url() == "https://keycloak.example.com"

    def test_oauth2_base_url_strips_http_prefix(self):
        settings = Oauth2Settings(
            provider="keycloak",
            domain="http://keycloak.example.com",
            client_id="test-client-id",
            audience="test-audience",
            extra={
                "realm": "test-realm"
            }
        )
        provider = KeycloakProvider(settings)
        assert provider._oauth2_base_url() == "https://keycloak.example.com"
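Taken together, these tests pin down the Keycloak URL layout: every endpoint hangs off https://<domain>/realms/<realm>/protocol/openid-connect/..., and _oauth2_base_url() normalizes a bare, http://, or https:// domain to an https:// origin. The sketch below is reconstructed purely from the expected URLs above to show the shape of a provider that would satisfy them; the real KeycloakProvider in crewai/cli/authentication/providers/keycloak.py may differ in detail.

# A provider sketch reconstructed from the test expectations; not the
# actual KeycloakProvider implementation.
class KeycloakProviderSketch:
    def __init__(self, settings):
        self.settings = settings

    def _oauth2_base_url(self) -> str:
        # Normalize "example.com", "http://example.com", and
        # "https://example.com" all to "https://example.com".
        domain = self.settings.domain
        for prefix in ("https://", "http://"):
            domain = domain.removeprefix(prefix)
        return f"https://{domain}"

    def _realm_url(self) -> str:
        # Keycloak scopes all OIDC endpoints under the realm.
        return f"{self._oauth2_base_url()}/realms/{self.settings.extra['realm']}"

    def get_authorize_url(self) -> str:
        return f"{self._realm_url()}/protocol/openid-connect/auth/device"

    def get_token_url(self) -> str:
        return f"{self._realm_url()}/protocol/openid-connect/token"

    def get_jwks_url(self) -> str:
        return f"{self._realm_url()}/protocol/openid-connect/certs"

    def get_issuer(self) -> str:
        return self._realm_url()

    def get_required_fields(self) -> list[str]:
        # "realm" is the only provider-specific field the tests require.
        return ["realm"]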
@@ -1202,8 +1202,9 @@ def test_complex_and_or_branching():
    )
    assert execution_order.index("branch_2b") > min_branch_1_index

    # Final should be last and after both 2a and 2b
    assert execution_order[-1] == "final"
    # Final should be after both 2a and 2b
    # Note: final may not be absolutely last due to independent branches (like branch_1c)
    # that don't contribute to the final result path with sequential listener execution
    assert execution_order.index("final") > execution_order.index("branch_2a")
    assert execution_order.index("final") > execution_order.index("branch_2b")
@@ -185,8 +185,8 @@ def test_task_guardrail_process_output(task_output):
    result = guardrail(task_output)
    assert result[0] is False

    assert result[1] == "The task result contains more than 10 words, violating the guardrail. The text provided contains about 21 words."
    # Check that feedback is provided (wording varies by LLM)
    assert result[1] and len(result[1]) > 0

    guardrail = LLMGuardrail(
        description="Ensure the result has less than 500 words", llm=LLM(model="gpt-4o")
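This hunk trades an exact-match assertion on the guardrail feedback for a presence check, since the feedback text is generated by an LLM and its wording drifts between model versions. A sketch of the resulting pattern, assuming the guardrail returns a (valid, feedback) pair as in the test above:

valid, feedback = guardrail(task_output)
assert valid is False                   # the structured verdict is stable across models
assert feedback and len(feedback) > 0   # wording varies, so only assert presence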
@@ -348,11 +348,11 @@ def test_agent_emits_execution_error_event(base_agent, base_task):
    error_message = "Error happening while sending prompt to model."
    base_agent.max_retry_limit = 0

    with patch.object(
        CrewAgentExecutor, "invoke", wraps=base_agent.agent_executor.invoke
    ) as invoke_mock:
        invoke_mock.side_effect = Exception(error_message)

    # Patch at the class level since agent_executor is created lazily
    with patch.object(
        CrewAgentExecutor, "invoke", side_effect=Exception(error_message)
    ):
        with pytest.raises(Exception):  # noqa: B017
            base_agent.execute_task(
                task=base_task,
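The change here moves the mock from a pre-built executor instance to the CrewAgentExecutor class itself, because (per the new comment) the executor is now created lazily: an instance-level patch taken before construction never touches the object the agent actually uses. A minimal, self-contained sketch of the difference, using illustrative names rather than the actual CrewAI classes:

# Illustrative sketch of class-level vs instance-level patching with a
# lazily constructed collaborator; names here are hypothetical.
from unittest.mock import patch

class Executor:
    def invoke(self):
        return "real result"

class Agent:
    @property
    def executor(self):
        # Constructed lazily on first access, like agent_executor.
        if not hasattr(self, "_executor"):
            self._executor = Executor()
        return self._executor

agent = Agent()
# Class-level patch: every instance, including ones created after the
# patch is applied, gets the side effect.
with patch.object(Executor, "invoke", side_effect=RuntimeError("boom")):
    try:
        agent.executor.invoke()
    except RuntimeError as exc:
        print(f"caught: {exc}")  # caught: boom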