mirror of https://github.com/crewAIInc/crewAI.git
synced 2026-04-13 06:23:03 +00:00

Compare commits: 14 commits (devin/1775... → docs/conve...)
| Author | SHA1 | Date |
|---|---|---|
|  | f789cf854b |  |
|  | 575bf87f07 |  |
|  | 9c4fb28956 |  |
|  | 4fe0cc348f |  |
|  | 3bc168f223 |  |
|  | b690ef69ae |  |
|  | 804c26bd01 |  |
|  | 4e46913045 |  |
|  | 335130cb15 |  |
|  | 186ea77c63 |  |
|  | 9e51229e6c |  |
|  | 247d623499 |  |
|  | c260f3e19f |  |
|  | d9cf7dda31 |  |
13  .github/workflows/docs-broken-links.yml (vendored)
```diff
@@ -4,13 +4,13 @@ on:
   pull_request:
     paths:
       - "docs/**"
-      - "docs.json"
+      - "docs/docs.json"
   push:
     branches:
       - main
     paths:
       - "docs/**"
-      - "docs.json"
+      - "docs/docs.json"
   workflow_dispatch:
 
 jobs:
@@ -25,11 +25,12 @@ jobs:
         with:
           node-version: "22"
 
+      - name: Install libsecret for Mintlify CLI
+        run: sudo apt-get update && sudo apt-get install -y libsecret-1-0
+
       - name: Install Mintlify CLI
-        run: npm i -g mintlify
+        run: npm install -g mint@latest
 
       - name: Run broken link checker
-        run: |
-          # Auto-answer the prompt with yes command
-          yes "" | mintlify broken-links || test $? -eq 141
+        run: mint broken-links
         working-directory: ./docs
```
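The old step's `|| test $? -eq 141` guard exists because `yes` gets killed by SIGPIPE (signal 13) once the CLI on the read side of the pipe exits, and shells report signal deaths as 128 + signal number, so 128 + 13 = 141. A standalone reproduction (assuming bash on a Linux runner, like the CI above):

```python
import subprocess

# `yes` writes forever; `head` exits after one line and closes the pipe, so
# `yes` dies from SIGPIPE and, with pipefail, the pipeline reports 128 + 13 = 141.
proc = subprocess.run(
    ["bash", "-c", "set -o pipefail; yes '' | head -n 1 > /dev/null"],
)
print(proc.returncode)
```

This is why simply dropping the `yes`-based prompt auto-answer (as the new `mint broken-links` step does) also removes the need for the exit-code workaround.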
@@ -4,6 +4,28 @@ description: "تحديثات المنتج والتحسينات وإصلاحات
icon: "clock"
mode: "wide"
---
<Update label="Apr 02, 2026">
## v1.13.0a7

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a7)

## What's Changed

### Features
- Add A2UI extension with v0.8/v0.9 support, schemas, and docs

### Bug Fixes
- Fix multimodal vision prefixes by adding GPT-5 and o-series

### Documentation
- Update changelog and version for v1.13.0a6

## Contributors

@alex-clawd, @greysonlalonde, @joaomdmoura

</Update>

<Update label="Apr 01, 2026">
## v1.13.0a6
@@ -5,6 +5,14 @@ icon: wrench
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Video Tutorial

Watch this video tutorial for a step-by-step demonstration of the installation process:
@@ -16,6 +16,14 @@ mode: "wide"

With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## The CrewAI Architecture

CrewAI's architecture is designed to balance autonomy with control.
@@ -5,6 +5,14 @@ icon: rocket
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Build your first CrewAI Agent

Let's create a simple crew that will help us `research` and `report` on the `latest AI developments` for a given topic or subject.
@@ -4,6 +4,28 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 02, 2026">
## v1.13.0a7

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a7)

## What's Changed

### Features
- Add A2UI extension with v0.8/v0.9 support, schemas, and docs

### Bug Fixes
- Fix multimodal vision prefixes by adding GPT-5 and o-series

### Documentation
- Update changelog and version for v1.13.0a6

## Contributors

@alex-clawd, @greysonlalonde, @joaomdmoura

</Update>

<Update label="Apr 01, 2026">
## v1.13.0a6
@@ -572,6 +572,176 @@ The `third_method` and `fourth_method` listen to the output of the `second_method`

When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.

### Conversational Flows (User Input)

The `self.ask()` method pauses flow execution to request input from a user inline, then returns their response as a string. This enables conversational, interactive flows where the AI can gather information, ask clarifying questions, or request approvals during execution.

#### Basic Usage

```python Code
from crewai.flow.flow import Flow, start, listen


class GreetingFlow(Flow):
    @start()
    def greet(self):
        name = self.ask("What's your name?")
        self.state["name"] = name

    @listen(greet)
    def welcome(self):
        print(f"Welcome, {self.state['name']}!")


flow = GreetingFlow()
flow.kickoff()
```

By default, `self.ask()` uses a `ConsoleProvider` that prompts via Python's built-in `input()`.

#### Multiple Asks in One Method

You can call `self.ask()` multiple times within a single method to gather several inputs:

```python Code
from crewai.flow.flow import Flow, start


class OnboardingFlow(Flow):
    @start()
    def collect_info(self):
        name = self.ask("What's your name?")
        role = self.ask("What's your role?")
        team = self.ask("Which team are you joining?")
        self.state["profile"] = {"name": name, "role": role, "team": team}
        print(f"Welcome {name}, {role} on {team}!")


flow = OnboardingFlow()
flow.kickoff()
```

#### Timeout Support

Pass `timeout=` (in seconds) to avoid blocking indefinitely. If the user doesn't respond in time, `self.ask()` returns `None`:

```python Code
from crewai.flow.flow import Flow, start


class ApprovalFlow(Flow):
    @start()
    def request_approval(self):
        response = self.ask("Approve deployment? (yes/no)", timeout=120)

        if response is None:
            print("No response received — timed out.")
            self.state["approved"] = False
            return

        self.state["approved"] = response.strip().lower() == "yes"
```

Use a `while` loop to retry on timeout:

```python Code
from crewai.flow.flow import Flow, start


class RetryFlow(Flow):
    @start()
    def ask_with_retry(self):
        answer = None
        while answer is None:
            answer = self.ask("Please confirm (yes/no):", timeout=60)
            if answer is None:
                print("Timed out, asking again...")
        self.state["confirmed"] = answer.strip().lower() == "yes"
```
#### Metadata Support

The `metadata` parameter enables bidirectional context passing between the flow and the input provider. Send context to the provider, and receive structured context back:

```python Code
from crewai.flow.flow import Flow, start


class ContextualFlow(Flow):
    @start()
    def gather_feedback(self):
        response = self.ask(
            "Rate this output (1-5):",
            metadata={
                "step": "quality_review",
                "output_id": "abc-123",
                "options": ["1", "2", "3", "4", "5"],
            },
        )
        self.state["rating"] = int(response) if response else None
```

When a custom provider returns an `InputResponse`, it can include its own metadata (e.g., user identity, timestamp, channel info) that your flow can process.

#### Custom InputProvider

For production use cases (Slack bots, web UIs, webhooks), implement the `InputProvider` protocol:

```python Code
from crewai.flow.flow import Flow, start
from crewai.flow.input_provider import InputProvider, InputResponse
import requests


class SlackInputProvider(InputProvider):
    def __init__(self, channel_id: str, bot_token: str):
        self.channel_id = channel_id
        self.bot_token = bot_token

    def request_input(self, message, flow, metadata=None):
        # Post the question to Slack
        requests.post(
            "https://slack.com/api/chat.postMessage",
            headers={"Authorization": f"Bearer {self.bot_token}"},
            json={"channel": self.channel_id, "text": message},
        )
        # Wait for and return the user's reply (simplified)
        reply = self.poll_for_reply()
        return InputResponse(
            value=reply["text"],
            metadata={"user": reply["user"], "ts": reply["ts"]},
        )

    def poll_for_reply(self):
        # Your implementation to wait for a Slack reply
        ...


# Use the custom provider
flow = Flow(input_provider=SlackInputProvider(
    channel_id="C01ABC123",
    bot_token="xoxb-...",
))
flow.kickoff()
```

The `request_input` method can return:
- A **string** — used directly as the user's response
- An **`InputResponse`** — includes `value` (the response string) and optional `metadata`
- **`None`** — treated as a timeout / no response
#### Auto-Checkpoint Behavior

<Note>
When persistence is configured, the flow state is automatically saved **before** each `self.ask()` call. If the process restarts while waiting for input, the flow can resume from the checkpoint without losing progress.
</Note>

#### `self.ask()` vs `@human_feedback`

| | `self.ask()` | `@human_feedback` |
|---|---|---|
| **Purpose** | Inline user input during execution | Approval gates and review feedback |
| **Returns** | `str \| None` | `HumanFeedbackResult` with structured fields |
| **Timeout** | Built-in `timeout=` parameter | Not built-in |
| **Provider** | Pluggable `InputProvider` protocol | Console-based |
| **Use when** | Gathering data, clarifications, confirmations | Review/approval workflows with structured feedback |
| **Decorator** | None — call `self.ask()` anywhere | `@human_feedback` on the method |

<Note>
Both features coexist — you can use `self.ask()` and `@human_feedback` in the same flow. Use `self.ask()` for inline data gathering and `@human_feedback` for structured review gates.
</Note>

### Human in the Loop (human feedback)

<Note>
@@ -5,6 +5,14 @@ icon: wrench
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Video Tutorial

Watch this video tutorial for a step-by-step demonstration of the installation process:
@@ -16,6 +16,14 @@ It empowers developers to build production-ready multi-agent systems by combinin

With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## The CrewAI Architecture

CrewAI's architecture is designed to balance autonomy with control.
@@ -5,6 +5,14 @@ icon: rocket
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Build your first CrewAI Agent

Let's create a simple crew that will help us `research` and `report` on the `latest AI developments` for a given topic or subject.
@@ -4,6 +4,28 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="Apr 02, 2026">
## v1.13.0a7

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a7)

## What's Changed

### Features
- Add A2UI extension with v0.8/v0.9 support, schemas, and docs

### Bug Fixes
- Fix multimodal vision prefixes by adding GPT-5 and o-series

### Documentation
- Update changelog and version for v1.13.0a6

## Contributors

@alex-clawd, @greysonlalonde, @joaomdmoura

</Update>

<Update label="Apr 01, 2026">
## v1.13.0a6
@@ -5,6 +5,14 @@ icon: wrench
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install coding agent skills (Claude Code, Codex, etc.) to quickly get your coding agents started with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Video Tutorial

Watch this video tutorial for a step-by-step demonstration of the installation process:
@@ -16,6 +16,14 @@ mode: "wide"

With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise AI automation.

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install coding agent skills (Claude Code, Codex, etc.) to quickly get your coding agents started with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## CrewAI Architecture

CrewAI's architecture is designed to balance autonomy and control.
@@ -5,6 +5,14 @@ icon: rocket
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install coding agent skills (Claude Code, Codex, etc.) to quickly get your coding agents started with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Build your first CrewAI Agent

Let's create a simple crew that will `research` and `report` on the `latest AI developments` for a given topic or subject.
@@ -4,6 +4,28 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 02, 2026">
## v1.13.0a7

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.13.0a7)

## What's Changed

### Features
- Add A2UI extension with v0.8/v0.9 support, schemas, and docs

### Bug Fixes
- Fix multimodal vision prefixes by adding GPT-5 and o-series

### Documentation
- Update changelog and version for v1.13.0a6

## Contributors

@alex-clawd, @greysonlalonde, @joaomdmoura

</Update>

<Update label="Apr 01, 2026">
## v1.13.0a6
@@ -5,6 +5,14 @@ icon: wrench
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Video Tutorial

Watch this video tutorial for a step-by-step demonstration of the installation process:
@@ -16,6 +16,14 @@ Ele capacita desenvolvedores a construir sistemas multi-agente prontos para prod

With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## The CrewAI Architecture

CrewAI's architecture is designed to balance autonomy with control.
@@ -5,6 +5,14 @@ icon: rocket
mode: "wide"
---

### Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, ...) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

## Build your first CrewAI Agent

Let's create a simple crew that will help us `research` and `report` on the `latest AI developments` for a given topic or subject.
```diff
@@ -152,4 +152,4 @@ __all__ = [
     "wrap_file_source",
 ]
 
-__version__ = "1.13.0a6"
+__version__ = "1.13.0a7"
```
```diff
@@ -11,7 +11,7 @@ dependencies = [
     "pytube~=15.0.0",
     "requests~=2.32.5",
     "docker~=7.1.0",
-    "crewai==1.13.0a6",
+    "crewai==1.13.0a7",
     "tiktoken~=0.8.0",
     "beautifulsoup4~=4.13.4",
     "python-docx~=1.2.0",
```
```diff
@@ -309,4 +309,4 @@ __all__ = [
     "ZapierActionTools",
 ]
 
-__version__ = "1.13.0a6"
+__version__ = "1.13.0a7"
```
```diff
@@ -54,7 +54,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
 
 [project.optional-dependencies]
 tools = [
-    "crewai-tools==1.13.0a6",
+    "crewai-tools==1.13.0a7",
 ]
 embeddings = [
     "tiktoken~=0.8.0"
```
```diff
@@ -8,6 +8,7 @@ from pydantic import PydanticUserError
 
 from crewai.agent.core import Agent
+from crewai.agent.planning_config import PlanningConfig
 from crewai.context import ExecutionContext
 from crewai.crew import Crew
 from crewai.crews.crew_output import CrewOutput
 from crewai.flow.flow import Flow
@@ -15,6 +16,7 @@ from crewai.knowledge.knowledge import Knowledge
 from crewai.llm import LLM
 from crewai.llms.base_llm import BaseLLM
 from crewai.process import Process
+from crewai.runtime_state import _entity_discriminator
 from crewai.task import Task
 from crewai.tasks.llm_guardrail import LLMGuardrail
 from crewai.tasks.task_output import TaskOutput
@@ -44,7 +46,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
 
 _suppress_pydantic_deprecation_warnings()
 
-__version__ = "1.13.0a6"
+__version__ = "1.13.0a7"
 _telemetry_submitted = False
 
 
@@ -96,6 +98,10 @@ def __getattr__(name: str) -> Any:
 
 
 try:
+    from crewai.agents.agent_builder.base_agent import BaseAgent as _BaseAgent
+    from crewai.agents.agent_builder.base_agent_executor_mixin import (
+        CrewAgentExecutorMixin as _CrewAgentExecutorMixin,
+    )
     from crewai.agents.tools_handler import ToolsHandler as _ToolsHandler
     from crewai.experimental.agent_executor import AgentExecutor as _AgentExecutor
     from crewai.hooks.llm_hooks import LLMCallHookContext as _LLMCallHookContext
@@ -105,27 +111,93 @@ try:
         SystemPromptResult as _SystemPromptResult,
     )
 
-    _AgentExecutor.model_rebuild(
-        force=True,
-        _types_namespace={
-            "Agent": Agent,
-            "ToolsHandler": _ToolsHandler,
-            "Crew": Crew,
-            "BaseLLM": BaseLLM,
-            "Task": Task,
-            "StandardPromptResult": _StandardPromptResult,
-            "SystemPromptResult": _SystemPromptResult,
-            "LLMCallHookContext": _LLMCallHookContext,
-            "ToolResult": _ToolResult,
-        },
-    )
+    _base_namespace: dict[str, type] = {
+        "Agent": Agent,
+        "BaseAgent": _BaseAgent,
+        "Crew": Crew,
+        "Flow": Flow,
+        "BaseLLM": BaseLLM,
+        "Task": Task,
+        "CrewAgentExecutorMixin": _CrewAgentExecutorMixin,
+        "ExecutionContext": ExecutionContext,
+    }
+
+    try:
+        from crewai.a2a.config import (
+            A2AClientConfig as _A2AClientConfig,
+            A2AConfig as _A2AConfig,
+            A2AServerConfig as _A2AServerConfig,
+        )
+
+        _base_namespace.update(
+            {
+                "A2AConfig": _A2AConfig,
+                "A2AClientConfig": _A2AClientConfig,
+                "A2AServerConfig": _A2AServerConfig,
+            }
+        )
+    except ImportError:
+        pass
+
+    import sys
+
+    _full_namespace = {
+        **_base_namespace,
+        "ToolsHandler": _ToolsHandler,
+        "StandardPromptResult": _StandardPromptResult,
+        "SystemPromptResult": _SystemPromptResult,
+        "LLMCallHookContext": _LLMCallHookContext,
+        "ToolResult": _ToolResult,
+    }
+
+    _resolve_namespace = {
+        **_full_namespace,
+        **sys.modules[_BaseAgent.__module__].__dict__,
+    }
+
+    for _mod_name in (
+        _BaseAgent.__module__,
+        Agent.__module__,
+        Crew.__module__,
+        Flow.__module__,
+        Task.__module__,
+        _AgentExecutor.__module__,
+    ):
+        sys.modules[_mod_name].__dict__.update(_resolve_namespace)
+
+    from crewai.tasks.conditional_task import ConditionalTask as _ConditionalTask
+
+    _BaseAgent.model_rebuild(force=True, _types_namespace=_full_namespace)
+    Task.model_rebuild(force=True, _types_namespace=_full_namespace)
+    _ConditionalTask.model_rebuild(force=True, _types_namespace=_full_namespace)
+    Crew.model_rebuild(force=True, _types_namespace=_full_namespace)
+    Flow.model_rebuild(force=True, _types_namespace=_full_namespace)
+    _AgentExecutor.model_rebuild(force=True, _types_namespace=_full_namespace)
+
+    from typing import Annotated
+
+    from pydantic import Discriminator, RootModel, Tag
+
+    Entity = Annotated[
+        Annotated[Flow, Tag("flow")]  # type: ignore[type-arg]
+        | Annotated[Crew, Tag("crew")]
+        | Annotated[Agent, Tag("agent")],
+        Discriminator(_entity_discriminator),
+    ]
+    RuntimeState = RootModel[list[Entity]]
+
+    try:
+        Agent.model_rebuild(force=True, _types_namespace=_full_namespace)
+    except PydanticUserError:
+        pass
 except (ImportError, PydanticUserError):
     import logging as _logging
 
     _logging.getLogger(__name__).warning(
-        "AgentExecutor.model_rebuild() failed; forward refs may be unresolved.",
+        "model_rebuild() failed; forward refs may be unresolved.",
         exc_info=True,
     )
+    RuntimeState = None  # type: ignore[assignment,misc]
 
 __all__ = [
     "LLM",
@@ -133,12 +205,14 @@ __all__ = [
     "BaseLLM",
     "Crew",
     "CrewOutput",
     "ExecutionContext",
     "Flow",
     "Knowledge",
     "LLMGuardrail",
     "Memory",
+    "PlanningConfig",
     "Process",
+    "RuntimeState",
     "Task",
     "TaskOutput",
     "__version__",
```
```diff
@@ -14,6 +14,7 @@ import subprocess
 import time
 from typing import (
     TYPE_CHECKING,
+    Annotated,
     Any,
     Literal,
     NoReturn,
@@ -23,11 +24,14 @@ import warnings
 
 from pydantic import (
     BaseModel,
+    BeforeValidator,
     ConfigDict,
     Field,
+    InstanceOf,
     PrivateAttr,
     model_validator,
 )
+from pydantic.functional_serializers import PlainSerializer
 from typing_extensions import Self
 
 from crewai.agent.planning_config import PlanningConfig
@@ -45,7 +49,11 @@ from crewai.agent.utils import (
     save_last_messages,
     validate_max_execution_time,
 )
-from crewai.agents.agent_builder.base_agent import BaseAgent
+from crewai.agents.agent_builder.base_agent import (
+    BaseAgent,
+    _serialize_llm_ref,
+    _validate_llm_ref,
+)
 from crewai.agents.cache.cache_handler import CacheHandler
 from crewai.agents.crew_agent_executor import CrewAgentExecutor
 from crewai.events.event_bus import crewai_event_bus
@@ -121,6 +129,24 @@ if TYPE_CHECKING:
 
 _passthrough_exceptions: tuple[type[Exception], ...] = ()
 
+_EXECUTOR_CLASS_MAP: dict[str, type] = {
+    "CrewAgentExecutor": CrewAgentExecutor,
+    "AgentExecutor": AgentExecutor,
+}
+
+
+def _validate_executor_class(value: Any) -> Any:
+    if isinstance(value, str):
+        cls = _EXECUTOR_CLASS_MAP.get(value)
+        if cls is None:
+            raise ValueError(f"Unknown executor class: {value}")
+        return cls
+    return value
+
+
+def _serialize_executor_class(value: Any) -> str:
+    return value.__name__ if isinstance(value, type) else str(value)
+
 
 class Agent(BaseAgent):
     """Represents an agent in a system.
@@ -166,12 +192,16 @@ class Agent(BaseAgent):
         default=True,
         description="Use system prompt for the agent.",
     )
-    llm: str | BaseLLM | None = Field(
-        description="Language model that will run the agent.", default=None
-    )
-    function_calling_llm: str | BaseLLM | None = Field(
-        description="Language model that will run the agent.", default=None
-    )
+    llm: Annotated[
+        str | BaseLLM | None,
+        BeforeValidator(_validate_llm_ref),
+        PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+    ] = Field(description="Language model that will run the agent.", default=None)
+    function_calling_llm: Annotated[
+        str | BaseLLM | None,
+        BeforeValidator(_validate_llm_ref),
+        PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+    ] = Field(description="Language model that will run the agent.", default=None)
     system_template: str | None = Field(
         default=None, description="System format for the agent."
     )
@@ -267,7 +297,14 @@ class Agent(BaseAgent):
         Can be a single A2AConfig/A2AClientConfig/A2AServerConfig, or a list of any number of A2AConfig/A2AClientConfig with a single A2AServerConfig.
         """,
     )
-    executor_class: type[CrewAgentExecutor] | type[AgentExecutor] = Field(
+    agent_executor: InstanceOf[CrewAgentExecutor] | InstanceOf[AgentExecutor] | None = (
+        Field(default=None, description="An instance of the CrewAgentExecutor class.")
+    )
+    executor_class: Annotated[
+        type[CrewAgentExecutor] | type[AgentExecutor],
+        BeforeValidator(_validate_executor_class),
+        PlainSerializer(_serialize_executor_class, return_type=str, when_used="json"),
+    ] = Field(
         default=CrewAgentExecutor,
         description="Class to use for the agent executor. Defaults to CrewAgentExecutor, can optionally use AgentExecutor.",
     )
```
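The `Annotated[..., BeforeValidator, PlainSerializer]` pattern gives `executor_class` a string form on the wire and a class object in memory: incoming strings are mapped to classes before validation, and JSON serialization maps the class back to its name. A minimal standalone version of the same round-trip, with toy executor classes in place of crewai's:

```python
from typing import Annotated, Any

from pydantic import BaseModel, BeforeValidator, Field
from pydantic.functional_serializers import PlainSerializer


class CrewAgentExecutor: ...
class AgentExecutor: ...


_EXECUTOR_CLASS_MAP: dict[str, type] = {
    "CrewAgentExecutor": CrewAgentExecutor,
    "AgentExecutor": AgentExecutor,
}


def _validate_executor_class(value: Any) -> Any:
    # Accept either a class-name string or the class itself
    if isinstance(value, str):
        cls = _EXECUTOR_CLASS_MAP.get(value)
        if cls is None:
            raise ValueError(f"Unknown executor class: {value}")
        return cls
    return value


def _serialize_executor_class(value: Any) -> str:
    return value.__name__ if isinstance(value, type) else str(value)


class AgentConfig(BaseModel):
    executor_class: Annotated[
        type,
        BeforeValidator(_validate_executor_class),
        PlainSerializer(_serialize_executor_class, return_type=str, when_used="json"),
    ] = Field(default=CrewAgentExecutor)


cfg = AgentConfig(executor_class="AgentExecutor")  # validated: string -> class
print(cfg.executor_class.__name__)
print(cfg.model_dump_json())                       # serialized: class -> name
```

Because `when_used="json"` restricts the serializer to JSON output, Python-mode dumps still yield the class object itself.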
```diff
@@ -690,7 +727,9 @@ class Agent(BaseAgent):
             task_prompt,
             knowledge_config,
             self.knowledge.query if self.knowledge else lambda *a, **k: None,
-            self.crew.query_knowledge if self.crew else lambda *a, **k: None,
+            self.crew.query_knowledge
+            if self.crew and not isinstance(self.crew, str)
+            else lambda *a, **k: None,
         )
 
         task_prompt = self._finalize_task_prompt(task_prompt, tools, task)
@@ -777,14 +816,18 @@ class Agent(BaseAgent):
         if not self.agent_executor:
             raise RuntimeError("Agent executor is not initialized.")
 
-        return self.agent_executor.invoke(
-            {
-                "input": task_prompt,
-                "tool_names": self.agent_executor.tools_names,
-                "tools": self.agent_executor.tools_description,
-                "ask_for_human_input": task.human_input,
-            }
-        )["output"]
+        result = cast(
+            dict[str, Any],
+            self.agent_executor.invoke(
+                {
+                    "input": task_prompt,
+                    "tool_names": self.agent_executor.tools_names,
+                    "tools": self.agent_executor.tools_description,
+                    "ask_for_human_input": task.human_input,
+                }
+            ),
+        )
+        return result["output"]
 
     async def aexecute_task(
         self,
@@ -955,19 +998,23 @@ class Agent(BaseAgent):
         if self.agent_executor is not None:
             self._update_executor_parameters(
                 task=task,
-                tools=parsed_tools,  # type: ignore[arg-type]
+                tools=parsed_tools,
                 raw_tools=raw_tools,
                 prompt=prompt,
                 stop_words=stop_words,
                 rpm_limit_fn=rpm_limit_fn,
             )
         else:
+            if not isinstance(self.llm, BaseLLM):
+                raise RuntimeError(
+                    "LLM must be resolved before creating agent executor."
+                )
             self.agent_executor = self.executor_class(
-                llm=cast(BaseLLM, self.llm),
+                llm=self.llm,
                 task=task,  # type: ignore[arg-type]
                 i18n=self.i18n,
                 agent=self,
-                crew=self.crew,
+                crew=self.crew,  # type: ignore[arg-type]
                 tools=parsed_tools,
                 prompt=prompt,
                 original_tools=raw_tools,
@@ -991,7 +1038,7 @@ class Agent(BaseAgent):
     def _update_executor_parameters(
         self,
         task: Task | None,
-        tools: list[BaseTool],
+        tools: list[CrewStructuredTool],
         raw_tools: list[BaseTool],
         prompt: SystemPromptResult | StandardPromptResult,
         stop_words: list[str],
@@ -1007,11 +1054,17 @@ class Agent(BaseAgent):
             stop_words: Stop words list.
             rpm_limit_fn: RPM limit callback function.
         """
+        if self.agent_executor is None:
+            raise RuntimeError("Agent executor is not initialized.")
+
         self.agent_executor.task = task
         self.agent_executor.tools = tools
         self.agent_executor.original_tools = raw_tools
         self.agent_executor.prompt = prompt
-        self.agent_executor.stop_words = stop_words
+        if isinstance(self.agent_executor, AgentExecutor):
+            self.agent_executor.stop_words = stop_words
+        else:
+            self.agent_executor.stop = stop_words
         self.agent_executor.tools_names = get_tool_names(tools)
         self.agent_executor.tools_description = render_text_description_and_args(tools)
         self.agent_executor.response_model = (
@@ -1033,7 +1086,7 @@ class Agent(BaseAgent):
             )
         )
 
-    def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
+    def get_delegation_tools(self, agents: Sequence[BaseAgent]) -> list[BaseTool]:
         agent_tools = AgentTools(agents=agents)
         return agent_tools.tools()
 
@@ -1787,21 +1840,3 @@ class Agent(BaseAgent):
             LiteAgentOutput: The result of the agent execution.
         """
         return await self.kickoff_async(messages, response_format, input_files)
-
-
-try:
-    from crewai.a2a.config import (
-        A2AClientConfig as _A2AClientConfig,
-        A2AConfig as _A2AConfig,
-        A2AServerConfig as _A2AServerConfig,
-    )
-
```
Agent.model_rebuild(
|
||||
_types_namespace={
|
||||
"A2AConfig": _A2AConfig,
|
||||
"A2AClientConfig": _A2AClientConfig,
|
||||
"A2AServerConfig": _A2AServerConfig,
|
||||
}
|
||||
)
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
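The `lambda *a, **k: None` fallbacks in the hunk above keep the knowledge-query slot callable even when no crew or knowledge source is attached, so the call site needs no branching. A minimal sketch of the pattern (the names below are illustrative, not CrewAI APIs):

```python
def query_knowledge(queries, **kwargs):
    # Stand-in for a real knowledge query function.
    return [f"snippet for {q}" for q in queries]

def pick_query_fn(source):
    # Use the real query function when a source exists, otherwise a
    # no-op that accepts any arguments and returns None.
    return query_knowledge if source else (lambda *a, **k: None)

assert pick_query_fn(object())(["q"]) == ["snippet for q"]
assert pick_query_fn(None)(["q"], top_k=3) is None
```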
@@ -137,7 +137,8 @@ def handle_knowledge_retrieval(
  Returns:
  The task prompt potentially augmented with knowledge context.
  """
- if not (agent.knowledge or (agent.crew and agent.crew.knowledge)):
+ _crew = agent.crew if not isinstance(agent.crew, str) else None
+ if not (agent.knowledge or (_crew and _crew.knowledge)):
  return task_prompt

  crewai_event_bus.emit(

@@ -244,7 +245,7 @@ def apply_training_data(agent: Agent, task_prompt: str) -> str:
  Returns:
  The task prompt with training data applied.
  """
- if agent.crew and agent.crew._train:
+ if agent.crew and not isinstance(agent.crew, str) and agent.crew._train:
  return agent._training_handler(task_prompt=task_prompt)
  return agent._use_trained_data(task_prompt=task_prompt)

@@ -355,7 +356,8 @@ async def ahandle_knowledge_retrieval(
  Returns:
  The task prompt potentially augmented with knowledge context.
  """
- if not (agent.knowledge or (agent.crew and agent.crew.knowledge)):
+ _crew = agent.crew if not isinstance(agent.crew, str) else None
+ if not (agent.knowledge or (_crew and _crew.knowledge)):
  return task_prompt

  crewai_event_bus.emit(

@@ -381,15 +383,16 @@ async def ahandle_knowledge_retrieval(
  if agent.agent_knowledge_context:
  task_prompt += agent.agent_knowledge_context

- knowledge_snippets = await agent.crew.aquery_knowledge(
- [agent.knowledge_search_query], **knowledge_config
- )
- if knowledge_snippets:
- agent.crew_knowledge_context = extract_knowledge_context(
- knowledge_snippets
+ if _crew:
+ knowledge_snippets = await _crew.aquery_knowledge(
+ [agent.knowledge_search_query], **knowledge_config
+ )
- if agent.crew_knowledge_context:
- task_prompt += agent.crew_knowledge_context
+ if knowledge_snippets:
+ agent.crew_knowledge_context = extract_knowledge_context(
+ knowledge_snippets
+ )
+ if agent.crew_knowledge_context:
+ task_prompt += agent.crew_knowledge_context

  crewai_event_bus.emit(
  agent,
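The recurring `isinstance(agent.crew, str)` guards above treat a crew reference that is still a string (an unresolved placeholder id) as "no crew yet". A standalone sketch of that guard, with `FakeCrew` as a hypothetical stand-in:

```python
def resolve_crew(ref):
    # `ref` may be None, a placeholder string id, or a crew-like object;
    # only a real object is usable for knowledge or training lookups.
    return ref if ref is not None and not isinstance(ref, str) else None

class FakeCrew:
    knowledge = ["doc"]

assert resolve_crew(None) is None
assert resolve_crew("crew-123") is None
assert resolve_crew(FakeCrew()).knowledge == ["doc"]
```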
@@ -5,7 +5,7 @@ with CrewAI's agent system. Provides memory persistence, tool integration, and structured
  output functionality.
  """

- from collections.abc import Callable
+ from collections.abc import Callable, Sequence
  from typing import Any, cast

  from pydantic import ConfigDict, Field, PrivateAttr

@@ -30,6 +30,7 @@ from crewai.events.types.agent_events import (
  )
  from crewai.tools.agent_tools.agent_tools import AgentTools
  from crewai.tools.base_tool import BaseTool
+ from crewai.types.callback import SerializableCallable
  from crewai.utilities import Logger
  from crewai.utilities.converter import Converter
  from crewai.utilities.import_utils import require

@@ -50,7 +51,7 @@ class LangGraphAgentAdapter(BaseAgentAdapter):
  _memory: Any = PrivateAttr(default=None)
  _max_iterations: int = PrivateAttr(default=10)
  function_calling_llm: Any = Field(default=None)
- step_callback: Callable[..., Any] | None = Field(default=None)
+ step_callback: SerializableCallable | None = Field(default=None)

  model: str = Field(default="gpt-4o")
  verbose: bool = Field(default=False)

@@ -272,7 +273,7 @@ class LangGraphAgentAdapter(BaseAgentAdapter):
  available_tools: list[Any] = self._tool_adapter.tools()
  self._graph.tools = available_tools

- def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
+ def get_delegation_tools(self, agents: Sequence[BaseAgent]) -> list[BaseTool]:
  """Implement delegation tools support for LangGraph.

  Creates delegation tools that allow this agent to delegate tasks to other agents.
@@ -4,6 +4,7 @@ This module contains the OpenAIAgentAdapter class that integrates OpenAI Assistants
  with CrewAI's agent system, providing tool integration and structured output support.
  """

+ from collections.abc import Sequence
  from typing import Any, cast

  from pydantic import ConfigDict, Field, PrivateAttr

@@ -188,14 +189,14 @@ class OpenAIAgentAdapter(BaseAgentAdapter):
  self._openai_agent = OpenAIAgent(
  name=self.role,
  instructions=instructions,
- model=self.llm,
+ model=str(self.llm),
  **self._agent_config or {},
  )

  if all_tools:
  self.configure_tools(all_tools)

- self.agent_executor = Runner
+ self.agent_executor = Runner # type: ignore[assignment]

  def configure_tools(self, tools: list[BaseTool] | None = None) -> None:
  """Configure tools for the OpenAI Assistant.

@@ -221,7 +222,7 @@ class OpenAIAgentAdapter(BaseAgentAdapter):
  """
  return self._converter_adapter.post_process_result(result.final_output)

- def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
+ def get_delegation_tools(self, agents: Sequence[BaseAgent]) -> list[BaseTool]:
  """Implement delegation tools support.

  Creates delegation tools that allow this agent to delegate tasks to other agents.
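Several hunks above change `get_delegation_tools` from `list[BaseAgent]` to `Sequence[BaseAgent]`. A hedged sketch of why: `Sequence` is covariant and read-only, so a `list` of a subclass passes type checking without a cast (class names below are simplified stand-ins, not the real CrewAI classes):

```python
from collections.abc import Sequence

class BaseAgent:
    role = "base"

class Agent(BaseAgent):
    role = "specialist"

def get_delegation_tool_names(agents: Sequence[BaseAgent]) -> list[str]:
    # Delegation only reads the agents, so Sequence is sufficient,
    # and mypy accepts a list[Agent] argument here; with list[BaseAgent]
    # the same call would be rejected (list is invariant).
    return [f"Delegate to {a.role}" for a in agents]

agents: list[Agent] = [Agent()]
assert get_delegation_tool_names(agents) == ["Delegate to specialist"]
```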
@@ -1,25 +1,30 @@
  from __future__ import annotations

  from abc import ABC, abstractmethod
+ from collections.abc import Sequence
  from copy import copy as shallow_copy
  from hashlib import md5
  from pathlib import Path
  import re
- from typing import Any, Final, Literal
+ from typing import TYPE_CHECKING, Annotated, Any, Final, Literal
  import uuid

  from pydantic import (
  UUID4,
  BaseModel,
+ BeforeValidator,
  Field,
+ InstanceOf,
  PrivateAttr,
  field_validator,
  model_validator,
  )
+ from pydantic.functional_serializers import PlainSerializer
  from pydantic_core import PydanticCustomError
  from typing_extensions import Self

  from crewai.agent.internal.meta import AgentMeta
+ from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
  from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
  from crewai.agents.cache.cache_handler import CacheHandler
  from crewai.agents.tools_handler import ToolsHandler

@@ -27,6 +32,7 @@ from crewai.knowledge.knowledge import Knowledge
  from crewai.knowledge.knowledge_config import KnowledgeConfig
  from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
  from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
+ from crewai.llms.base_llm import BaseLLM
  from crewai.mcp.config import MCPServerConfig
  from crewai.memory.memory_scope import MemoryScope, MemorySlice
  from crewai.memory.unified_memory import Memory

@@ -42,6 +48,41 @@ from crewai.utilities.rpm_controller import RPMController
  from crewai.utilities.string_utils import interpolate_only


+ if TYPE_CHECKING:
+ from crewai.context import ExecutionContext
+ from crewai.crew import Crew
+
+
+ def _validate_crew_ref(value: Any) -> Any:
+ return value
+
+
+ def _serialize_crew_ref(value: Any) -> str | None:
+ if value is None:
+ return None
+ return str(value.id) if hasattr(value, "id") else str(value)
+
+
+ def _validate_llm_ref(value: Any) -> Any:
+ return value
+
+
+ def _resolve_agent(value: Any, info: Any) -> Any:
+ if isinstance(value, BaseAgent) or value is None or not isinstance(value, dict):
+ return value
+ from crewai.agent.core import Agent
+
+ return Agent.model_validate(value, context=getattr(info, "context", None))
+
+
+ def _serialize_llm_ref(value: Any) -> str | None:
+ if value is None:
+ return None
+ if isinstance(value, str):
+ return value
+ return getattr(value, "model", str(value))
+
+
  _SLUG_RE: Final[re.Pattern[str]] = re.compile(
  r"^(?:crewai-amp:)?[a-zA-Z0-9][a-zA-Z0-9_-]*(?:#[\w-]+)?$"
  )

@@ -119,10 +160,12 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
  Set private attributes.
  """

+ entity_type: Literal["agent"] = "agent"
+
  __hash__ = object.__hash__
  _logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
  _rpm_controller: RPMController | None = PrivateAttr(default=None)
- _request_within_rpm_limit: Any = PrivateAttr(default=None)
+ _request_within_rpm_limit: SerializableCallable | None = PrivateAttr(default=None)
  _original_role: str | None = PrivateAttr(default=None)
  _original_goal: str | None = PrivateAttr(default=None)
  _original_backstory: str | None = PrivateAttr(default=None)

@@ -154,13 +197,21 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
  max_iter: int = Field(
  default=25, description="Maximum iterations for an agent to execute a task"
  )
- agent_executor: Any = Field(
+ agent_executor: InstanceOf[CrewAgentExecutorMixin] | None = Field(
  default=None, description="An instance of the CrewAgentExecutor class."
  )
- llm: Any = Field(
- default=None, description="Language model that will run the agent."
- )
- crew: Any = Field(default=None, description="Crew to which the agent belongs.")
+ llm: Annotated[
+ str | BaseLLM | None,
+ BeforeValidator(_validate_llm_ref),
+ PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+ ] = Field(default=None, description="Language model that will run the agent.")
+ crew: Annotated[
+ Crew | str | None,
+ BeforeValidator(_validate_crew_ref),
+ PlainSerializer(
+ _serialize_crew_ref, return_type=str | None, when_used="always"
+ ),
+ ] = Field(default=None, description="Crew to which the agent belongs.")
  i18n: I18N = Field(
  default_factory=get_i18n, description="Internationalization settings."
  )

@@ -172,7 +223,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
  description="An instance of the ToolsHandler class.",
  )
  tools_results: list[dict[str, Any]] = Field(
- default=[], description="Results of the tools used by the agent."
+ default_factory=list, description="Results of the tools used by the agent."
  )
  max_tokens: int | None = Field(
  default=None, description="Maximum number of tokens for the agent's execution."

@@ -223,6 +274,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
  description="Agent Skills. Accepts paths for discovery or pre-loaded Skill objects.",
  min_length=1,
  )
+ execution_context: ExecutionContext | None = Field(default=None)

  @model_validator(mode="before")
  @classmethod

@@ -337,11 +389,12 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):

  @field_validator("id", mode="before")
  @classmethod
- def _deny_user_set_id(cls, v: UUID4 | None) -> None:
- if v:
+ def _deny_user_set_id(cls, v: UUID4 | None, info: Any) -> UUID4 | None:
+ if v and not (info.context or {}).get("from_checkpoint"):
  raise PydanticCustomError(
  "may_not_set_field", "This field is not to be set by the user.", {}
  )
+ return v

  @model_validator(mode="after")
  def set_private_attrs(self) -> Self:

@@ -398,7 +451,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
  pass

  @abstractmethod
- def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
+ def get_delegation_tools(self, agents: Sequence[BaseAgent]) -> list[BaseTool]:
  """Set the task tools that init BaseAgenTools class."""

  @abstractmethod
@@ -3,20 +3,15 @@
  from __future__ import annotations

  import json
  from typing import TYPE_CHECKING, Any

- from pydantic import GetCoreSchemaHandler
- from pydantic_core import CoreSchema, core_schema
+ from pydantic import BaseModel, Field

  from crewai.agents.cache.cache_handler import CacheHandler
  from crewai.tools.cache_tools.cache_tools import CacheTools
  from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling


  if TYPE_CHECKING:
  from crewai.agents.cache.cache_handler import CacheHandler
  from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling


- class ToolsHandler:
+ class ToolsHandler(BaseModel):
  """Callback handler for tool usage.

  Attributes:

@@ -24,14 +19,8 @@ class ToolsHandler:
  cache: Optional cache handler for storing tool outputs.
  """

- def __init__(self, cache: CacheHandler | None = None) -> None:
- """Initialize the callback handler.
-
- Args:
- cache: Optional cache handler for storing tool outputs.
- """
- self.cache: CacheHandler | None = cache
- self.last_used_tool: ToolCalling | InstructorToolCalling | None = None
+ cache: CacheHandler | None = Field(default=None)
+ last_used_tool: ToolCalling | InstructorToolCalling | None = Field(default=None)

  def on_tool_use(
  self,

@@ -48,7 +37,6 @@ class ToolsHandler:
  """
  self.last_used_tool = calling
  if self.cache and should_cache and calling.tool_name != CacheTools().name:
- # Convert arguments to string for cache
  input_str = ""
  if calling.arguments:
  if isinstance(calling.arguments, dict):

@@ -61,14 +49,3 @@ class ToolsHandler:
  input=input_str,
  output=output,
  )
-
- @classmethod
- def __get_pydantic_core_schema__(
- cls, _source_type: Any, _handler: GetCoreSchemaHandler
- ) -> CoreSchema:
- """Generate Pydantic core schema for BaseClient Protocol.
-
- This allows the Protocol to be used in Pydantic models without
- requiring arbitrary_types_allowed=True.
- """
- return core_schema.any_schema()
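The `ToolsHandler` hunks above record the last tool call and cache its output keyed by tool name plus stringified arguments. A hedged stdlib-only sketch of that flow, with dicts standing in for the real `ToolCalling` and `CacheHandler` types:

```python
import json

class CacheHandler:
    # Simplified stand-in for crewai's cache handler.
    def __init__(self):
        self._store = {}

    def add(self, tool, input, output):
        self._store[(tool, input)] = output

    def read(self, tool, input):
        return self._store.get((tool, input))

class ToolsHandler:
    def __init__(self, cache=None):
        self.cache = cache
        self.last_used_tool = None

    def on_tool_use(self, calling, output, should_cache=True):
        # Always remember the last call; cache only when requested.
        self.last_used_tool = calling
        if self.cache and should_cache:
            input_str = json.dumps(calling["arguments"], sort_keys=True)
            self.cache.add(tool=calling["tool_name"], input=input_str, output=output)

handler = ToolsHandler(cache=CacheHandler())
handler.on_tool_use({"tool_name": "search", "arguments": {"q": "crewai"}}, "results")
key = json.dumps({"q": "crewai"}, sort_keys=True)
assert handler.cache.read("search", key) == "results"
```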
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
  authors = [{ name = "Your Name", email = "you@example.com" }]
  requires-python = ">=3.10,<3.14"
  dependencies = [
- "crewai[tools]==1.13.0a6"
+ "crewai[tools]==1.13.0a7"
  ]

  [project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
  authors = [{ name = "Your Name", email = "you@example.com" }]
  requires-python = ">=3.10,<3.14"
  dependencies = [
- "crewai[tools]==1.13.0a6"
+ "crewai[tools]==1.13.0a7"
  ]

  [project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
  readme = "README.md"
  requires-python = ">=3.10,<3.14"
  dependencies = [
- "crewai[tools]==1.13.0a6"
+ "crewai[tools]==1.13.0a7"
  ]

  [tool.crewai]
@@ -4,6 +4,23 @@ import contextvars
  import os
  from typing import Any

+ from pydantic import BaseModel, Field
+
+ from crewai.events.base_events import (
+ get_emission_sequence,
+ set_emission_counter,
+ )
+ from crewai.events.event_context import (
+ _event_id_stack,
+ _last_event_id,
+ _triggering_event_id,
+ )
+ from crewai.flow.flow_context import (
+ current_flow_id,
+ current_flow_method_name,
+ current_flow_request_id,
+ )


  _platform_integration_token: contextvars.ContextVar[str | None] = (
  contextvars.ContextVar("platform_integration_token", default=None)

@@ -63,3 +80,53 @@ def reset_current_task_id(token: contextvars.Token[str | None]) -> None:
  def get_current_task_id() -> str | None:
  """Get the current task ID from the context."""
  return _current_task_id.get()
+
+
+ class ExecutionContext(BaseModel):
+ """Snapshot of ContextVar execution state."""
+
+ current_task_id: str | None = Field(default=None)
+ flow_request_id: str | None = Field(default=None)
+ flow_id: str | None = Field(default=None)
+ flow_method_name: str = Field(default="unknown")
+
+ event_id_stack: tuple[tuple[str, str], ...] = Field(default=())
+ last_event_id: str | None = Field(default=None)
+ triggering_event_id: str | None = Field(default=None)
+ emission_sequence: int = Field(default=0)
+
+ feedback_callback_info: dict[str, Any] | None = Field(default=None)
+ platform_token: str | None = Field(default=None)
+
+
+ def capture_execution_context(
+ feedback_callback_info: dict[str, Any] | None = None,
+ ) -> ExecutionContext:
+ """Read current ContextVars into an ExecutionContext."""
+ return ExecutionContext(
+ current_task_id=_current_task_id.get(),
+ flow_request_id=current_flow_request_id.get(),
+ flow_id=current_flow_id.get(),
+ flow_method_name=current_flow_method_name.get(),
+ event_id_stack=_event_id_stack.get(),
+ last_event_id=_last_event_id.get(),
+ triggering_event_id=_triggering_event_id.get(),
+ emission_sequence=get_emission_sequence(),
+ feedback_callback_info=feedback_callback_info,
+ platform_token=_platform_integration_token.get(),
+ )
+
+
+ def apply_execution_context(ctx: ExecutionContext) -> None:
+ """Write an ExecutionContext back into the ContextVars."""
+ _current_task_id.set(ctx.current_task_id)
+ current_flow_request_id.set(ctx.flow_request_id)
+ current_flow_id.set(ctx.flow_id)
+ current_flow_method_name.set(ctx.flow_method_name)
+
+ _event_id_stack.set(ctx.event_id_stack)
+ _last_event_id.set(ctx.last_event_id)
+ _triggering_event_id.set(ctx.triggering_event_id)
+ set_emission_counter(ctx.emission_sequence)
+
+ _platform_integration_token.set(ctx.platform_token)
@@ -1,7 +1,7 @@
  from __future__ import annotations

  import asyncio
- from collections.abc import Callable
+ from collections.abc import Callable, Sequence
  from concurrent.futures import Future
  from copy import copy as shallow_copy
  from hashlib import md5

@@ -10,7 +10,9 @@ from pathlib import Path
  import re
  from typing import (
  TYPE_CHECKING,
+ Annotated,
  Any,
  Literal,
  cast,
  )
  import uuid

@@ -21,12 +23,14 @@ from opentelemetry.context import attach, detach
  from pydantic import (
  UUID4,
  BaseModel,
+ BeforeValidator,
  Field,
  Json,
  PrivateAttr,
  field_validator,
  model_validator,
  )
+ from pydantic.functional_serializers import PlainSerializer
  from pydantic_core import PydanticCustomError
  from rich.console import Console
  from rich.panel import Panel

@@ -37,6 +41,8 @@ if TYPE_CHECKING:
  from crewai_files import FileInput
  from opentelemetry.trace import Span

+ from crewai.context import ExecutionContext
+
  try:
  from crewai_files import get_supported_content_types

@@ -49,7 +55,12 @@ except ImportError:


  from crewai.agent import Agent
- from crewai.agents.agent_builder.base_agent import BaseAgent
+ from crewai.agents.agent_builder.base_agent import (
+ BaseAgent,
+ _resolve_agent,
+ _serialize_llm_ref,
+ _validate_llm_ref,
+ )
  from crewai.agents.cache.cache_handler import CacheHandler
  from crewai.crews.crew_output import CrewOutput
  from crewai.crews.utils import (

@@ -132,6 +143,12 @@ from crewai.utilities.training_handler import CrewTrainingHandler
  warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")


+ def _resolve_agents(value: Any, info: Any) -> Any:
+ if not isinstance(value, list):
+ return value
+ return [_resolve_agent(a, info) for a in value]
+
+
  class Crew(FlowTrackable, BaseModel):
  """
  Represents a group of agents, defining how they should collaborate and the

@@ -170,6 +187,8 @@ class Crew(FlowTrackable, BaseModel):
  fingerprinting.
  """

+ entity_type: Literal["crew"] = "crew"
+
  __hash__ = object.__hash__
  _execution_span: Span | None = PrivateAttr()
  _rpm_controller: RPMController = PrivateAttr()

@@ -191,7 +210,10 @@ class Crew(FlowTrackable, BaseModel):
  name: str | None = Field(default="crew")
  cache: bool = Field(default=True)
  tasks: list[Task] = Field(default_factory=list)
- agents: list[BaseAgent] = Field(default_factory=list)
+ agents: Annotated[
+ list[BaseAgent],
+ BeforeValidator(_resolve_agents),
+ ] = Field(default_factory=list)
  process: Process = Field(default=Process.sequential)
  verbose: bool = Field(default=False)
  memory: bool | Memory | MemoryScope | MemorySlice | None = Field(

@@ -209,15 +231,20 @@ class Crew(FlowTrackable, BaseModel):
  default=None,
  description="Metrics for the LLM usage during all tasks execution.",
  )
- manager_llm: str | BaseLLM | None = Field(
- description="Language model that will run the agent.", default=None
- )
- manager_agent: BaseAgent | None = Field(
- description="Custom agent that will be used as manager.", default=None
- )
- function_calling_llm: str | LLM | None = Field(
- description="Language model that will run the agent.", default=None
- )
+ manager_llm: Annotated[
+ str | BaseLLM | None,
+ BeforeValidator(_validate_llm_ref),
+ PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+ ] = Field(description="Language model that will run the agent.", default=None)
+ manager_agent: Annotated[
+ BaseAgent | None,
+ BeforeValidator(_resolve_agent),
+ ] = Field(description="Custom agent that will be used as manager.", default=None)
+ function_calling_llm: Annotated[
+ str | LLM | None,
+ BeforeValidator(_validate_llm_ref),
+ PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+ ] = Field(description="Language model that will run the agent.", default=None)
  config: Json[dict[str, Any]] | dict[str, Any] | None = Field(default=None)
  id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
  share_crew: bool | None = Field(default=False)

@@ -266,7 +293,11 @@ class Crew(FlowTrackable, BaseModel):
  default=False,
  description="Plan the crew execution and add the plan to the crew.",
  )
- planning_llm: str | BaseLLM | Any | None = Field(
+ planning_llm: Annotated[
+ str | BaseLLM | None,
+ BeforeValidator(_validate_llm_ref),
+ PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+ ] = Field(
  default=None,
  description=(
  "Language model that will run the AgentPlanner if planning is True."

@@ -287,7 +318,11 @@ class Crew(FlowTrackable, BaseModel):
  "knowledge object."
  ),
  )
- chat_llm: str | BaseLLM | Any | None = Field(
+ chat_llm: Annotated[
+ str | BaseLLM | None,
+ BeforeValidator(_validate_llm_ref),
+ PlainSerializer(_serialize_llm_ref, return_type=str | None, when_used="json"),
+ ] = Field(
  default=None,
  description="LLM used to handle chatting with the crew.",
  )

@@ -313,14 +348,20 @@ class Crew(FlowTrackable, BaseModel):
  description="Whether to enable tracing for the crew. True=always enable, False=always disable, None=check environment/user settings.",
  )

+ execution_context: ExecutionContext | None = Field(default=None)
+ checkpoint_inputs: dict[str, Any] | None = Field(default=None)
+ checkpoint_train: bool | None = Field(default=None)
+ checkpoint_kickoff_event_id: str | None = Field(default=None)
+
  @field_validator("id", mode="before")
  @classmethod
- def _deny_user_set_id(cls, v: UUID4 | None) -> None:
+ def _deny_user_set_id(cls, v: UUID4 | None, info: Any) -> UUID4 | None:
  """Prevent manual setting of the 'id' field by users."""
- if v:
+ if v and not (info.context or {}).get("from_checkpoint"):
  raise PydanticCustomError(
  "may_not_set_field", "The 'id' field cannot be set by the user.", {}
  )
+ return v

  @field_validator("config", mode="before")
  @classmethod

@@ -1311,7 +1352,7 @@ class Crew(FlowTrackable, BaseModel):
  and hasattr(agent, "multimodal")
  and getattr(agent, "multimodal", False)
  ):
- if not (agent.llm and agent.llm.supports_multimodal()):
+ if not (isinstance(agent.llm, BaseLLM) and agent.llm.supports_multimodal()):
  tools = self._add_multimodal_tools(agent, tools)

  if agent and (hasattr(agent, "apps") and getattr(agent, "apps", None)):

@@ -1328,7 +1369,11 @@ class Crew(FlowTrackable, BaseModel):
  files = get_all_files(self.id, task.id)
  if files:
  supported_types: list[str] = []
- if agent and agent.llm and agent.llm.supports_multimodal():
+ if (
+ agent
+ and isinstance(agent.llm, BaseLLM)
+ and agent.llm.supports_multimodal()
+ ):
  provider = (
  getattr(agent.llm, "provider", None)
  or getattr(agent.llm, "model", None)

@@ -1384,7 +1429,7 @@ class Crew(FlowTrackable, BaseModel):
  self,
  tools: list[BaseTool],
  task_agent: BaseAgent,
- agents: list[BaseAgent],
+ agents: Sequence[BaseAgent],
  ) -> list[BaseTool]:
  if hasattr(task_agent, "get_delegation_tools"):
  delegation_tools = task_agent.get_delegation_tools(agents)

@@ -1781,17 +1826,10 @@ class Crew(FlowTrackable, BaseModel):
  token_sum = self.manager_agent._token_process.get_summary()
  total_usage_metrics.add_usage_metrics(token_sum)

- if (
- self.manager_agent
- and hasattr(self.manager_agent, "llm")
- and hasattr(self.manager_agent.llm, "get_token_usage_summary")
- ):
+ if self.manager_agent:
+ if isinstance(self.manager_agent.llm, BaseLLM):
  llm_usage = self.manager_agent.llm.get_token_usage_summary()
+ else:
+ llm_usage = self.manager_agent.llm._token_process.get_summary()

- total_usage_metrics.add_usage_metrics(llm_usage)
+ total_usage_metrics.add_usage_metrics(llm_usage)

  self.usage_metrics = total_usage_metrics
  return total_usage_metrics
@@ -21,7 +21,7 @@ class CrewOutput(BaseModel):
  description="JSON dict output of Crew", default=None
  )
  tasks_output: list[TaskOutput] = Field(
- description="Output of each task", default=[]
+ description="Output of each task", default_factory=list
  )
  token_usage: UsageMetrics = Field(
  description="Processed token summary", default_factory=UsageMetrics
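The `default=[]` to `default_factory=list` change above avoids a shared mutable default. The classic failure mode, sketched with plain classes (pydantic's `default_factory` gives each instance a fresh list the way `Fresh` does):

```python
class Shared:
    def __init__(self, items=[]):  # the classic mutable-default bug
        self.items = items

a, b = Shared(), Shared()
a.items.append("task-1")
assert b.items == ["task-1"]  # b sees a's data: the default list is shared

class Fresh:
    def __init__(self, items=None):
        self.items = [] if items is None else items  # factory-style default

c, d = Fresh(), Fresh()
c.items.append("task-1")
assert d.items == []
```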
@@ -11,6 +11,7 @@ from opentelemetry import baggage

  from crewai.agents.agent_builder.base_agent import BaseAgent
  from crewai.crews.crew_output import CrewOutput
+ from crewai.llms.base_llm import BaseLLM
  from crewai.rag.embeddings.types import EmbedderConfig
  from crewai.skills.loader import activate_skill, discover_skills
  from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel

@@ -50,7 +51,7 @@ def enable_agent_streaming(agents: Iterable[BaseAgent]) -> None:
  agents: Iterable of agents to enable streaming on.
  """
  for agent in agents:
- if agent.llm is not None:
+ if isinstance(agent.llm, BaseLLM):
  agent.llm.stream = True
@@ -25,13 +25,25 @@ def _get_or_create_counter() -> Iterator[int]:
  return counter


+ _last_emitted: contextvars.ContextVar[int] = contextvars.ContextVar(
+ "_last_emitted", default=0
+ )
+
+
  def get_next_emission_sequence() -> int:
  """Get the next emission sequence number.

  Returns:
  The next sequence number.
  """
- return next(_get_or_create_counter())
+ seq = next(_get_or_create_counter())
+ _last_emitted.set(seq)
+ return seq


+ def get_emission_sequence() -> int:
+ """Get the current emission sequence value without incrementing."""
+ return _last_emitted.get()
+
+
  def reset_emission_counter() -> None:

@@ -41,6 +53,14 @@ def reset_emission_counter() -> None:
  """
  counter: Iterator[int] = itertools.count(start=1)
  _emission_counter.set(counter)
+ _last_emitted.set(0)
+
+
+ def set_emission_counter(start: int) -> None:
+ """Set the emission counter to resume from a given value."""
+ counter: Iterator[int] = itertools.count(start=start + 1)
+ _emission_counter.set(counter)
+ _last_emitted.set(start)


  class BaseEvent(BaseModel):
@@ -78,9 +78,15 @@ from crewai.events.types.mcp_events import (
    MCPConnectionCompletedEvent,
    MCPConnectionFailedEvent,
    MCPConnectionStartedEvent,
    MCPToolExecutionCompletedEvent,
    MCPToolExecutionFailedEvent,
    MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
    MemoryQueryCompletedEvent,
    MemoryRetrievalCompletedEvent,
    MemorySaveCompletedEvent,
)
from crewai.events.types.observation_events import (
    GoalAchievedEarlyEvent,
    PlanRefinementEvent,

@@ -94,6 +100,12 @@ from crewai.events.types.reasoning_events import (
    AgentReasoningFailedEvent,
    AgentReasoningStartedEvent,
)
from crewai.events.types.skill_events import (
    SkillActivatedEvent,
    SkillDiscoveryCompletedEvent,
    SkillLoadFailedEvent,
    SkillLoadedEvent,
)
from crewai.events.types.task_events import (
    TaskCompletedEvent,
    TaskFailedEvent,

@@ -478,6 +490,7 @@ class EventListener(BaseEventListener):
            self.formatter.handle_guardrail_completed(
                event.success, event.error, event.retry_count
            )
            self._telemetry.feature_usage_span("guardrail:execution")

        @crewai_event_bus.on(CrewTestStartedEvent)
        def on_crew_test_started(source: Any, event: CrewTestStartedEvent) -> None:

@@ -559,6 +572,7 @@ class EventListener(BaseEventListener):
                event.plan,
                event.ready,
            )
            self._telemetry.feature_usage_span("planning:creation")

        @crewai_event_bus.on(AgentReasoningFailedEvent)
        def on_agent_reasoning_failed(_: Any, event: AgentReasoningFailedEvent) -> None:

@@ -616,6 +630,7 @@ class EventListener(BaseEventListener):
                event.replan_count,
                event.completed_steps_preserved,
            )
            self._telemetry.feature_usage_span("planning:replan")

        @crewai_event_bus.on(GoalAchievedEarlyEvent)
        def on_goal_achieved_early(_: Any, event: GoalAchievedEarlyEvent) -> None:

@@ -623,6 +638,25 @@ class EventListener(BaseEventListener):
                event.steps_completed,
                event.steps_remaining,
            )
            self._telemetry.feature_usage_span("planning:goal_achieved_early")

        # ----------- SKILL EVENTS -----------

        @crewai_event_bus.on(SkillDiscoveryCompletedEvent)
        def on_skill_discovery(_: Any, event: SkillDiscoveryCompletedEvent) -> None:
            self._telemetry.feature_usage_span("skill:discovery")

        @crewai_event_bus.on(SkillLoadedEvent)
        def on_skill_loaded(_: Any, event: SkillLoadedEvent) -> None:
            self._telemetry.feature_usage_span("skill:loaded")

        @crewai_event_bus.on(SkillLoadFailedEvent)
        def on_skill_load_failed(_: Any, event: SkillLoadFailedEvent) -> None:
            self._telemetry.feature_usage_span("skill:load_failed")

        @crewai_event_bus.on(SkillActivatedEvent)
        def on_skill_activated(_: Any, event: SkillActivatedEvent) -> None:
            self._telemetry.feature_usage_span("skill:activated")

        # ----------- AGENT LOGGING EVENTS -----------

@@ -662,6 +696,7 @@ class EventListener(BaseEventListener):
                event.error,
                event.is_multiturn,
            )
            self._telemetry.feature_usage_span("a2a:delegation")

        @crewai_event_bus.on(A2AConversationStartedEvent)
        def on_a2a_conversation_started(

@@ -703,6 +738,7 @@ class EventListener(BaseEventListener):
                event.error,
                event.total_turns,
            )
            self._telemetry.feature_usage_span("a2a:conversation")

        @crewai_event_bus.on(A2APollingStartedEvent)
        def on_a2a_polling_started(_: Any, event: A2APollingStartedEvent) -> None:

@@ -744,6 +780,7 @@ class EventListener(BaseEventListener):
                event.connection_duration_ms,
                event.is_reconnect,
            )
            self._telemetry.feature_usage_span("mcp:connection")

        @crewai_event_bus.on(MCPConnectionFailedEvent)
        def on_mcp_connection_failed(_: Any, event: MCPConnectionFailedEvent) -> None:

@@ -754,6 +791,7 @@ class EventListener(BaseEventListener):
                event.error,
                event.error_type,
            )
            self._telemetry.feature_usage_span("mcp:connection_failed")

        @crewai_event_bus.on(MCPConfigFetchFailedEvent)
        def on_mcp_config_fetch_failed(

@@ -764,6 +802,7 @@ class EventListener(BaseEventListener):
                event.error,
                event.error_type,
            )
            self._telemetry.feature_usage_span("mcp:config_fetch_failed")

        @crewai_event_bus.on(MCPToolExecutionStartedEvent)
        def on_mcp_tool_execution_started(

@@ -775,6 +814,12 @@ class EventListener(BaseEventListener):
                event.tool_args,
            )

        @crewai_event_bus.on(MCPToolExecutionCompletedEvent)
        def on_mcp_tool_execution_completed(
            _: Any, event: MCPToolExecutionCompletedEvent
        ) -> None:
            self._telemetry.feature_usage_span("mcp:tool_execution")

        @crewai_event_bus.on(MCPToolExecutionFailedEvent)
        def on_mcp_tool_execution_failed(
            _: Any, event: MCPToolExecutionFailedEvent

@@ -786,6 +831,45 @@ class EventListener(BaseEventListener):
                event.error,
                event.error_type,
            )
            self._telemetry.feature_usage_span("mcp:tool_execution_failed")

        # ----------- MEMORY TELEMETRY -----------

        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_memory_save_completed(_: Any, event: MemorySaveCompletedEvent) -> None:
            self._telemetry.feature_usage_span("memory:save")

        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_memory_query_completed(_: Any, event: MemoryQueryCompletedEvent) -> None:
            self._telemetry.feature_usage_span("memory:query")

        @crewai_event_bus.on(MemoryRetrievalCompletedEvent)
        def on_memory_retrieval_completed_telemetry(
            _: Any, event: MemoryRetrievalCompletedEvent
        ) -> None:
            self._telemetry.feature_usage_span("memory:retrieval")

        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_crew_kickoff_hooks(_: Any, event: CrewKickoffStartedEvent) -> None:
            from crewai.hooks.llm_hooks import (
                get_after_llm_call_hooks,
                get_before_llm_call_hooks,
            )
            from crewai.hooks.tool_hooks import (
                get_after_tool_call_hooks,
                get_before_tool_call_hooks,
            )

            has_hooks = any(
                [
                    get_before_llm_call_hooks(),
                    get_after_llm_call_hooks(),
                    get_before_tool_call_hooks(),
                    get_after_tool_call_hooks(),
                ]
            )
            if has_hooks:
                self._telemetry.feature_usage_span("hooks:registered")


event_listener = EventListener()
@@ -25,6 +25,7 @@ import logging
import threading
from typing import (
    TYPE_CHECKING,
    Annotated,
    Any,
    ClassVar,
    Generic,

@@ -41,9 +42,11 @@ from opentelemetry import baggage
from opentelemetry.context import attach, detach
from pydantic import (
    BaseModel,
    BeforeValidator,
    ConfigDict,
    Field,
    PrivateAttr,
    SerializeAsAny,
    ValidationError,
)
from pydantic._internal._model_construction import ModelMetaclass

@@ -115,6 +118,7 @@ from crewai.memory.unified_memory import Memory
if TYPE_CHECKING:
    from crewai_files import FileInput

    from crewai.context import ExecutionContext
    from crewai.flow.async_feedback.types import PendingFeedbackContext
    from crewai.llms.base_llm import BaseLLM

@@ -134,6 +138,19 @@ from crewai.utilities.streaming import (
logger = logging.getLogger(__name__)


def _resolve_persistence(value: Any) -> Any:
    if value is None or isinstance(value, FlowPersistence):
        return value
    if isinstance(value, dict):
        from crewai.flow.persistence.base import _persistence_registry

        type_name = value.get("persistence_type", "SQLiteFlowPersistence")
        cls = _persistence_registry.get(type_name)
        if cls is not None:
            return cls.model_validate(value)
    return value


class FlowState(BaseModel):
    """Base model for all flow states, ensuring each state has a unique ID."""

@@ -883,6 +900,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
    _routers: ClassVar[set[FlowMethodName]] = set()
    _router_paths: ClassVar[dict[FlowMethodName, list[FlowMethodName]]] = {}

    entity_type: Literal["flow"] = "flow"

    initial_state: Any = Field(default=None)
    name: str | None = Field(default=None)
    tracing: bool | None = Field(default=None)

@@ -893,8 +912,17 @@
    human_feedback_history: list[HumanFeedbackResult] = Field(default_factory=list)
    last_human_feedback: HumanFeedbackResult | None = Field(default=None)

    persistence: Any = Field(default=None, exclude=True)
    max_method_calls: int = Field(default=100, exclude=True)
    persistence: Annotated[
        SerializeAsAny[FlowPersistence] | Any,
        BeforeValidator(lambda v, _: _resolve_persistence(v)),
    ] = Field(default=None)
    max_method_calls: int = Field(default=100)

    execution_context: ExecutionContext | None = Field(default=None)
    checkpoint_completed_methods: set[str] | None = Field(default=None)
    checkpoint_method_outputs: list[Any] | None = Field(default=None)
    checkpoint_method_counts: dict[str, int] | None = Field(default=None)
    checkpoint_state: dict[str, Any] | None = Field(default=None)

    _methods: dict[FlowMethodName, FlowMethod[Any, Any]] = PrivateAttr(
        default_factory=dict
@@ -5,14 +5,17 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any

from pydantic import BaseModel
from pydantic import BaseModel, Field

if TYPE_CHECKING:
    from crewai.flow.async_feedback.types import PendingFeedbackContext


class FlowPersistence(ABC):
_persistence_registry: dict[str, type[FlowPersistence]] = {}


class FlowPersistence(BaseModel, ABC):
    """Abstract base class for flow state persistence.

    This class defines the interface that all persistence implementations must follow.

@@ -24,6 +27,13 @@ class FlowPersistence(ABC):
    - clear_pending_feedback(): Clears pending feedback after resume
    """

    persistence_type: str = Field(default="base")

    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        if not getattr(cls, "__abstractmethods__", set()):
            _persistence_registry[cls.__name__] = cls

    @abstractmethod
    def init_db(self) -> None:
        """Initialize the persistence backend.

@@ -95,7 +105,7 @@ class FlowPersistence(ABC):
        """
        return None

    def clear_pending_feedback(self, flow_uuid: str) -> None:  # noqa: B027
    def clear_pending_feedback(self, flow_uuid: str) -> None:
        """Clear the pending feedback marker after successful resume.

        This is called after feedback is received and the flow resumes.
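Taken together, the `_persistence_registry`, the `persistence_type` field, and the `_resolve_persistence` validator in `flow.py` form one pattern: subclasses register themselves by class name, and a serialized dict can later be revived into the right subclass. An illustrative stdlib-only sketch (class and function names here are assumptions; the crewAI version additionally skips abstract intermediates via an `__abstractmethods__` check, omitted for brevity):

```python
from abc import ABC, abstractmethod
from typing import Any

# Module-level registry, keyed by subclass name.
_registry: dict[str, type["Persistence"]] = {}


class Persistence(ABC):
    persistence_type = "base"

    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        _registry[cls.__name__] = cls  # every subclass registers its lookup name

    @abstractmethod
    def init_db(self) -> None: ...


class SQLitePersistence(Persistence):
    persistence_type = "SQLitePersistence"

    def __init__(self, db_path: str = "flow_states.db") -> None:
        self.db_path = db_path
        self.init_db()

    def init_db(self) -> None:
        pass  # a real backend would create tables here


def resolve(value: Any) -> Any:
    """Rebuild a persistence object from a plain dict; pass anything else through."""
    if isinstance(value, dict):
        data = dict(value)
        cls = _registry.get(data.pop("persistence_type", "SQLitePersistence"))
        if cls is not None:
            return cls(**data)
    return value
```

This is what lets a checkpointed flow, whose state was dumped to JSON, reconstruct its persistence backend on reload without hard-coding the concrete class.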
@@ -9,7 +9,8 @@ from pathlib import Path
import sqlite3
from typing import TYPE_CHECKING, Any

from pydantic import BaseModel
from pydantic import BaseModel, Field, PrivateAttr, model_validator
from typing_extensions import Self

from crewai.flow.persistence.base import FlowPersistence
from crewai.utilities.lock_store import lock as store_lock

@@ -50,26 +51,22 @@ class SQLiteFlowPersistence(FlowPersistence):
    ```
    """

    def __init__(self, db_path: str | None = None) -> None:
        """Initialize SQLite persistence.
    persistence_type: str = Field(default="SQLiteFlowPersistence")
    db_path: str = Field(
        default_factory=lambda: str(Path(db_storage_path()) / "flow_states.db")
    )
    _lock_name: str = PrivateAttr()

        Args:
            db_path: Path to the SQLite database file. If not provided, uses
                db_storage_path() from utilities.paths.
    def __init__(self, db_path: str | None = None, /, **kwargs: Any) -> None:
        if db_path is not None:
            kwargs["db_path"] = db_path
        super().__init__(**kwargs)

        Raises:
            ValueError: If db_path is invalid
        """

        # Get path from argument or default location
        path = db_path or str(Path(db_storage_path()) / "flow_states.db")

        if not path:
            raise ValueError("Database path must be provided")

        self.db_path = path  # Now mypy knows this is str
    @model_validator(mode="after")
    def _setup(self) -> Self:
        self._lock_name = f"sqlite:{os.path.realpath(self.db_path)}"
        self.init_db()
        return self

    def init_db(self) -> None:
        """Create the necessary tables if they don't exist."""
@@ -40,7 +40,9 @@ class LiteAgentOutput(BaseModel):
    usage_metrics: dict[str, Any] | None = Field(
        description="Token usage metrics for this execution", default=None
    )
    messages: list[LLMMessage] = Field(description="Messages of the agent", default=[])
    messages: list[LLMMessage] = Field(
        description="Messages of the agent", default_factory=list
    )

    plan: str | None = Field(
        default=None, description="The execution plan that was generated, if any"
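Context for the `default_factory=list` change above (the same swap appears later for `TaskOutput.messages`): in plain Python, a mutable default argument is created once and shared across calls, which is the classic trap this idiom guards against. Pydantic itself copies simple defaults per instance, but `default_factory` states the intent explicitly and is the idiomatic spelling for mutable defaults. A small plain-function illustration of the underlying pitfall:

```python
def bad_append(item, messages=[]):  # one list, created once, shared by every call
    messages.append(item)
    return messages


def good_append(item, messages=None):  # fresh list per call, like default_factory
    messages = [] if messages is None else messages
    messages.append(item)
    return messages
```

After two calls, `bad_append` has accumulated both items in the single shared list, while each `good_append` call starts from an empty one.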
@@ -32,6 +32,10 @@ class MemoryScope(BaseModel):
        """Extract memory dependency and normalize root path before validation."""
        if isinstance(data, MemoryScope):
            return data
        if not isinstance(data, dict):
            raise ValueError(f"Expected dict or MemoryScope, got {type(data).__name__}")
        if "memory" not in data:
            raise ValueError("MemoryScope requires a 'memory' key")
        memory = data.pop("memory")
        instance: MemoryScope = handler(data)
        instance._memory = memory

@@ -199,6 +203,10 @@ class MemorySlice(BaseModel):
        """Extract memory dependency and normalize scopes before validation."""
        if isinstance(data, MemorySlice):
            return data
        if not isinstance(data, dict):
            raise ValueError(f"Expected dict or MemorySlice, got {type(data).__name__}")
        if "memory" not in data:
            raise ValueError("MemorySlice requires a 'memory' key")
        memory = data.pop("memory")
        data["scopes"] = [s.rstrip("/") or "/" for s in data.get("scopes", [])]
        instance: MemorySlice = handler(data)
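The `MemorySlice` hunk normalizes scope paths before validation with `s.rstrip("/") or "/"`. Pulled out as a standalone helper (the function name is an assumption): trailing slashes are stripped so that `"/users/"` and `"/users"` compare equal, while the bare root `"/"`, which would strip down to an empty string, is preserved by the `or "/"` fallback.

```python
def normalize_scopes(scopes: list[str]) -> list[str]:
    # strip trailing slashes, but keep the bare root "/" intact
    return [s.rstrip("/") or "/" for s in scopes]
```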
lib/crewai/src/crewai/runtime_state.py (new file, 18 lines)
@@ -0,0 +1,18 @@
"""Unified runtime state for crewAI.

``RuntimeState`` is a ``RootModel`` whose ``model_dump_json()`` produces a
complete, self-contained snapshot of every active entity in the program.

The ``Entity`` type alias and ``RuntimeState`` model are built at import time
in ``crewai/__init__.py`` after all forward references are resolved.
"""

from typing import Any


def _entity_discriminator(v: dict[str, Any] | object) -> str:
    if isinstance(v, dict):
        raw = v.get("entity_type", "agent")
    else:
        raw = getattr(v, "entity_type", "agent")
    return str(raw)
@@ -1,6 +1,7 @@
from __future__ import annotations

import asyncio
from collections.abc import Sequence
from concurrent.futures import Future
import contextvars
from copy import copy as shallow_copy

@@ -12,6 +13,7 @@ import logging
from pathlib import Path
import threading
from typing import (
    Annotated,
    Any,
    ClassVar,
    cast,

@@ -24,6 +26,7 @@ import warnings
from pydantic import (
    UUID4,
    BaseModel,
    BeforeValidator,
    Field,
    PrivateAttr,
    field_validator,

@@ -32,7 +35,7 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from typing_extensions import Self

from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.base_agent import BaseAgent, _resolve_agent
from crewai.context import reset_current_task_id, set_current_task_id
from crewai.core.providers.content_processor import process_content
from crewai.events.event_bus import crewai_event_bus

@@ -41,6 +44,7 @@ from crewai.events.types.task_events import (
    TaskFailedEvent,
    TaskStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput

@@ -128,9 +132,10 @@ class Task(BaseModel):
    callback: SerializableCallable | None = Field(
        description="Callback to be executed after the task is completed.", default=None
    )
    agent: BaseAgent | None = Field(
        description="Agent responsible for execution the task.", default=None
    )
    agent: Annotated[
        BaseAgent | None,
        BeforeValidator(_resolve_agent),
    ] = Field(description="Agent responsible for execution the task.", default=None)
    context: list[Task] | None | _NotSpecified = Field(
        description="Other tasks that will have their output used as context for this task.",
        default=NOT_SPECIFIED,

@@ -316,6 +321,10 @@
        if self.agent is None:
            raise ValueError("Agent is required to use LLMGuardrail")

        if not isinstance(self.agent.llm, BaseLLM):
            raise ValueError(
                "Agent must have a BaseLLM instance to use LLMGuardrail"
            )
        self._guardrail = cast(
            GuardrailCallable,
            LLMGuardrail(description=self.guardrail, llm=self.agent.llm),

@@ -339,6 +348,10 @@
        )
        from crewai.tasks.llm_guardrail import LLMGuardrail

        if not isinstance(self.agent.llm, BaseLLM):
            raise ValueError(
                "Agent must have a BaseLLM instance to use LLMGuardrail"
            )
        guardrails.append(
            cast(
                GuardrailCallable,

@@ -359,6 +372,10 @@
        )
        from crewai.tasks.llm_guardrail import LLMGuardrail

        if not isinstance(self.agent.llm, BaseLLM):
            raise ValueError(
                "Agent must have a BaseLLM instance to use LLMGuardrail"
            )
        guardrails.append(
            cast(
                GuardrailCallable,

@@ -379,11 +396,12 @@

    @field_validator("id", mode="before")
    @classmethod
    def _deny_user_set_id(cls, v: UUID4 | None) -> None:
        if v:
    def _deny_user_set_id(cls, v: UUID4 | None, info: Any) -> UUID4 | None:
        if v and not (info.context or {}).get("from_checkpoint"):
            raise PydanticCustomError(
                "may_not_set_field", "This field is not to be set by the user.", {}
            )
        return v

    @field_validator("input_files", mode="before")
    @classmethod

@@ -646,7 +664,12 @@
            await cb_result

        crew = self.agent.crew  # type: ignore[union-attr]
        if crew and crew.task_callback and crew.task_callback != self.callback:
        if (
            crew
            and not isinstance(crew, str)
            and crew.task_callback
            and crew.task_callback != self.callback
        ):
            cb_result = crew.task_callback(self.output)
            if inspect.isawaitable(cb_result):
                await cb_result

@@ -761,7 +784,12 @@
            asyncio.run(cb_result)

        crew = self.agent.crew  # type: ignore[union-attr]
        if crew and crew.task_callback and crew.task_callback != self.callback:
        if (
            crew
            and not isinstance(crew, str)
            and crew.task_callback
            and crew.task_callback != self.callback
        ):
            cb_result = crew.task_callback(self.output)
            if inspect.iscoroutine(cb_result):
                asyncio.run(cb_result)

@@ -812,11 +840,14 @@
        if trigger_payload is not None:
            description += f"\n\nTrigger Payload: {trigger_payload}"

        if self.agent and self.agent.crew:
        if self.agent and self.agent.crew and not isinstance(self.agent.crew, str):
            files = get_all_files(self.agent.crew.id, self.id)
            if files:
                supported_types: list[str] = []
                if self.agent.llm and self.agent.llm.supports_multimodal():
                if (
                    isinstance(self.agent.llm, BaseLLM)
                    and self.agent.llm.supports_multimodal()
                ):
                    provider: str = str(
                        getattr(self.agent.llm, "provider", None)
                        or getattr(self.agent.llm, "model", "openai")

@@ -971,7 +1002,7 @@ Follow these guidelines:
        self.delegations += 1

    def copy(  # type: ignore
        self, agents: list[BaseAgent], task_mapping: dict[str, Task]
        self, agents: Sequence[BaseAgent], task_mapping: dict[str, Task]
    ) -> Task:
        """Creates a deep copy of the Task while preserving its original class type.
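The `_deny_user_set_id` change above adds a checkpoint escape hatch: the guard that normally forbids user-supplied task IDs is bypassed when validation runs with a `from_checkpoint` context flag (Pydantic supplies `info.context` from `model_validate(..., context=...)`). A plain-function restatement of that gate, with `SimpleNamespace` standing in for Pydantic's `ValidationInfo`:

```python
from types import SimpleNamespace


def deny_user_set_id(v, info):
    # reject an explicit id unless validation was started from a checkpoint
    if v and not (info.context or {}).get("from_checkpoint"):
        raise ValueError("This field is not to be set by the user.")
    return v
```

Restoring a checkpointed task can then re-validate its serialized dict with `context={"from_checkpoint": True}` and keep its original ID, while ordinary construction still refuses one.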
@@ -8,6 +8,7 @@ from pydantic import Field
from crewai.task import Task
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.types.callback import SerializableCallable


class ConditionalTask(Task):

@@ -24,7 +25,7 @@ class ConditionalTask(Task):
    - Cannot be the first task since it needs context from the previous task
    """

    condition: Callable[[TaskOutput], bool] | None = Field(
    condition: SerializableCallable | None = Field(
        default=None,
        description="Function that determines whether the task should be executed based on previous task output.",
    )

@@ -51,7 +52,7 @@
        """
        if self.condition is None:
            raise ValueError("No condition function set for conditional task")
        return self.condition(context)
        return bool(self.condition(context))

    def get_skipped_task_output(self) -> TaskOutput:
        """Generate a TaskOutput for when the conditional task is skipped.

@@ -43,7 +43,9 @@ class TaskOutput(BaseModel):
    output_format: OutputFormat = Field(
        description="Output format of the task", default=OutputFormat.RAW
    )
    messages: list[LLMMessage] = Field(description="Messages of the task", default=[])
    messages: list[LLMMessage] = Field(
        description="Messages of the task", default_factory=list
    )

    @model_validator(mode="after")
    def set_summary(self) -> TaskOutput:
@@ -41,6 +41,7 @@ from crewai.events.types.system_events (
    SigTStpEvent,
    SigTermEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.telemetry.constants import (
    CREWAI_TELEMETRY_BASE_URL,
    CREWAI_TELEMETRY_SERVICE_NAME,

@@ -323,7 +324,9 @@ class Telemetry:
                    if getattr(agent, "function_calling_llm", None)
                    else ""
                ),
                "llm": agent.llm.model,
                "llm": agent.llm.model
                if isinstance(agent.llm, BaseLLM)
                else str(agent.llm),
                "delegation_enabled?": agent.allow_delegation,
                "allow_code_execution?": getattr(
                    agent, "allow_code_execution", False

@@ -427,7 +430,9 @@ class Telemetry:
                    if getattr(agent, "function_calling_llm", None)
                    else ""
                ),
                "llm": agent.llm.model,
                "llm": agent.llm.model
                if isinstance(agent.llm, BaseLLM)
                else str(agent.llm),
                "delegation_enabled?": agent.allow_delegation,
                "allow_code_execution?": getattr(
                    agent, "allow_code_execution", False

@@ -840,7 +845,9 @@ class Telemetry:
                "max_iter": agent.max_iter,
                "max_rpm": agent.max_rpm,
                "i18n": agent.i18n.prompt_file,
                "llm": agent.llm.model,
                "llm": agent.llm.model
                if isinstance(agent.llm, BaseLLM)
                else str(agent.llm),
                "delegation_enabled?": agent.allow_delegation,
                "tools_names": [
                    sanitize_tool_name(tool.name)

@@ -1033,3 +1040,20 @@ class Telemetry:
            close_span(span)

        self._safe_telemetry_operation(_operation)

    def feature_usage_span(self, feature: str) -> None:
        """Records that a feature was used. One span = one count.

        Args:
            feature: Feature identifier, e.g. "planning:creation",
                "mcp:connection", "a2a:delegation".
        """

        def _operation() -> None:
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Feature Usage")
            self._add_attribute(span, "crewai_version", version("crewai"))
            self._add_attribute(span, "feature", feature)
            close_span(span)

        self._safe_telemetry_operation(_operation)
@@ -1,5 +1,6 @@
from __future__ import annotations

from collections.abc import Sequence
from typing import TYPE_CHECKING

from crewai.tools.agent_tools.ask_question_tool import AskQuestionTool

@@ -16,7 +17,7 @@ if TYPE_CHECKING:
class AgentTools:
    """Manager class for agent-related tools"""

    def __init__(self, agents: list[BaseAgent], i18n: I18N | None = None) -> None:
    def __init__(self, agents: Sequence[BaseAgent], i18n: I18N | None = None) -> None:
        self.agents = agents
        self.i18n = i18n if i18n is not None else get_i18n()
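This hunk, like the `Task.copy` signature change earlier, widens `list[BaseAgent]` to `Sequence[BaseAgent]`. The runtime behavior is unchanged; the payoff is at type-check time, since `Sequence` is read-only and covariant, so a list of a subclass of the declared element type, or a tuple, now type-checks. A small illustration with hypothetical stand-in classes:

```python
from collections.abc import Sequence


class Base: ...


class Concrete(Base): ...


def names(agents: Sequence[Base]) -> list[str]:
    # read-only iteration is all Sequence promises, so any sequence type works
    return [type(a).__name__ for a in agents]


result_list = names([Concrete(), Concrete()])  # list[Concrete] is accepted
result_tuple = names((Base(),))                # tuples are accepted too
```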
@@ -318,6 +318,8 @@ class ToolUsage:
        if self.task:
            self.task.increment_delegations(coworker)

        fingerprint_config = self._build_fingerprint_config()

        if calling.arguments:
            try:
                acceptable_args = tool.args_schema.model_json_schema()[

@@ -328,15 +330,16 @@
                    for k, v in calling.arguments.items()
                    if k in acceptable_args
                }
                arguments = self._add_fingerprint_metadata(arguments)
                result = await tool.ainvoke(input=arguments)
                result = await tool.ainvoke(
                    input=arguments, config=fingerprint_config
                )
            except Exception:
                arguments = calling.arguments
                arguments = self._add_fingerprint_metadata(arguments)
                result = await tool.ainvoke(input=arguments)
                result = await tool.ainvoke(
                    input=arguments, config=fingerprint_config
                )
        else:
            arguments = self._add_fingerprint_metadata({})
            result = await tool.ainvoke(input=arguments)
            result = await tool.ainvoke(input={}, config=fingerprint_config)

        if self.tools_handler:
            should_cache = True

@@ -550,6 +553,8 @@
        if self.task:
            self.task.increment_delegations(coworker)

        fingerprint_config = self._build_fingerprint_config()

        if calling.arguments:
            try:
                acceptable_args = tool.args_schema.model_json_schema()[

@@ -560,15 +565,16 @@
                    for k, v in calling.arguments.items()
                    if k in acceptable_args
                }
                arguments = self._add_fingerprint_metadata(arguments)
                result = tool.invoke(input=arguments)
                result = tool.invoke(
                    input=arguments, config=fingerprint_config
                )
            except Exception:
                arguments = calling.arguments
                arguments = self._add_fingerprint_metadata(arguments)
                result = tool.invoke(input=arguments)
                result = tool.invoke(
                    input=arguments, config=fingerprint_config
                )
        else:
            arguments = self._add_fingerprint_metadata({})
            result = tool.invoke(input=arguments)
            result = tool.invoke(input={}, config=fingerprint_config)

        if self.tools_handler:
            should_cache = True

@@ -1008,23 +1014,16 @@

        return event_data

    def _add_fingerprint_metadata(self, arguments: dict[str, Any]) -> dict[str, Any]:
        """Add fingerprint metadata to tool arguments if available.
    def _build_fingerprint_config(self) -> dict[str, Any]:
        """Build fingerprint metadata as a config dict for tool invocation.

        Args:
            arguments: The original tool arguments
        Returns the fingerprint data in a config dict rather than injecting it
        into tool arguments, so it doesn't conflict with strict tool schemas.

        Returns:
            Updated arguments dictionary with fingerprint metadata
            Config dictionary with security_context metadata.
        """
        # Create a shallow copy to avoid modifying the original
        arguments = arguments.copy()

        # Add security metadata under a designated key
        if "security_context" not in arguments:
            arguments["security_context"] = {}

        security_context = arguments["security_context"]
        security_context: dict[str, Any] = {}

        # Add agent fingerprint if available
        if self.agent and hasattr(self.agent, "security_config"):

@@ -1048,4 +1047,4 @@
        except AttributeError:
            pass

        return arguments
        return {"security_context": security_context} if security_context else {}
@@ -1,5 +1,7 @@
from typing import Annotated, Final

from pydantic_core import CoreSchema

from crewai.utilities.printer import PrinterColor

@@ -36,6 +38,25 @@ class _NotSpecified:
    def __repr__(self) -> str:
        return "NOT_SPECIFIED"

    @classmethod
    def __get_pydantic_core_schema__(
        cls, _source_type: object, _handler: object
    ) -> CoreSchema:
        from pydantic_core import core_schema

        def _validate(v: object) -> _NotSpecified:
            if isinstance(v, _NotSpecified) or v == "NOT_SPECIFIED":
                return NOT_SPECIFIED
            raise ValueError(f"Expected NOT_SPECIFIED sentinel, got {type(v).__name__}")

        return core_schema.no_info_plain_validator_function(
            _validate,
            serialization=core_schema.plain_serializer_function_ser_schema(
                lambda v: "NOT_SPECIFIED",
                info_arg=False,
            ),
        )


NOT_SPECIFIED: Final[
    Annotated[
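The `__get_pydantic_core_schema__` hook above teaches Pydantic to round-trip the `NOT_SPECIFIED` sentinel: it serializes to the string `"NOT_SPECIFIED"` and validates back from either the singleton itself or that string. This stdlib-only sketch isolates the validate/serialize pair so the round-trip contract is visible without pydantic-core in the loop:

```python
class _NotSpecified:
    def __repr__(self) -> str:
        return "NOT_SPECIFIED"


NOT_SPECIFIED = _NotSpecified()


def validate(v: object) -> _NotSpecified:
    # accept the singleton or its serialized string form; reject everything else
    if isinstance(v, _NotSpecified) or v == "NOT_SPECIFIED":
        return NOT_SPECIFIED
    raise ValueError(f"Expected NOT_SPECIFIED sentinel, got {type(v).__name__}")


def serialize(v: _NotSpecified) -> str:
    return "NOT_SPECIFIED"
```

Without this, a model dumped to JSON with `context=NOT_SPECIFIED` could not be re-validated back into the identical sentinel, which matters for the checkpoint-restore work elsewhere in this changeset.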
@@ -1,3 +1,3 @@
"""CrewAI development tools."""

__version__ = "1.13.0a6"
__version__ = "1.13.0a7"