Compare commits

...

69 Commits

Author SHA1 Message Date
Brandon Hancock (bhancock_ai)
2816e97753 Merge branch 'main' into fix/clone_when_using_knowledge 2025-01-28 13:08:34 -05:00
Paul Nugent
c3e7a3ec19 Merge pull request #1991 from crewAIInc/feat/update-litellm-for-deepseek-support
update litellm for deepseek
2025-01-28 17:32:05 +00:00
Brandon Hancock
cba8c9faec update litellm 2025-01-28 12:23:06 -05:00
Brandon Hancock (bhancock_ai)
bcb7fb27d0 Fix (#1990)
* Fix

* drop failing files
2025-01-28 11:54:53 -05:00
João Moura
c310044bec preparing new version 2025-01-28 10:29:53 -03:00
Lorenze Jay
c4da244b9a Merge branch 'main' of github.com:crewAIInc/crewAI into fix/clone_when_using_knowledge 2025-01-27 15:58:46 -08:00
Lorenze Jay
6617db78f8 better docs 2025-01-27 15:54:03 -08:00
Lorenze Jay
fd89c3b896 cleanup 2025-01-27 15:48:57 -08:00
Lorenze Jay
b183aaf51d linted 2025-01-27 15:44:31 -08:00
Lorenze Jay
8570461969 checker 2025-01-27 15:39:58 -08:00
Lorenze Jay
b92253bb13 printing agent_copy 2025-01-27 15:16:37 -08:00
Lorenze Jay
42769e8b22 rm print statements 2025-01-27 15:13:52 -08:00
Brandon Hancock (bhancock_ai)
5263df24b6 quick fix for mike (#1987) 2025-01-27 17:41:26 -05:00
Brandon Hancock (bhancock_ai)
dea6ed7ef0 fix issue pointed out by mike (#1986)
* fix issue pointed out by mike

* clean up

* Drop logger

* drop unused imports
2025-01-27 17:35:17 -05:00
Lorenze Jay
ac28f7f4bc just drop clone for now 2025-01-27 13:55:46 -08:00
Brandon Hancock (bhancock_ai)
d3a0dad323 Bugfix/litellm plus generic exceptions (#1965)
* wip

* More clean up

* Fix error

* clean up test

* Improve chat calling messages

* crewai chat improvements

* working but need to clean up

* Clean up chat
2025-01-27 13:41:46 -08:00
Lorenze Jay
9b88bcd97e fixes 2025-01-27 13:31:00 -08:00
Lorenze Jay
1de204eff8 exclude knowledge 2025-01-27 13:26:46 -08:00
Lorenze Jay
f4b7cffb6b try this 2025-01-27 13:22:19 -08:00
Lorenze Jay
1cc9c981e4 WIP: test check with prints 2025-01-27 12:24:13 -08:00
Lorenze Jay
adec0892fa this should fix it ! 2025-01-27 11:33:20 -08:00
Lorenze Jay
cb3865a042 hopefully fixes test 2025-01-27 09:28:20 -08:00
Lorenze Jay
d506bdb749 hopefully fixes test 2025-01-27 09:28:11 -08:00
Lorenze Jay
319128c90d another 2025-01-27 08:58:12 -08:00
Lorenze Jay
6fb654cccd try 2025-01-24 16:07:00 -08:00
Lorenze Jay
0675a2fe04 simple 2025-01-24 15:48:53 -08:00
Lorenze Jay
d438f5a7d4 fix 2025-01-24 15:36:20 -08:00
Lorenze Jay
4ff9d4963c better mocks 2025-01-24 15:30:13 -08:00
Lorenze Jay
079692de35 with fixtures 2025-01-24 15:24:13 -08:00
Lorenze Jay
65b6ff1cc7 fix again 2025-01-24 15:20:45 -08:00
Lorenze Jay
4008ba74f8 patch twice since 2025-01-24 15:13:48 -08:00
Lorenze Jay
24dbdd5686 Merge branch 'main' of github.com:crewAIInc/crewAI into fix/clone_when_using_knowledge 2025-01-24 15:01:33 -08:00
Lorenze Jay
e4b97e328e add fixture to this 2025-01-24 14:54:49 -08:00
Lorenze Jay
27e49300f6 add fixture to this 2025-01-24 14:52:59 -08:00
Lorenze Jay
b87c908434 fix 2025-01-24 14:49:57 -08:00
Lorenze Jay
c6d8c75869 fixed test 2025-01-24 14:43:49 -08:00
Lorenze Jay
849908c7ea remove fixture 2025-01-24 14:28:56 -08:00
Lorenze Jay
ab8d56de4f fix test 2025-01-24 14:25:20 -08:00
Lorenze Jay
79aaab99c4 updated cassette 2025-01-24 14:12:26 -08:00
Lorenze Jay
65d3837c0d Merge branch 'main' of github.com:crewAIInc/crewAI into fix/clone_when_using_knowledge 2025-01-24 14:04:42 -08:00
devin-ai-integration[bot]
67bf4aea56 Add version check to crew_chat.py (#1966)
* Add version check to crew_chat.py with min version 0.98.0

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Fix import sorting in crew_chat.py

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Fix import sorting in crew_chat.py (attempt 3)

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Update error message, add version check helper, fix import sorting

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Fix import sorting with Ruff auto-fix

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Remove poetry check and import comment headers in crew_chat.py

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: brandon@crewai.com <brandon@crewai.com>
2025-01-24 17:04:41 -05:00
Lorenze Jay
e3e62c16d5 normalized name 2025-01-24 13:54:45 -08:00
Lorenze Jay
f34f53fae2 added tests 2025-01-24 12:24:54 -08:00
Lorenze Jay
71246e9de1 fix copy and custom storage 2025-01-24 12:17:31 -08:00
Lorenze Jay
591c4a511b ensure use of other knowledge storage works 2025-01-24 09:45:15 -08:00
Lorenze Jay
c67f75d848 better 2025-01-24 09:42:09 -08:00
Brandon Hancock (bhancock_ai)
8c76bad50f Fix litellm issues to be more broad (#1960)
* Fix litellm issues to be more broad

* Fix tests
2025-01-23 23:32:10 -05:00
Bobby Lindsey
e27a15023c Add SageMaker as a LLM provider (#1947)
* Add SageMaker as a LLM provider

* Removed unnecessary constants; updated docs to align with bootstrap naming convention

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-01-22 14:55:24 -05:00
Brandon Hancock (bhancock_ai)
a836f466f4 Updated calls and added tests to verify (#1953)
* Updated calls and added tests to verify

* Drop unused import
2025-01-22 14:36:15 -05:00
Brandon Hancock (bhancock_ai)
67f0de1f90 Bugfix/kickoff hangs when llm call fails (#1943)
* Wip to address https://github.com/crewAIInc/crewAI/issues/1934

* implement proper try / except

* clean up PR

* add tests

* Fix tests and code that was broken

* mnore clean up

* Fixing tests

* fix stop type errors]

* more fixes
2025-01-22 14:24:00 -05:00
Tony Kipkemboi
c642ebf97e docs: improve formatting and clarity in CLI and Composio Tool docs (#1946)
* docs: improve formatting and clarity in CLI and Composio Tool docs

- Add Terminal label to shell code blocks in CLI docs
- Update Composio Tool title and fix tip formatting

* docs: improve installation guide with virtual environment details

- Update Python version requirements and commands
- Add detailed virtual environment setup instructions
- Clarify project-specific environment activation steps
- Streamline additional tools installation with UV

* docs: simplify installation guide

- Remove redundant virtual environment instructions
- Simplify project creation steps
- Update UV package manager description
2025-01-22 10:30:16 -05:00
Brandon Hancock (bhancock_ai)
a21e310d78 add docs for crewai chat (#1936)
* add docs for crewai chat

* add version number
2025-01-21 11:10:25 -05:00
Abhishek Patil
aba68da542 feat: add Composio docs (#1904)
* feat: update Composio tool docs

* Update composiotool.mdx

* fix: minor changes

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-01-21 11:03:37 -05:00
Sanjeed
e254f11933 Fix wrong llm value in example (#1929)
Original example had `mixtal-llm` which would result in an error.
Replaced with gpt-4o according to https://docs.crewai.com/concepts/llms
2025-01-21 02:55:27 -03:00
João Moura
ab2274caf0 Stateful flows (#1931)
* fix: ensure persisted state overrides class defaults

- Remove early return in Flow.__init__ to allow proper state initialization
- Add test_flow_default_override.py to verify state override behavior
- Fix issue where default values weren't being overridden by persisted state

Fixes the issue where persisted state values weren't properly overriding
class defaults when restarting a flow with a previously saved state ID.

Co-Authored-By: Joe Moura <joao@crewai.com>

* test: improve state restoration verification with has_set_count flag

Co-Authored-By: Joe Moura <joao@crewai.com>

* test: add has_set_count field to PoemState

Co-Authored-By: Joe Moura <joao@crewai.com>

* refactoring test

* fix: ensure persisted state overrides class defaults

- Remove early return in Flow.__init__ to allow proper state initialization
- Add test_flow_default_override.py to verify state override behavior
- Fix issue where default values weren't being overridden by persisted state

Fixes the issue where persisted state values weren't properly overriding
class defaults when restarting a flow with a previously saved state ID.

Co-Authored-By: Joe Moura <joao@crewai.com>

* test: improve state restoration verification with has_set_count flag

Co-Authored-By: Joe Moura <joao@crewai.com>

* test: add has_set_count field to PoemState

Co-Authored-By: Joe Moura <joao@crewai.com>

* refactoring test

* Fixing flow state

* fixing peristed stateful flows

* linter

* type fix

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-01-20 13:30:09 -03:00
Lorenze Jay
dc9d1d6b49 fixed typo 2025-01-19 16:08:39 -08:00
Lorenze Jay
f3004ffb2b fix breakage when cloning agent/crew using knowledge_sources 2025-01-19 15:58:01 -08:00
devin-ai-integration[bot]
3e4f112f39 feat: add colored logging for flow operations (#1923)
* feat: add colored logging for flow operations

- Add flow_id property for easy ID access
- Add yellow colored logging for flow start
- Add bold_yellow colored logging for state operations
- Implement consistent logging across flow lifecycle

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: sort imports to fix lint error

Co-Authored-By: Joe Moura <joao@crewai.com>

* feat: improve flow logging and error handling

- Add centralized logging method for flow events
- Add robust error handling in persistence decorator
- Add consistent log messages and levels
- Add color-coded error messages

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: sort imports and improve error handling

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-01-19 05:50:30 -03:00
João Moura
cc018bf128 updating tools version 2025-01-19 00:36:19 -08:00
devin-ai-integration[bot]
46d3e4d4d9 docs: add flow persistence section (#1922)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-01-19 04:34:58 -03:00
Brandon Hancock (bhancock_ai)
627bb3f5f6 Brandon/new release cleanup (#1918)
* WIP

* fixes to match enterprise changes
2025-01-18 15:46:41 -03:00
João Moura
4a44245de9 preparing new verison 2025-01-18 10:18:56 -08:00
Brandon Hancock (bhancock_ai)
30d027158a Fix union issue that Daniel was running into (#1910) 2025-01-16 15:54:16 -05:00
fzowl
3fecde49b6 feature: Introducing VoyageAI (#1871)
* Introducing VoyageAI's embedding models

* Adding back the whitespaces

* Adding the whitespaces back
2025-01-16 13:49:46 -05:00
Brandon Hancock (bhancock_ai)
cc129a0bce Fix docling issues (#1909)
* Fix docling issues

* update docs
2025-01-16 12:47:59 -05:00
Brandon Hancock (bhancock_ai)
b5779dca12 Fix nested pydantic model issue (#1905)
* Fix nested pydantic model issue

* fix failing tests

* add in vcr

* cleanup

* drop prints

* Fix vcr issues

* added new recordings

* trying to fix vcr

* add in fix from lorenze.
2025-01-16 11:28:58 -05:00
devin-ai-integration[bot]
42311d9c7a Fix SQLite log handling issue causing ValueError: Logs cannot be None in tests (#1899)
* Fix SQLite log handling issue causing ValueError: Logs cannot be None in tests

- Add proper error handling in SQLite storage operations
- Set up isolated test environment with temporary storage directory
- Ensure consistent error messages across all database operations

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Sort imports in conftest.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Convert TokenProcess counters to instance variables to fix callback tracking

Co-Authored-By: Joe Moura <joao@crewai.com>

* refactor: Replace print statements with logging and improve error handling

- Add proper logging setup in kickoff_task_outputs_storage.py
- Replace self._printer.print() with logger calls
- Use appropriate log levels (error/warning)
- Add directory validation in test environment setup
- Maintain consistent error messages with DatabaseError format

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Comprehensive improvements to database and token handling

- Fix SQLite database path handling in storage classes
- Add proper directory creation and error handling
- Improve token tracking with robust type checking
- Convert TokenProcess counters to instance variables
- Add standardized database error handling
- Set up isolated test environment with temporary storage

Resolves test failures in PR #1899

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-01-16 11:18:54 -03:00
devin-ai-integration[bot]
294f2cc3a9 Add @persist decorator with FlowPersistence interface (#1892)
* Add @persist decorator with SQLite persistence

- Add FlowPersistence abstract base class
- Implement SQLiteFlowPersistence backend
- Add @persist decorator for flow state persistence
- Add tests for flow persistence functionality

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix remaining merge conflicts in uv.lock

- Remove stray merge conflict markers
- Keep main's comprehensive platform-specific resolution markers
- Preserve all required dependencies for persistence functionality

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix final CUDA dependency conflicts in uv.lock

- Resolve NVIDIA CUDA solver dependency conflicts
- Use main's comprehensive platform checks
- Ensure all merge conflict markers are removed
- Preserve persistence-related dependencies

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix nvidia-cusparse-cu12 dependency conflicts in uv.lock

- Resolve NVIDIA CUSPARSE dependency conflicts
- Use main's comprehensive platform checks
- Complete systematic check of entire uv.lock file
- Ensure all merge conflict markers are removed

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix triton filelock dependency conflicts in uv.lock

- Resolve triton package filelock dependency conflict
- Use main's comprehensive platform checks
- Complete final systematic check of entire uv.lock file
- Ensure TOML file structure is valid

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix merge conflict in crew_test.py

- Remove duplicate assertion in test_multimodal_agent_live_image_analysis
- Clean up conflict markers
- Preserve test functionality

Co-Authored-By: Joe Moura <joao@crewai.com>

* Clean up trailing merge conflict marker in crew_test.py

- Remove remaining conflict marker at end of file
- Preserve test functionality
- Complete conflict resolution

Co-Authored-By: Joe Moura <joao@crewai.com>

* Improve type safety in persistence implementation and resolve merge conflicts

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Add explicit type casting in _create_initial_state method

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Improve type safety in flow state handling with proper validation

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Improve type system with proper TypeVar scoping and validation

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Improve state restoration logic and add comprehensive tests

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Initialize FlowState instances without passing id to constructor

Co-Authored-By: Joe Moura <joao@crewai.com>

* feat: Add class-level flow persistence decorator with SQLite default

- Add class-level @persist decorator support
- Set SQLiteFlowPersistence as default backend
- Use db_storage_path for consistent database location
- Improve async method handling and type safety
- Add comprehensive docstrings and examples

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Sort imports in decorators.py to fix lint error

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Organize imports according to PEP 8 standard

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Format typing imports with line breaks for better readability

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Simplify import organization to fix lint error

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Fix import sorting using Ruff auto-fix

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-01-16 10:23:46 -03:00
Tony Kipkemboi
3dc442801f Merge pull request #1903 from crewAIInc/tony-docs
fix: add multimodal docs path to mint.json
2025-01-15 14:25:48 -05:00
79 changed files with 7686 additions and 1386 deletions

View File

@@ -43,7 +43,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Embedder Config** _(optional)_ | `embedder_config` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
@@ -152,7 +152,7 @@ agent = Agent(
use_system_prompt=True, # Default: True
tools=[SerperDevTool()], # Optional: List of tools
knowledge_sources=None, # Optional: List of knowledge sources
embedder_config=None, # Optional: Custom embedder configuration
embedder=None, # Optional: Custom embedder configuration
system_template=None, # Optional: Custom system prompt template
prompt_template=None, # Optional: Custom prompt template
response_template=None, # Optional: Custom response template

View File

@@ -12,7 +12,7 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell
```shell Terminal
pip install crewai
```
@@ -20,7 +20,7 @@ pip install crewai
The basic structure of a CrewAI CLI command is:
```shell
```shell Terminal
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
@@ -30,7 +30,7 @@ crewai [COMMAND] [OPTIONS] [ARGUMENTS]
Create a new crew or flow.
```shell
```shell Terminal
crewai create [OPTIONS] TYPE NAME
```
@@ -38,7 +38,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
```shell
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```
@@ -47,14 +47,14 @@ crewai create flow my_new_flow
Show the installed version of CrewAI.
```shell
```shell Terminal
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell
```shell Terminal
crewai version
crewai version --tools
```
@@ -63,7 +63,7 @@ crewai version --tools
Train the crew for a specified number of iterations.
```shell
```shell Terminal
crewai train [OPTIONS]
```
@@ -71,7 +71,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
@@ -79,14 +79,14 @@ crewai train -n 10 -f my_training_data.pkl
Replay the crew execution from a specific task.
```shell
```shell Terminal
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell
```shell Terminal
crewai replay -t task_123456
```
@@ -94,7 +94,7 @@ crewai replay -t task_123456
Retrieve your latest crew.kickoff() task outputs.
```shell
```shell Terminal
crewai log-tasks-outputs
```
@@ -102,7 +102,7 @@ crewai log-tasks-outputs
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```shell
```shell Terminal
crewai reset-memories [OPTIONS]
```
@@ -113,7 +113,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
```shell
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```
@@ -122,7 +122,7 @@ crewai reset-memories --all
Test the crew and evaluate the results.
```shell
```shell Terminal
crewai test [OPTIONS]
```
@@ -130,7 +130,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -138,7 +138,7 @@ crewai test -n 5 -m gpt-3.5-turbo
Run the crew.
```shell
```shell Terminal
crewai run
```
<Note>
@@ -147,7 +147,36 @@ Some commands may require additional configuration or setup within your project
</Note>
### 9. API Keys
### 9. Chat
Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.
```python
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
### 10. API Keys
When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.

View File

@@ -323,6 +323,91 @@ flow.kickoff()
By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Persistence
The @persist decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or method level, providing flexibility in how you manage state persistence.
### Class-Level Persistence
When applied at the class level, the @persist decorator automatically persists all flow method states:
```python
@persist # Using SQLiteFlowPersistence by default
class MyFlow(Flow[MyState]):
@start()
def initialize_flow(self):
# This method will automatically have its state persisted
self.state.counter = 1
print("Initialized flow. State ID:", self.state.id)
@listen(initialize_flow)
def next_step(self):
# The state (including self.state.id) is automatically reloaded
self.state.counter += 1
print("Flow state is persisted. Counter:", self.state.counter)
```
### Method-Level Persistence
For more granular control, you can apply @persist to specific methods:
```python
class AnotherFlow(Flow[dict]):
@persist # Persists only this method's state
@start()
def begin(self):
if "runs" not in self.state:
self.state["runs"] = 0
self.state["runs"] += 1
print("Method-level persisted runs:", self.state["runs"])
```
### How It Works
1. **Unique State Identification**
- Each flow state automatically receives a unique UUID
- The ID is preserved across state updates and method calls
- Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states
2. **Default SQLite Backend**
- SQLiteFlowPersistence is the default storage backend
- States are automatically saved to a local SQLite database
- Robust error handling ensures clear messages if database operations fail
3. **Error Handling**
- Comprehensive error messages for database operations
- Automatic state validation during save and load
- Clear feedback when persistence operations encounter issues
### Important Considerations
- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own FlowPersistence implementation for specialized storage needs (see the sketch below)
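For illustration, here is a minimal sketch of what a custom backend could look like. The method names (`init_db`, `save_state`, `load_state`) and the decorator wiring are assumptions inferred from the interface described above, not the confirmed API; check the `FlowPersistence` base class in your installed version for the exact signatures.
```python Code
# Hypothetical in-memory backend, e.g. for tests. A real implementation
# would subclass FlowPersistence; method names here are assumptions.
from typing import Any, Dict, Optional


class InMemoryFlowPersistence:
    """Keeps flow states in a plain dict instead of the default SQLite store."""

    def __init__(self) -> None:
        self._states: Dict[str, Dict[str, Any]] = {}

    def init_db(self) -> None:
        pass  # nothing to set up for an in-memory store

    def save_state(
        self, flow_uuid: str, method_name: str, state_data: Dict[str, Any]
    ) -> None:
        # Snapshot the state, keyed by the flow's unique ID.
        self._states[flow_uuid] = dict(state_data)

    def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
        return self._states.get(flow_uuid)


# Assumed wiring -- passing the backend to the decorator instead of relying
# on the SQLiteFlowPersistence default:
#
# @persist(InMemoryFlowPersistence())
# class MyFlow(Flow[dict]):
#     ...
```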
### Technical Advantages
1. **Precise Control Through Low-Level Access**
- Direct access to persistence operations for advanced use cases
- Fine-grained control via method-level persistence decorators
- Built-in state inspection and debugging capabilities
- Full visibility into state changes and persistence operations
2. **Enhanced Reliability**
- Automatic state recovery after system failures or restarts
- Transaction-based state updates for data integrity
- Comprehensive error handling with clear error messages
- Robust validation during state save and load operations
3. **Extensible Architecture**
- Customizable persistence backend through FlowPersistence interface
- Support for specialized storage solutions beyond SQLite
- Compatible with both structured (Pydantic) and unstructured (dict) states
- Seamless integration with existing CrewAI flow patterns
The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.
## Flow Control
### Conditional Logic: `or`

View File

@@ -93,6 +93,12 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
Here's another example with the `CrewDoclingSource`. The CrewDoclingSource is actually quite versatile and can handle multiple file formats including TXT, PDF, DOCX, HTML, and more.
<Note>
You need to install `docling` for the following example to work: `uv add docling`
</Note>
```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
@@ -282,6 +288,7 @@ The `embedder` parameter supports various embedding model providers that include
- `ollama`: Local embeddings with Ollama
- `vertexai`: Google Cloud VertexAI embeddings
- `cohere`: Cohere's embedding models
- `voyageai`: VoyageAI's embedding models
- `bedrock`: AWS Bedrock embeddings
- `huggingface`: Hugging Face models
- `watson`: IBM Watson embeddings
@@ -317,6 +324,13 @@ agent = Agent(
verbose=True,
allow_delegation=False,
llm=gemini_llm,
embedder={
"provider": "google",
"config": {
"model": "models/text-embedding-004",
"api_key": GEMINI_API_KEY,
}
}
)
task = Task(

View File

@@ -243,6 +243,9 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Amazon SageMaker Models - Enterprise-grade
# llm: sagemaker/<my-endpoint>
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
@@ -506,6 +509,21 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Amazon SageMaker">
```shell Terminal
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
model="sagemaker/<my-endpoint>"
)
```
</Accordion>
<Accordion title="Mistral">
```python Code

View File

@@ -293,6 +293,26 @@ my_crew = Crew(
}
)
```
### Using VoyageAI embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "voyageai",
"config": {
"api_key": "YOUR_API_KEY",
"model_name": "<model_name>"
}
}
)
```
### Using HuggingFace embeddings
```python Code

View File

@@ -23,6 +23,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI

View File

@@ -15,10 +15,48 @@ icon: wrench
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
# Setting Up Your Environment
Before installing CrewAI, it's recommended to set up a virtual environment. This helps isolate your project dependencies and avoid conflicts.
<Steps>
<Step title="Create a Virtual Environment">
Choose your preferred method to create a virtual environment:
**Using venv (Python's built-in tool):**
```shell Terminal
python3 -m venv .venv
```
**Using conda:**
```shell Terminal
conda create -n crewai-env python=3.12
```
</Step>
<Step title="Activate the Virtual Environment">
Activate your virtual environment based on your platform:
**On macOS/Linux (venv):**
```shell Terminal
source .venv/bin/activate
```
**On Windows (venv):**
```shell Terminal
.venv\Scripts\activate
```
**Using conda (all platforms):**
```shell Terminal
conda activate crewai-env
```
</Step>
</Steps>
# Installing CrewAI
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get you set up! 🚀
Now let's get you set up! 🚀
<Steps>
<Step title="Install CrewAI">
@@ -72,9 +110,9 @@ Let's get you set up! 🚀
# Creating a New Project
<Info>
<Tip>
We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
</Info>
</Tip>
<Steps>
<Step title="Generate Project Structure">
@@ -104,7 +142,18 @@ Let's get you set up! 🚀
└── tasks.yaml
```
</Frame>
</Step>
</Step>
<Step title="Install Additional Tools">
You can install additional tools using UV:
```shell Terminal
uv add <tool-name>
```
<Tip>
UV is our preferred package manager as it's significantly faster than pip and provides better dependency resolution.
</Tip>
</Step>
<Step title="Customize Your Project">
Your project will contain these essential files:

View File

@@ -278,7 +278,7 @@ email_summarizer:
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: mixtal_llm
llm: openai/gpt-4o
```
<Tip>

View File

@@ -1,78 +1,118 @@
---
title: Composio Tool
description: The `ComposioTool` is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
description: Composio provides 250+ production-ready tools for AI agents with flexible authentication management.
icon: gear-code
---
# `ComposioTool`
# `ComposioToolSet`
## Description
Composio is an integration platform that allows you to connect your AI agents to 250+ tools. Key features include:
This tools is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
- **Enterprise-Grade Authentication**: Built-in support for OAuth, API Keys, JWT with automatic token refresh
- **Full Observability**: Detailed tool usage logs, execution timestamps, and more
## Installation
To incorporate this tool into your project, follow the installation instructions below:
To incorporate Composio tools into your project, follow the instructions below:
```shell
pip install composio-core
pip install 'crewai[tools]'
pip install composio-crewai
pip install crewai
```
after the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`.
After the installation is complete, either run `composio login` or export your Composio API key as `COMPOSIO_API_KEY`. You can get your Composio API key from [here](https://app.composio.dev).
## Example
The following example demonstrates how to initialize the tool and execute a github action:
1. Initialize Composio tools
1. Initialize Composio toolset
```python Code
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task
from composio_crewai import ComposioToolSet, App, Action
from crewai import Agent, Task, Crew
tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER)]
toolset = ComposioToolSet()
```
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
2. Connect your GitHub account
<CodeGroup>
```shell CLI
composio add github
```
```python Code
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```
</CodeGroup>
or use `use_case` to search relevant actions
3. Get Tools
- Retrieving all the tools from an app (not recommended for production):
```python Code
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
tools = toolset.get_tools(apps=[App.GITHUB])
```
2. Define agent
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we will use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define agent
```python Code
crewai_agent = Agent(
role="Github Agent",
goal="You take action on Github using Github APIs",
backstory=(
"You are AI agent that is responsible for taking actions on Github "
"on users behalf. You need to take action on Github using Github APIs"
),
role="GitHub Agent",
goal="You take action on GitHub using GitHub APIs",
backstory="You are AI agent that is responsible for taking actions on GitHub on behalf of users using GitHub APIs",
verbose=True,
tools=tools,
llm= # pass an llm
)
```
3. Execute task
5. Execute task
```python Code
task = Task(
description="Star a repo ComposioHQ/composio on GitHub",
description="Star a repo composiohq/composio on GitHub",
agent=crewai_agent,
expected_output="if the star happened",
expected_output="Status of the operation",
)
task.execute()
crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* More detailed list of tools can be found [here](https://app.composio.dev)

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.95.0"
version = "0.100.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -11,27 +11,22 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.57.4",
"litellm==1.59.8",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
"regex>=2024.9.11",
# Telemetry and Monitoring
"opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
# Data Handling
"chromadb>=0.5.23",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
"auth0-python>=4.7.1",
"python-dotenv>=1.0.0",
# Configuration and Utils
"click>=8.1.7",
"appdirs>=1.4.4",
@@ -40,7 +35,8 @@ dependencies = [
"uv>=0.4.25",
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"blinker>=1.9.0"
"blinker>=1.9.0",
"json5>=0.10.0",
]
[project.urls]
@@ -49,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.25.5"]
tools = ["crewai-tools>=0.32.1"]
embeddings = [
"tiktoken~=0.7.0"
]

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.95.0"
__version__ = "0.100.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -1,4 +1,3 @@
import os
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Union
@@ -8,7 +7,6 @@ from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.cli.constants import ENV_VARS, LITELLM_PARAMS
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
@@ -63,6 +61,7 @@ class Agent(BaseAgent):
tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution.
knowledge_sources: Knowledge sources for the agent.
embedder: Embedder configuration for the agent.
"""
_times_executed: int = PrivateAttr(default=0)
@@ -124,17 +123,10 @@ class Agent(BaseAgent):
default="safe",
description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
)
embedder_config: Optional[Dict[str, Any]] = Field(
embedder: Optional[Dict[str, Any]] = Field(
default=None,
description="Embedder configuration for the agent.",
)
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
default=None,
description="Knowledge sources for the agent.",
)
_knowledge: Optional[Knowledge] = PrivateAttr(
default=None,
)
@model_validator(mode="after")
def post_init_setup(self):
@@ -165,10 +157,11 @@ class Agent(BaseAgent):
if isinstance(self.knowledge_sources, list) and all(
isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
):
self._knowledge = Knowledge(
self.knowledge = Knowledge(
sources=self.knowledge_sources,
embedder_config=self.embedder_config,
embedder=self.embedder,
collection_name=knowledge_agent_name,
storage=self.knowledge_storage or None,
)
except (TypeError, ValueError) as e:
raise ValueError(f"Invalid Knowledge Configuration: {str(e)}")
@@ -227,8 +220,8 @@ class Agent(BaseAgent):
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
if self._knowledge:
agent_knowledge_snippets = self._knowledge.query([task.prompt()])
if self.knowledge:
agent_knowledge_snippets = self.knowledge.query([task.prompt()])
if agent_knowledge_snippets:
agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
@@ -261,6 +254,9 @@ class Agent(BaseAgent):
}
)["output"]
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
self._times_executed += 1
if self._times_executed > self.max_retry_limit:
raise e

View File

@@ -18,6 +18,8 @@ from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.tools import BaseTool
from crewai.tools.base_tool import Tool
from crewai.utilities import I18N, Logger, RPMController
@@ -48,6 +50,8 @@ class BaseAgent(ABC, BaseModel):
cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class.
tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class.
max_tokens: Maximum number of tokens for the agent to generate in a response.
knowledge_sources: Knowledge sources for the agent.
knowledge_storage: Custom knowledge storage for the agent.
Methods:
@@ -130,6 +134,17 @@ class BaseAgent(ABC, BaseModel):
max_tokens: Optional[int] = Field(
default=None, description="Maximum number of tokens for the agent's execution."
)
knowledge: Optional[Knowledge] = Field(
default=None, description="Knowledge for the agent."
)
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
default=None,
description="Knowledge sources for the agent.",
)
knowledge_storage: Optional[Any] = Field(
default=None,
description="Custom knowledge storage for the agent.",
)
@model_validator(mode="before")
@classmethod
@@ -256,13 +271,44 @@ class BaseAgent(ABC, BaseModel):
"tools_handler",
"cache_handler",
"llm",
"knowledge_sources",
"knowledge_storage",
"knowledge",
}
# Copy llm and clear callbacks
# Copy llm
existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
existing_knowledge_sources = []
for source in self.knowledge_sources:
copied_source = (
source.model_copy()
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
copied_agent = type(self)(
**copied_data,
llm=existing_llm,
tools=self.tools,
knowledge_sources=existing_knowledge_sources,
knowledge=copied_knowledge,
knowledge_storage=copied_knowledge_storage,
)
return copied_agent
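As an editor's illustration of the behavior this copy logic is after, the hedged sketch below clones an agent with knowledge sources and checks that every copied source shares one storage instance. It assumes `Agent.copy()` is the public entry point for the method shown above and that `StringKnowledgeSource` is available; a real run may also need embedder credentials configured.
```python Code
# Sketch only: verifies that cloned knowledge sources share storage,
# mirroring the shared_storage logic in the diff above.
from crewai import Agent
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(content="John lives in San Francisco.")
agent = Agent(
    role="Researcher",
    goal="Answer questions using the provided knowledge",
    backstory="A detail-oriented analyst.",
    knowledge_sources=[source],
)

clone = agent.copy()  # assumed name of the cloning entry point

# All copied sources should point at one and the same storage object:
storages = {id(s.storage) for s in (clone.knowledge_sources or [])}
assert len(storages) <= 1
```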

View File

@@ -25,7 +25,7 @@ class OutputConverter(BaseModel, ABC):
llm: Any = Field(description="The language model to be used to convert the text.")
model: Any = Field(description="The model to be used to convert the text.")
instructions: str = Field(description="Conversion instructions to the LLM.")
max_attempts: Optional[int] = Field(
max_attempts: int = Field(
description="Max number of attempts to try to get the output formatted.",
default=3,
)

View File

@@ -2,25 +2,26 @@ from crewai.types.usage_metrics import UsageMetrics
class TokenProcess:
total_tokens: int = 0
prompt_tokens: int = 0
cached_prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
def __init__(self) -> None:
self.total_tokens: int = 0
self.prompt_tokens: int = 0
self.cached_prompt_tokens: int = 0
self.completion_tokens: int = 0
self.successful_requests: int = 0
def sum_prompt_tokens(self, tokens: int):
self.prompt_tokens = self.prompt_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_prompt_tokens(self, tokens: int) -> None:
self.prompt_tokens += tokens
self.total_tokens += tokens
def sum_completion_tokens(self, tokens: int):
self.completion_tokens = self.completion_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_completion_tokens(self, tokens: int) -> None:
self.completion_tokens += tokens
self.total_tokens += tokens
def sum_cached_prompt_tokens(self, tokens: int):
self.cached_prompt_tokens = self.cached_prompt_tokens + tokens
def sum_cached_prompt_tokens(self, tokens: int) -> None:
self.cached_prompt_tokens += tokens
def sum_successful_requests(self, requests: int):
self.successful_requests = self.successful_requests + requests
def sum_successful_requests(self, requests: int) -> None:
self.successful_requests += requests
def get_summary(self) -> UsageMetrics:
return UsageMetrics(
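The commit trail says this refactor fixes callback token tracking; a plausible reading (an editor's assumption, not stated in the diff) is the standard Python pitfall that class-level defaults are shared by all instances until an instance assignment shadows them. A minimal, generic illustration:
```python Code
# Class attributes are shared; attributes set in __init__ are per-instance.
class WithClassDefaults:
    total_tokens: int = 0  # lives on the class object


class WithInstanceInit:
    def __init__(self) -> None:
        self.total_tokens: int = 0  # lives on each instance


a, b = WithClassDefaults(), WithClassDefaults()
WithClassDefaults.total_tokens = 99      # mutate the shared class value
print(a.total_tokens, b.total_tokens)    # 99 99 -- both read the class attribute

c, d = WithInstanceInit(), WithInstanceInit()
WithInstanceInit.total_tokens = 99       # type: ignore[attr-defined]
print(c.total_tokens, d.total_tokens)    # 0 0 -- instance values shadow it
```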

View File

@@ -13,6 +13,7 @@ from crewai.agents.parser import (
OutputParserException,
)
from crewai.agents.tools_handler import ToolsHandler
from crewai.llm import LLM
from crewai.tools.base_tool import BaseTool
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N, Printer
@@ -54,7 +55,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
callbacks: List[Any] = [],
):
self._i18n: I18N = I18N()
self.llm = llm
self.llm: LLM = llm
self.task = task
self.agent = agent
self.crew = crew
@@ -80,10 +81,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.tool_name_to_tool_map: Dict[str, BaseTool] = {
tool.name: tool for tool in self.tools
}
if self.llm.stop:
self.llm.stop = list(set(self.llm.stop + self.stop))
else:
self.llm.stop = self.stop
self.stop = stop_words
self.llm.stop = list(set(self.llm.stop + self.stop))
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
if "system" in self.prompt:
@@ -98,7 +97,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
formatted_answer = self._invoke_loop()
try:
formatted_answer = self._invoke_loop()
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
else:
self._handle_unknown_error(e)
raise e
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
@@ -124,7 +132,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._enforce_rpm_limit()
answer = self._get_llm_response()
formatted_answer = self._process_llm_response(answer)
if isinstance(formatted_answer, AgentAction):
@@ -142,13 +149,32 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer = self._handle_output_parser_exception(e)
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if self._is_context_length_exceeded(e):
self._handle_context_length()
continue
else:
self._handle_unknown_error(e)
raise e
finally:
self.iterations += 1
self._show_logs(formatted_answer)
return formatted_answer
def _handle_unknown_error(self, exception: Exception) -> None:
"""Handle unknown errors by informing the user."""
self._printer.print(
content="An unknown error occurred. Please check the details below.",
color="red",
)
self._printer.print(
content=f"Error details: {exception}",
color="red",
)
def _has_reached_max_iterations(self) -> bool:
"""Check if the maximum number of iterations has been reached."""
return self.iterations >= self.max_iter
@@ -160,10 +186,17 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
def _get_llm_response(self) -> str:
"""Call the LLM and return the response, handling any invalid responses."""
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
try:
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
except Exception as e:
self._printer.print(
content=f"Error during LLM call: {e}",
color="red",
)
raise e
if not answer:
self._printer.print(
@@ -184,7 +217,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error:
answer = answer.split("Observation:")[0].strip()
self.iterations += 1
return self._format_answer(answer)
def _handle_agent_action(

View File

@@ -350,7 +350,10 @@ def chat():
Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses.
"""
click.echo("Starting a conversation with the Crew")
click.secho(
"\nStarting a conversation with the Crew\n" "Type 'exit' or Ctrl+C to quit.\n",
)
run_chat()

View File

@@ -1,17 +1,52 @@
import json
import platform
import re
import sys
import threading
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple
import click
import tomli
from packaging import version
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
MIN_REQUIRED_VERSION = "0.98.0"
def check_conversational_crews_version(
crewai_version: str, pyproject_data: dict
) -> bool:
"""
Check if the installed crewAI version supports conversational crews.
Args:
crewai_version: The current version of crewAI.
pyproject_data: Dictionary containing pyproject.toml data.
Returns:
bool: True if version check passes, False otherwise.
"""
try:
if version.parse(crewai_version) < version.parse(MIN_REQUIRED_VERSION):
click.secho(
"You are using an older version of crewAI that doesn't support conversational crews. "
"Run 'uv upgrade crewai' to get the latest version.",
fg="red",
)
return False
except version.InvalidVersion:
click.secho("Invalid crewAI version format detected.", fg="red")
return False
return True
def run_chat():
"""
@@ -19,20 +54,47 @@ def run_chat():
Incorporates crew_name, crew_description, and input fields to build a tool schema.
Exits if crew_name or crew_description are missing.
"""
crewai_version = get_crewai_version()
pyproject_data = read_toml()
if not check_conversational_crews_version(crewai_version, pyproject_data):
return
crew, crew_name = load_crew_and_name()
chat_llm = initialize_chat_llm(crew)
if not chat_llm:
return
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
# Indicate that the crew is being analyzed
click.secho(
"\nAnalyzing crew and required inputs - this may take 3 to 30 seconds "
"depending on the complexity of your crew.",
fg="white",
)
click.secho(f"\nAssistant: {introductory_message}\n", fg="green")
# Start loading indicator
loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
loading_thread.start()
try:
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
)
finally:
# Stop loading indicator
loading_complete.set()
loading_thread.join()
# Indicate that the analysis is complete
click.secho("\nFinished analyzing crew.\n", fg="white")
click.secho(f"Assistant: {introductory_message}\n", fg="green")
messages = [
{"role": "system", "content": system_message},
@@ -43,15 +105,17 @@ def run_chat():
crew_chat_inputs.crew_name: create_tool_function(crew, messages),
}
click.secho(
"\nEntering an interactive chat loop with function-calling.\n"
"Type 'exit' or Ctrl+C to quit.\n",
fg="cyan",
)
chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def show_loading(event: threading.Event):
"""Display animated loading dots while processing."""
while not event.is_set():
print(".", end="", flush=True)
time.sleep(1)
print()
def initialize_chat_llm(crew: Crew) -> Optional[LLM]:
"""Initializes the chat LLM and handles exceptions."""
try:
@@ -85,7 +149,7 @@ def build_system_message(crew_chat_inputs: ChatInputs) -> str:
"Please keep your responses concise and friendly. "
"If a user asks a question outside the crew's scope, provide a brief answer and remind them of the crew's purpose. "
"After calling the tool, be prepared to take user feedback and make adjustments as needed. "
"If you are ever unsure about a user's request or need clarification, ask the user for more information."
"If you are ever unsure about a user's request or need clarification, ask the user for more information. "
"Before doing anything else, introduce yourself with a friendly message like: 'Hey! I'm here to help you with [crew's purpose]. Could you please provide me with [inputs] so we can get started?' "
"For example: 'Hey! I'm here to help you with uncovering and reporting cutting-edge developments through thorough research and detailed analysis. Could you please provide me with a topic you're interested in? This will help us generate a comprehensive research report and detailed analysis.'"
f"\nCrew Name: {crew_chat_inputs.crew_name}"
@@ -102,25 +166,33 @@ def create_tool_function(crew: Crew, messages: List[Dict[str, str]]) -> Any:
return run_crew_tool_with_messages
def flush_input():
"""Flush any pending input from the user."""
if platform.system() == "Windows":
# Windows platform
import msvcrt
while msvcrt.kbhit():
msvcrt.getch()
else:
# Unix-like platforms (Linux, macOS)
import termios
termios.tcflush(sys.stdin, termios.TCIFLUSH)
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
"""Main chat loop for interacting with the user."""
while True:
try:
user_input = click.prompt("You", type=str)
if user_input.strip().lower() in ["exit", "quit"]:
click.echo("Exiting chat. Goodbye!")
break
# Flush any pending input before accepting new input
flush_input()
messages.append({"role": "user", "content": user_input})
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
user_input = get_user_input()
handle_user_input(
user_input, chat_llm, messages, crew_tool_schema, available_functions
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
except KeyboardInterrupt:
click.echo("\nExiting chat. Goodbye!")
break
@@ -129,6 +201,55 @@ def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
break
def get_user_input() -> str:
"""Collect multi-line user input with exit handling."""
click.secho(
"\nYou (type your message below. Press 'Enter' twice when you're done):",
fg="blue",
)
user_input_lines = []
while True:
line = input()
if line.strip().lower() == "exit":
return "exit"
if line == "":
break
user_input_lines.append(line)
return "\n".join(user_input_lines)
def handle_user_input(
user_input: str,
chat_llm: LLM,
messages: List[Dict[str, str]],
crew_tool_schema: Dict[str, Any],
available_functions: Dict[str, Any],
) -> None:
if user_input.strip().lower() == "exit":
click.echo("Exiting chat. Goodbye!")
return
if not user_input.strip():
click.echo("Empty message. Please provide input or type 'exit' to quit.")
return
messages.append({"role": "user", "content": user_input})
# Indicate that assistant is processing
click.echo()
click.secho("Assistant is processing your input. Please wait...", fg="green")
# Process assistant's response
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
"""
Dynamically build a LiteLLM 'function' schema for the given crew.
@@ -323,10 +444,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
):
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
@@ -337,10 +458,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
or f"{{{input_name}}}" in agent.backstory
):
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
@@ -381,18 +502,20 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
for task in crew.tasks:
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_backstory = placeholder_pattern.sub(lambda m: m.group(1), agent.backstory)
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")

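For orientation, the schema built by generate_crew_tool_schema follows the standard LiteLLM/OpenAI function-calling format. A minimal sketch with a hypothetical crew name and input, not taken from the implementation:

```python
# Hypothetical shape of the schema generate_crew_tool_schema produces.
crew_tool_schema = {
    "type": "function",
    "function": {
        "name": "research_crew",  # hypothetical crew name
        "description": "Runs the research crew.",  # derived from the crew description
        "parameters": {
            "type": "object",
            "properties": {
                "topic": {"type": "string", "description": "Topic to research."},
            },
            "required": ["topic"],
        },
    },
}
```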
View File

@@ -1,2 +1,3 @@
.env
__pycache__/
.DS_Store

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.95.0,<1.0.0"
"crewai[tools]>=0.100.0,<1.0.0"
]
[project.scripts]

View File

@@ -1,3 +1,4 @@
.env
__pycache__/
lib/
.DS_Store

View File

@@ -3,7 +3,7 @@ from random import randint
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from crewai.flow import Flow, listen, start
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.95.0,<1.0.0",
"crewai[tools]>=0.100.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.95.0"
"crewai[tools]>=0.100.0"
]
[tool.crewai]

View File

@@ -4,6 +4,7 @@ import re
import uuid
import warnings
from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
@@ -37,7 +38,6 @@ from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.types.crew_chat import ChatInputs
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINING_DATA_FILE
@@ -84,6 +84,7 @@ class Crew(BaseModel):
step_callback: Callback to be executed after each step for every agent's execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
planning: Plan the crew execution and add the plan to the crew.
chat_llm: The language model used for orchestrating chat interactions with the crew.
"""
__hash__ = object.__hash__ # type: ignore
@@ -210,8 +211,9 @@ class Crew(BaseModel):
default=None,
description="LLM used to handle chatting with the crew.",
)
_knowledge: Optional[Knowledge] = PrivateAttr(
knowledge: Optional[Knowledge] = Field(
default=None,
description="Knowledge for the crew.",
)
@field_validator("id", mode="before")
@@ -289,7 +291,7 @@ class Crew(BaseModel):
if isinstance(self.knowledge_sources, list) and all(
isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
):
self._knowledge = Knowledge(
self.knowledge = Knowledge(
sources=self.knowledge_sources,
embedder_config=self.embedder,
collection_name="crew",
@@ -991,8 +993,8 @@ class Crew(BaseModel):
return result
def query_knowledge(self, query: List[str]) -> Union[List[Dict[str, Any]], None]:
if self._knowledge:
return self._knowledge.query(query)
if self.knowledge:
return self.knowledge.query(query)
return None
def fetch_inputs(self) -> Set[str]:
@@ -1036,6 +1038,8 @@ class Crew(BaseModel):
"_telemetry",
"agents",
"tasks",
"knowledge_sources",
"knowledge",
}
cloned_agents = [agent.copy() for agent in self.agents]
@@ -1043,6 +1047,9 @@ class Crew(BaseModel):
task_mapping = {}
cloned_tasks = []
existing_knowledge_sources = shallow_copy(self.knowledge_sources)
existing_knowledge = shallow_copy(self.knowledge)
for task in self.tasks:
cloned_task = task.copy(cloned_agents, task_mapping)
cloned_tasks.append(cloned_task)
@@ -1062,7 +1069,13 @@ class Crew(BaseModel):
copied_data.pop("agents", None)
copied_data.pop("tasks", None)
copied_crew = Crew(**copied_data, agents=cloned_agents, tasks=cloned_tasks)
copied_crew = Crew(
**copied_data,
agents=cloned_agents,
tasks=cloned_tasks,
knowledge_sources=existing_knowledge_sources,
knowledge=existing_knowledge,
)
return copied_crew

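A hedged sketch of the behavior this change restores: cloning a crew keeps its knowledge sources (re-attached as shallow copies) instead of dropping them. Assumes an embedder/API key is configured so knowledge initialization succeeds.

```python
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(content="The product launched in 2024.")
agent = Agent(role="Analyst", goal="Answer questions", backstory="Detail-oriented")
task = Task(description="When did the product launch?", expected_output="A year", agent=agent)
crew = Crew(agents=[agent], tasks=[task], knowledge_sources=[source])

clone = crew.copy()
# knowledge/knowledge_sources are excluded from the deep copy and carried over shallowly
assert clone.knowledge_sources == crew.knowledge_sources
```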
View File

@@ -1,3 +1,5 @@
from crewai.flow.flow import Flow
from crewai.flow.flow import Flow, start, listen, or_, and_, router
from crewai.flow.persistence import persist
__all__ = ["Flow", "start", "listen", "or_", "and_", "router", "persist"]
__all__ = ["Flow"]

View File

@@ -1,5 +1,6 @@
import asyncio
import inspect
import logging
from typing import (
Any,
Callable,
@@ -25,15 +26,70 @@ from crewai.flow.flow_events import (
MethodExecutionStartedEvent,
)
from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry
from crewai.utilities.printer import Printer
logger = logging.getLogger(__name__)
class FlowState(BaseModel):
"""Base model for all flow states, ensuring each state has a unique ID."""
id: str = Field(default_factory=lambda: str(uuid4()), description="Unique identifier for the flow state")
T = TypeVar("T", bound=Union[FlowState, Dict[str, Any]])
id: str = Field(
default_factory=lambda: str(uuid4()),
description="Unique identifier for the flow state",
)
# Type variables with explicit bounds
T = TypeVar(
"T", bound=Union[Dict[str, Any], BaseModel]
) # Generic flow state type parameter
StateT = TypeVar(
"StateT", bound=Union[Dict[str, Any], BaseModel]
) # State validation type parameter
def ensure_state_type(state: Any, expected_type: Type[StateT]) -> StateT:
"""Ensure state matches expected type with proper validation.
Args:
state: State instance to validate
expected_type: Expected type for the state
Returns:
Validated state instance
Raises:
TypeError: If state doesn't match expected type
ValueError: If state validation fails
"""
"""Ensure state matches expected type with proper validation.
Args:
state: State instance to validate
expected_type: Expected type for the state
Returns:
Validated state instance
Raises:
TypeError: If state doesn't match expected type
ValueError: If state validation fails
"""
if expected_type is dict:
if not isinstance(state, dict):
raise TypeError(f"Expected dict, got {type(state).__name__}")
return cast(StateT, state)
if isinstance(expected_type, type) and issubclass(expected_type, BaseModel):
if not isinstance(state, expected_type):
raise TypeError(
f"Expected {expected_type.__name__}, got {type(state).__name__}"
)
return cast(StateT, state)
raise TypeError(f"Invalid expected_type: {expected_type}")
def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
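A small illustration of the validator above, assuming it is importable from crewai.flow.flow:

```python
from pydantic import BaseModel
from crewai.flow.flow import ensure_state_type

class CounterState(BaseModel):
    count: int = 0

ensure_state_type({"id": "abc", "count": 1}, dict)      # returns the dict unchanged
ensure_state_type(CounterState(count=2), CounterState)  # returns the model instance
# ensure_state_type({"count": 3}, CounterState)         # would raise TypeError
```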
@@ -77,6 +133,7 @@ def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
>>> def complex_start(self):
... pass
"""
def decorator(func):
func.__is_start_method__ = True
if condition is not None:
@@ -101,6 +158,7 @@ def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
return decorator
def listen(condition: Union[str, dict, Callable]) -> Callable:
"""
Creates a listener that executes when specified conditions are met.
@@ -137,6 +195,7 @@ def listen(condition: Union[str, dict, Callable]) -> Callable:
>>> def handle_completion(self):
... pass
"""
def decorator(func):
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
@@ -201,6 +260,7 @@ def router(condition: Union[str, dict, Callable]) -> Callable:
... return CONTINUE
... return STOP
"""
def decorator(func):
func.__is_router__ = True
if isinstance(condition, str):
@@ -224,6 +284,7 @@ def router(condition: Union[str, dict, Callable]) -> Callable:
return decorator
def or_(*conditions: Union[str, dict, Callable]) -> dict:
"""
Combines multiple conditions with OR logic for flow control.
@@ -326,21 +387,32 @@ class FlowMeta(type):
routers = set()
for attr_name, attr_value in dct.items():
if hasattr(attr_value, "__is_start_method__"):
start_methods.append(attr_name)
# Check for any flow-related attributes
if (
hasattr(attr_value, "__is_flow_method__")
or hasattr(attr_value, "__is_start_method__")
or hasattr(attr_value, "__trigger_methods__")
or hasattr(attr_value, "__is_router__")
):
# Register start methods
if hasattr(attr_value, "__is_start_method__"):
start_methods.append(attr_name)
# Register listeners and routers
if hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
elif hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
if hasattr(attr_value, "__is_router__") and attr_value.__is_router__:
routers.add(attr_name)
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
if (
hasattr(attr_value, "__is_router__")
and attr_value.__is_router__
):
routers.add(attr_name)
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
setattr(cls, "_start_methods", start_methods)
setattr(cls, "_listeners", listeners)
@@ -351,7 +423,12 @@ class FlowMeta(type):
class Flow(Generic[T], metaclass=FlowMeta):
"""Base class for all flows.
Type parameter T must be either Dict[str, Any] or a subclass of BaseModel."""
_telemetry = Telemetry()
_printer = Printer()
_start_methods: List[str] = []
_listeners: Dict[str, tuple[str, List[str]]] = {}
@@ -367,53 +444,130 @@ class Flow(Generic[T], metaclass=FlowMeta):
_FlowGeneric.__name__ = f"{cls.__name__}[{item.__name__}]"
return _FlowGeneric
def __init__(self) -> None:
def __init__(
self,
persistence: Optional[FlowPersistence] = None,
**kwargs: Any,
) -> None:
"""Initialize a new Flow instance.
Args:
persistence: Optional persistence backend for storing flow states
**kwargs: Additional state values to initialize or override
"""
# Initialize basic instance attributes
self._methods: Dict[str, Callable] = {}
self._state: T = self._create_initial_state()
self._method_execution_counts: Dict[str, int] = {}
self._pending_and_listeners: Dict[str, Set[str]] = {}
self._method_outputs: List[Any] = [] # List to store all method outputs
self._persistence: Optional[FlowPersistence] = persistence
# Initialize state with initial values
self._state = self._create_initial_state()
# Apply any additional kwargs
if kwargs:
self._initialize_state(kwargs)
self._telemetry.flow_creation_span(self.__class__.__name__)
# Register all flow-related methods
for method_name in dir(self):
if callable(getattr(self, method_name)) and not method_name.startswith(
"__"
):
self._methods[method_name] = getattr(self, method_name)
if not method_name.startswith("_"):
method = getattr(self, method_name)
# Check for any flow-related attributes
if (
hasattr(method, "__is_flow_method__")
or hasattr(method, "__is_start_method__")
or hasattr(method, "__trigger_methods__")
or hasattr(method, "__is_router__")
):
# Ensure method is bound to this instance
if not hasattr(method, "__self__"):
method = method.__get__(self, self.__class__)
self._methods[method_name] = method
def _create_initial_state(self) -> T:
"""Create and initialize flow state with UUID and default values.
Returns:
New state instance with UUID and default values initialized
Raises:
ValueError: If structured state model lacks 'id' field
TypeError: If state is neither BaseModel nor dictionary
"""
# Handle case where initial_state is None but we have a type parameter
if self.initial_state is None and hasattr(self, "_initial_state_T"):
state_type = getattr(self, "_initial_state_T")
if isinstance(state_type, type):
if issubclass(state_type, FlowState):
return state_type() # type: ignore
# Create instance without id, then set it
instance = state_type()
if not hasattr(instance, "id"):
setattr(instance, "id", str(uuid4()))
return cast(T, instance)
elif issubclass(state_type, BaseModel):
# Create a new type that includes the ID field
class StateWithId(state_type, FlowState): # type: ignore
pass
return StateWithId() # type: ignore
instance = StateWithId()
if not hasattr(instance, "id"):
setattr(instance, "id", str(uuid4()))
return cast(T, instance)
elif state_type is dict:
return cast(T, {"id": str(uuid4())})
# Handle case where no initial state is provided
if self.initial_state is None:
return {"id": str(uuid4())} # type: ignore
return cast(T, {"id": str(uuid4())})
# Handle case where initial_state is a type (class)
if isinstance(self.initial_state, type):
if issubclass(self.initial_state, FlowState):
return self.initial_state() # type: ignore
return cast(T, self.initial_state()) # Uses model defaults
elif issubclass(self.initial_state, BaseModel):
# Create a new type that includes the ID field
class StateWithId(self.initial_state, FlowState): # type: ignore
pass
return StateWithId() # type: ignore
# Validate that the model has an id field
model_fields = getattr(self.initial_state, "model_fields", None)
if not model_fields or "id" not in model_fields:
raise ValueError("Flow state model must have an 'id' field")
return cast(T, self.initial_state()) # Uses model defaults
elif self.initial_state is dict:
return cast(T, {"id": str(uuid4())})
# Handle dictionary case
if isinstance(self.initial_state, dict) and "id" not in self.initial_state:
self.initial_state["id"] = str(uuid4())
# Handle dictionary instance case
if isinstance(self.initial_state, dict):
new_state = dict(self.initial_state) # Copy to avoid mutations
if "id" not in new_state:
new_state["id"] = str(uuid4())
return cast(T, new_state)
return self.initial_state # type: ignore
# Handle BaseModel instance case
if isinstance(self.initial_state, BaseModel):
model = cast(BaseModel, self.initial_state)
if not hasattr(model, "id"):
raise ValueError("Flow state model must have an 'id' field")
# Create new instance with same values to avoid mutations
if hasattr(model, "model_dump"):
# Pydantic v2
state_dict = model.model_dump()
elif hasattr(model, "dict"):
# Pydantic v1
state_dict = model.dict()
else:
# Fallback for other BaseModel implementations
state_dict = {
k: v for k, v in model.__dict__.items() if not k.startswith("_")
}
# Create new instance of the same class
model_class = type(model)
return cast(T, model_class(**state_dict))
raise TypeError(
f"Initial state must be dict or BaseModel, got {type(self.initial_state)}"
)
@property
def state(self) -> T:
@@ -424,53 +578,158 @@ class Flow(Generic[T], metaclass=FlowMeta):
"""Returns the list of all outputs from executed methods."""
return self._method_outputs
@property
def flow_id(self) -> str:
"""Returns the unique identifier of this flow instance.
This property provides a consistent way to access the flow's unique identifier
regardless of the underlying state implementation (dict or BaseModel).
Returns:
str: The flow's unique identifier, or an empty string if not found
Note:
This property safely handles both dictionary and BaseModel state types,
returning an empty string if the ID cannot be retrieved rather than raising
an exception.
Example:
```python
flow = MyFlow()
print(f"Current flow ID: {flow.flow_id}") # Safely get flow ID
```
"""
try:
if not hasattr(self, '_state'):
return ""
if isinstance(self._state, dict):
return str(self._state.get("id", ""))
elif isinstance(self._state, BaseModel):
return str(getattr(self._state, "id", ""))
return ""
except (AttributeError, TypeError):
return "" # Safely handle any unexpected attribute access issues
def _initialize_state(self, inputs: Dict[str, Any]) -> None:
"""Initialize or update flow state with new inputs.
Args:
inputs: Dictionary of state values to set/update
Raises:
ValueError: If validation fails for structured state
TypeError: If state is neither BaseModel nor dictionary
"""
if isinstance(self._state, dict):
# Preserve the ID when updating unstructured state
# For dict states, preserve existing fields unless overridden
current_id = self._state.get("id")
self._state.update(inputs)
# Only update specified fields
for k, v in inputs.items():
self._state[k] = v
# Ensure ID is preserved or generated
if current_id:
self._state["id"] = current_id
elif "id" not in self._state:
self._state["id"] = str(uuid4())
elif isinstance(self._state, BaseModel):
# Structured state
# For BaseModel states, preserve existing fields unless overridden
try:
def create_model_with_extra_forbid(
base_model: Type[BaseModel],
) -> Type[BaseModel]:
class ModelWithExtraForbid(base_model): # type: ignore
model_config = base_model.model_config.copy()
model_config["extra"] = "forbid"
return ModelWithExtraForbid
# Get current state as dict, preserving the ID if it exists
state_model = cast(BaseModel, self._state)
current_state = (
state_model.model_dump()
if hasattr(state_model, "model_dump")
else state_model.dict()
if hasattr(state_model, "dict")
else {
k: v
for k, v in state_model.__dict__.items()
if not k.startswith("_")
model = cast(BaseModel, self._state)
# Get current state as dict
if hasattr(model, "model_dump"):
current_state = model.model_dump()
elif hasattr(model, "dict"):
current_state = model.dict()
else:
current_state = {
k: v for k, v in model.__dict__.items() if not k.startswith("_")
}
)
ModelWithExtraForbid = create_model_with_extra_forbid(
self._state.__class__
)
self._state = cast(
T, ModelWithExtraForbid(**{**current_state, **inputs})
)
# Create new state with preserved fields and updates
new_state = {**current_state, **inputs}
# Create new instance with merged state
model_class = type(model)
if hasattr(model_class, "model_validate"):
# Pydantic v2
self._state = cast(T, model_class.model_validate(new_state))
elif hasattr(model_class, "parse_obj"):
# Pydantic v1
self._state = cast(T, model_class.parse_obj(new_state))
else:
# Fallback for other BaseModel implementations
self._state = cast(T, model_class(**new_state))
except ValidationError as e:
raise ValueError(f"Invalid inputs for structured state: {e}") from e
else:
raise TypeError("State must be a BaseModel instance or a dictionary.")
def _restore_state(self, stored_state: Dict[str, Any]) -> None:
"""Restore flow state from persistence.
Args:
stored_state: Previously stored state to restore
Raises:
ValueError: If validation fails for structured state
TypeError: If state is neither BaseModel nor dictionary
"""
# When restoring from persistence, use the stored ID
stored_id = stored_state.get("id")
if not stored_id:
raise ValueError("Stored state must have an 'id' field")
if isinstance(self._state, dict):
# For dict states, update all fields from stored state
self._state.clear()
self._state.update(stored_state)
elif isinstance(self._state, BaseModel):
# For BaseModel states, create new instance with stored values
model = cast(BaseModel, self._state)
if hasattr(model, "model_validate"):
# Pydantic v2
self._state = cast(T, type(model).model_validate(stored_state))
elif hasattr(model, "parse_obj"):
# Pydantic v1
self._state = cast(T, type(model).parse_obj(stored_state))
else:
# Fallback for other BaseModel implementations
self._state = cast(T, type(model)(**stored_state))
else:
raise TypeError(f"State must be dict or BaseModel, got {type(self._state)}")
def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
"""Start the flow execution.
Args:
inputs: Optional dictionary containing input values and potentially a state ID to restore
"""
# Handle state restoration if ID is provided in inputs
if inputs and 'id' in inputs and self._persistence is not None:
restore_uuid = inputs['id']
stored_state = self._persistence.load_state(restore_uuid)
# Override the id in the state if it exists in inputs
if 'id' in inputs:
if isinstance(self._state, dict):
self._state['id'] = inputs['id']
elif isinstance(self._state, BaseModel):
setattr(self._state, 'id', inputs['id'])
if stored_state:
self._log_flow_event(f"Loading flow state from memory for UUID: {restore_uuid}", color="yellow")
# Restore the state
self._restore_state(stored_state)
else:
self._log_flow_event(f"No flow state found for UUID: {restore_uuid}", color="red")
# Apply any additional inputs after restoration
filtered_inputs = {k: v for k, v in inputs.items() if k != 'id'}
if filtered_inputs:
self._initialize_state(filtered_inputs)
# Start flow execution
self.event_emitter.send(
self,
event=FlowStartedEvent(
@@ -478,9 +737,11 @@ class Flow(Generic[T], metaclass=FlowMeta):
flow_name=self.__class__.__name__,
),
)
self._log_flow_event(f"Flow started with ID: {self.flow_id}", color="bold_magenta")
if inputs is not None:
if inputs is not None and 'id' not in inputs:
self._initialize_state(inputs)
return asyncio.run(self.kickoff_async())
async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
@@ -723,6 +984,30 @@ class Flow(Generic[T], metaclass=FlowMeta):
traceback.print_exc()
def _log_flow_event(self, message: str, color: str = "yellow", level: str = "info") -> None:
"""Centralized logging method for flow events.
This method provides a consistent interface for logging flow-related events,
combining both console output with colors and proper logging levels.
Args:
message: The message to log
color: Color to use for console output (default: yellow)
Available colors: purple, red, bold_green, bold_purple,
bold_blue, yellow
level: Log level to use (default: info)
Supported levels: info, warning
Note:
This method uses the Printer utility for colored console output
and the standard logging module for log level support.
"""
self._printer.print(message, color=color)
if level == "info":
logger.info(message)
elif level == "warning":
logger.warning(message)
def plot(self, filename: str = "crewai_flow") -> None:
self._telemetry.flow_plotting_span(
self.__class__.__name__, list(self._methods.keys())

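A hedged end-to-end sketch of the restore-by-id behavior added to kickoff(): passing an 'id' in inputs loads that flow's most recent stored state from the persistence backend before execution.

```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist, SQLiteFlowPersistence

@persist(SQLiteFlowPersistence())
class CounterFlow(Flow):
    @start()
    def bump(self):
        self.state["count"] = self.state.get("count", 0) + 1

flow = CounterFlow()
flow.kickoff()
saved_id = flow.flow_id  # UUID auto-generated for the dict state

# A fresh instance resumes from the state stored under that UUID.
resumed = CounterFlow()
resumed.kickoff(inputs={"id": saved_id})
```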
View File

@@ -0,0 +1,18 @@
"""
CrewAI Flow Persistence.
This module provides interfaces and implementations for persisting flow states.
"""
from typing import Any, Dict, TypeVar, Union
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.persistence.decorators import persist
from crewai.flow.persistence.sqlite import SQLiteFlowPersistence
__all__ = ["FlowPersistence", "persist", "SQLiteFlowPersistence"]
StateType = TypeVar('StateType', bound=Union[Dict[str, Any], BaseModel])
DictStateType = Dict[str, Any]

View File

@@ -0,0 +1,53 @@
"""Base class for flow state persistence."""
import abc
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel
class FlowPersistence(abc.ABC):
"""Abstract base class for flow state persistence.
This class defines the interface that all persistence implementations must follow.
It supports both structured (Pydantic BaseModel) and unstructured (dict) states.
"""
@abc.abstractmethod
def init_db(self) -> None:
"""Initialize the persistence backend.
This method should handle any necessary setup, such as:
- Creating tables
- Establishing connections
- Setting up indexes
"""
pass
@abc.abstractmethod
def save_state(
self,
flow_uuid: str,
method_name: str,
state_data: Union[Dict[str, Any], BaseModel]
) -> None:
"""Persist the flow state after method completion.
Args:
flow_uuid: Unique identifier for the flow instance
method_name: Name of the method that just completed
state_data: Current state data (either dict or Pydantic model)
"""
pass
@abc.abstractmethod
def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
"""Load the most recent state for a given flow UUID.
Args:
flow_uuid: Unique identifier for the flow instance
Returns:
The most recent state as a dictionary, or None if no state exists
"""
pass

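The interface above is small enough that a test double fits in a few lines. A minimal in-memory backend, not part of the library:

```python
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence

class InMemoryFlowPersistence(FlowPersistence):
    """Keeps only the latest state per flow UUID; handy for tests."""

    def __init__(self) -> None:
        self._states: Dict[str, Dict[str, Any]] = {}
        self.init_db()

    def init_db(self) -> None:
        pass  # nothing to set up for a plain dict

    def save_state(
        self,
        flow_uuid: str,
        method_name: str,
        state_data: Union[Dict[str, Any], BaseModel],
    ) -> None:
        if isinstance(state_data, BaseModel):
            state_data = state_data.model_dump()
        self._states[flow_uuid] = dict(state_data)

    def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
        return self._states.get(flow_uuid)
```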
View File

@@ -0,0 +1,252 @@
"""
Decorators for flow state persistence.
Example:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist, SQLiteFlowPersistence
class MyFlow(Flow):
@start()
@persist(SQLiteFlowPersistence())
def sync_method(self):
# Synchronous method implementation
pass
@start()
@persist(SQLiteFlowPersistence())
async def async_method(self):
# Asynchronous method implementation
await some_async_operation()
```
"""
import asyncio
import functools
import logging
from typing import (
Any,
Callable,
Optional,
Type,
TypeVar,
Union,
cast,
)
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.persistence.sqlite import SQLiteFlowPersistence
from crewai.utilities.printer import Printer
logger = logging.getLogger(__name__)
T = TypeVar("T")
# Constants for log messages
LOG_MESSAGES = {
"save_state": "Saving flow state to memory for ID: {}",
"save_error": "Failed to persist state for method {}: {}",
"state_missing": "Flow instance has no state",
"id_missing": "Flow state must have an 'id' field for persistence"
}
class PersistenceDecorator:
"""Class to handle flow state persistence with consistent logging."""
_printer = Printer() # Class-level printer instance
@classmethod
def persist_state(cls, flow_instance: Any, method_name: str, persistence_instance: FlowPersistence) -> None:
"""Persist flow state with proper error handling and logging.
This method handles the persistence of flow state data, including proper
error handling and colored console output for status updates.
Args:
flow_instance: The flow instance whose state to persist
method_name: Name of the method that triggered persistence
persistence_instance: The persistence backend to use
Raises:
ValueError: If flow has no state or state lacks an ID
RuntimeError: If state persistence fails
AttributeError: If flow instance lacks required state attributes
"""
try:
state = getattr(flow_instance, 'state', None)
if state is None:
raise ValueError("Flow instance has no state")
flow_uuid: Optional[str] = None
if isinstance(state, dict):
flow_uuid = state.get('id')
elif isinstance(state, BaseModel):
flow_uuid = getattr(state, 'id', None)
if not flow_uuid:
raise ValueError("Flow state must have an 'id' field for persistence")
# Log state saving with consistent message
cls._printer.print(LOG_MESSAGES["save_state"].format(flow_uuid), color="cyan")
logger.info(LOG_MESSAGES["save_state"].format(flow_uuid))
try:
persistence_instance.save_state(
flow_uuid=flow_uuid,
method_name=method_name,
state_data=state,
)
except Exception as e:
error_msg = LOG_MESSAGES["save_error"].format(method_name, str(e))
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise RuntimeError(f"State persistence failed: {str(e)}") from e
except AttributeError:
error_msg = LOG_MESSAGES["state_missing"]
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise ValueError(error_msg)
except (TypeError, ValueError) as e:
error_msg = LOG_MESSAGES["id_missing"]
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise ValueError(error_msg) from e
def persist(persistence: Optional[FlowPersistence] = None):
"""Decorator to persist flow state.
This decorator can be applied at either the class level or method level.
When applied at the class level, it automatically persists all flow method
states. When applied at the method level, it persists only that method's
state.
Args:
persistence: Optional FlowPersistence implementation to use.
If not provided, uses SQLiteFlowPersistence.
Returns:
A decorator that can be applied to either a class or method
Raises:
ValueError: If the flow state doesn't have an 'id' field
RuntimeError: If state persistence fails
Example:
@persist # Class-level persistence with default SQLite
class MyFlow(Flow[MyState]):
@start()
def begin(self):
pass
"""
def decorator(target: Union[Type, Callable[..., T]]) -> Union[Type, Callable[..., T]]:
"""Decorator that handles both class and method decoration."""
actual_persistence = persistence or SQLiteFlowPersistence()
if isinstance(target, type):
# Class decoration
original_init = getattr(target, "__init__")
@functools.wraps(original_init)
def new_init(self: Any, *args: Any, **kwargs: Any) -> None:
if 'persistence' not in kwargs:
kwargs['persistence'] = actual_persistence
original_init(self, *args, **kwargs)
setattr(target, "__init__", new_init)
# Store original methods to preserve their decorators
original_methods = {}
for name, method in target.__dict__.items():
if callable(method) and (
hasattr(method, "__is_start_method__") or
hasattr(method, "__trigger_methods__") or
hasattr(method, "__condition_type__") or
hasattr(method, "__is_flow_method__") or
hasattr(method, "__is_router__")
):
original_methods[name] = method
# Create wrapped versions of the methods that include persistence
for name, method in original_methods.items():
if asyncio.iscoroutinefunction(method):
# Create a closure to capture the current name and method
def create_async_wrapper(method_name: str, original_method: Callable):
@functools.wraps(original_method)
async def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = await original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(self, method_name, actual_persistence)
return result
return method_wrapper
wrapped = create_async_wrapper(name, method)
# Preserve all original decorators and attributes
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(wrapped, attr, getattr(method, attr))
setattr(wrapped, "__is_flow_method__", True)
# Update the class with the wrapped method
setattr(target, name, wrapped)
else:
# Create a closure to capture the current name and method
def create_sync_wrapper(method_name: str, original_method: Callable):
@functools.wraps(original_method)
def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(self, method_name, actual_persistence)
return result
return method_wrapper
wrapped = create_sync_wrapper(name, method)
# Preserve all original decorators and attributes
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(wrapped, attr, getattr(method, attr))
setattr(wrapped, "__is_flow_method__", True)
# Update the class with the wrapped method
setattr(target, name, wrapped)
return target
else:
# Method decoration
method = target
setattr(method, "__is_flow_method__", True)
if asyncio.iscoroutinefunction(method):
@functools.wraps(method)
async def method_async_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
method_coro = method(flow_instance, *args, **kwargs)
if asyncio.iscoroutine(method_coro):
result = await method_coro
else:
result = method_coro
PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence)
return result
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(method_async_wrapper, attr, getattr(method, attr))
setattr(method_async_wrapper, "__is_flow_method__", True)
return cast(Callable[..., T], method_async_wrapper)
else:
@functools.wraps(method)
def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
result = method(flow_instance, *args, **kwargs)
PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence)
return result
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(method_sync_wrapper, attr, getattr(method, attr))
setattr(method_sync_wrapper, "__is_flow_method__", True)
return cast(Callable[..., T], method_sync_wrapper)
return decorator

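Method-level use, per the docstring above: only the decorated method's completion triggers a save, and the flow markers (__is_start_method__, __trigger_methods__, ...) survive the wrapping. A hedged sketch:

```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist

class PartialFlow(Flow):
    @start()
    def begin(self):
        self.state["step"] = "begin"  # not persisted

    @listen(begin)
    @persist()  # defaults to SQLiteFlowPersistence
    def checkpoint(self):
        self.state["step"] = "checkpoint"  # state saved after this method returns
```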
View File

@@ -0,0 +1,123 @@
"""
SQLite-based implementation of flow state persistence.
"""
import json
import sqlite3
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel
from crewai.flow.persistence.base import FlowPersistence
class SQLiteFlowPersistence(FlowPersistence):
"""SQLite-based implementation of flow state persistence.
This class provides a simple, file-based persistence implementation using SQLite.
It's suitable for development and testing, or for production use cases with
moderate performance requirements.
"""
db_path: str # Type annotation for instance variable
def __init__(self, db_path: Optional[str] = None):
"""Initialize SQLite persistence.
Args:
db_path: Path to the SQLite database file. If not provided, uses
db_storage_path() from utilities.paths.
Raises:
ValueError: If db_path is invalid
"""
from crewai.utilities.paths import db_storage_path
# Get path from argument or default location
path = db_path or str(Path(db_storage_path()) / "flow_states.db")
if not path:
raise ValueError("Database path must be provided")
self.db_path = path # Now mypy knows this is str
self.init_db()
def init_db(self) -> None:
"""Create the necessary tables if they don't exist."""
with sqlite3.connect(self.db_path) as conn:
conn.execute("""
CREATE TABLE IF NOT EXISTS flow_states (
id INTEGER PRIMARY KEY AUTOINCREMENT,
flow_uuid TEXT NOT NULL,
method_name TEXT NOT NULL,
timestamp DATETIME NOT NULL,
state_json TEXT NOT NULL
)
""")
# Add index for faster UUID lookups
conn.execute("""
CREATE INDEX IF NOT EXISTS idx_flow_states_uuid
ON flow_states(flow_uuid)
""")
def save_state(
self,
flow_uuid: str,
method_name: str,
state_data: Union[Dict[str, Any], BaseModel],
) -> None:
"""Save the current flow state to SQLite.
Args:
flow_uuid: Unique identifier for the flow instance
method_name: Name of the method that just completed
state_data: Current state data (either dict or Pydantic model)
"""
# Convert state_data to dict, handling both Pydantic and dict cases
if isinstance(state_data, BaseModel):
state_dict = dict(state_data) # Use dict() for better type compatibility
elif isinstance(state_data, dict):
state_dict = state_data
else:
raise ValueError(
f"state_data must be either a Pydantic BaseModel or dict, got {type(state_data)}"
)
with sqlite3.connect(self.db_path) as conn:
conn.execute("""
INSERT INTO flow_states (
flow_uuid,
method_name,
timestamp,
state_json
) VALUES (?, ?, ?, ?)
""", (
flow_uuid,
method_name,
datetime.utcnow().isoformat(),
json.dumps(state_dict),
))
def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
"""Load the most recent state for a given flow UUID.
Args:
flow_uuid: Unique identifier for the flow instance
Returns:
The most recent state as a dictionary, or None if no state exists
"""
with sqlite3.connect(self.db_path) as conn:
cursor = conn.execute("""
SELECT state_json
FROM flow_states
WHERE flow_uuid = ?
ORDER BY id DESC
LIMIT 1
""", (flow_uuid,))
row = cursor.fetchone()
if row:
return json.loads(row[0])
return None

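A direct round trip through the backend above; load_state returns the most recent row for the UUID (the temp path is illustrative):

```python
import os
import tempfile
from crewai.flow.persistence import SQLiteFlowPersistence

db = os.path.join(tempfile.mkdtemp(), "flow_states.db")
store = SQLiteFlowPersistence(db_path=db)

store.save_state("abc-123", "begin", {"id": "abc-123", "count": 1})
store.save_state("abc-123", "bump", {"id": "abc-123", "count": 2})

assert store.load_state("abc-123") == {"id": "abc-123", "count": 2}
assert store.load_state("missing") is None
```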
View File

@@ -15,20 +15,20 @@ class Knowledge(BaseModel):
Args:
sources: List[BaseKnowledgeSource] = Field(default_factory=list)
storage: Optional[KnowledgeStorage] = Field(default=None)
embedder_config: Optional[Dict[str, Any]] = None
embedder: Optional[Dict[str, Any]] = None
"""
sources: List[BaseKnowledgeSource] = Field(default_factory=list)
model_config = ConfigDict(arbitrary_types_allowed=True)
storage: Optional[KnowledgeStorage] = Field(default=None)
embedder_config: Optional[Dict[str, Any]] = None
embedder: Optional[Dict[str, Any]] = None
collection_name: Optional[str] = None
def __init__(
self,
collection_name: str,
sources: List[BaseKnowledgeSource],
embedder_config: Optional[Dict[str, Any]] = None,
embedder: Optional[Dict[str, Any]] = None,
storage: Optional[KnowledgeStorage] = None,
**data,
):
@@ -37,25 +37,23 @@ class Knowledge(BaseModel):
self.storage = storage
else:
self.storage = KnowledgeStorage(
embedder_config=embedder_config, collection_name=collection_name
embedder=embedder, collection_name=collection_name
)
self.sources = sources
self.storage.initialize_knowledge_storage()
for source in sources:
source.storage = self.storage
source.add()
self._add_sources()
def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]:
"""
Query across all knowledge sources to find the most relevant information.
Returns the top_k most relevant chunks.
Raises:
ValueError: If storage is not initialized.
"""
if self.storage is None:
raise ValueError("Storage is not initialized.")
results = self.storage.search(
query,
limit,
@@ -63,6 +61,9 @@ class Knowledge(BaseModel):
return results
def _add_sources(self):
for source in self.sources:
source.storage = self.storage
source.add()
try:
for source in self.sources:
source.storage = self.storage
source.add()
except Exception as e:
raise e

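Usage after the rename (embedder_config -> embedder); the config shape below follows the existing embedder convention and assumes provider credentials are set:

```python
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

knowledge = Knowledge(
    collection_name="crew",
    sources=[StringKnowledgeSource(content="The launch window opens in March.")],
    embedder={"provider": "openai", "config": {"model": "text-embedding-3-small"}},
)
results = knowledge.query(["When does the launch window open?"])
```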
View File

@@ -29,7 +29,13 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
def validate_file_path(cls, v, info):
"""Validate that at least one of file_path or file_paths is provided."""
# Single check if both are None, O(1) instead of nested conditions
if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
if (
v is None
and info.data.get(
"file_path" if info.field_name == "file_paths" else "file_paths"
)
is None
):
raise ValueError("Either file_path or file_paths must be provided")
return v

View File

@@ -8,6 +8,7 @@ try:
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
DOCLING_AVAILABLE = True
except ImportError:
DOCLING_AVAILABLE = False
@@ -38,8 +39,8 @@ class CrewDoclingSource(BaseKnowledgeSource):
file_paths: List[Union[Path, str]] = Field(default_factory=list)
chunks: List[str] = Field(default_factory=list)
safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
content: List[DoclingDocument] = Field(default_factory=list)
document_converter: DocumentConverter = Field(
content: List["DoclingDocument"] = Field(default_factory=list)
document_converter: "DocumentConverter" = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
InputFormat.MD,
@@ -65,7 +66,7 @@ class CrewDoclingSource(BaseKnowledgeSource):
self.safe_file_paths = self.validate_content()
self.content = self._load_content()
def _load_content(self) -> List[DoclingDocument]:
def _load_content(self) -> List["DoclingDocument"]:
try:
return self._convert_source_to_docling_documents()
except ConversionError as e:
@@ -87,11 +88,11 @@ class CrewDoclingSource(BaseKnowledgeSource):
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def _convert_source_to_docling_documents(self) -> List[DoclingDocument]:
def _convert_source_to_docling_documents(self) -> List["DoclingDocument"]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
return [result.document for result in conv_results_iter]
def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
def _chunk_doc(self, doc: "DoclingDocument") -> Iterator[str]:
chunker = HierarchicalChunker()
for chunk in chunker.chunk(doc):
yield chunk.text

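The quoted annotations above let the module import cleanly when docling is absent; the usual companion is a guard that fails loudly at use time. A sketch of the pattern, not the file's exact code:

```python
try:
    from docling.document_converter import DocumentConverter
    DOCLING_AVAILABLE = True
except ImportError:
    DOCLING_AVAILABLE = False

def make_converter() -> "DocumentConverter":  # quoted: docling may be missing
    if not DOCLING_AVAILABLE:
        raise ImportError("docling is required for CrewDoclingSource: pip install docling")
    return DocumentConverter()
```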
View File

@@ -48,11 +48,11 @@ class KnowledgeStorage(BaseKnowledgeStorage):
def __init__(
self,
embedder_config: Optional[Dict[str, Any]] = None,
embedder: Optional[Dict[str, Any]] = None,
collection_name: Optional[str] = None,
):
self.collection_name = collection_name
self._set_embedder_config(embedder_config)
self._set_embedder_config(embedder)
def search(
self,
@@ -99,7 +99,7 @@ class KnowledgeStorage(BaseKnowledgeStorage):
)
if self.app:
self.collection = self.app.get_or_create_collection(
name=collection_name, embedding_function=self.embedder_config
name=collection_name, embedding_function=self.embedder
)
else:
raise Exception("Vector Database Client not initialized")
@@ -187,17 +187,15 @@ class KnowledgeStorage(BaseKnowledgeStorage):
api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
)
def _set_embedder_config(
self, embedder_config: Optional[Dict[str, Any]] = None
) -> None:
def _set_embedder_config(self, embedder: Optional[Dict[str, Any]] = None) -> None:
"""Set the embedding configuration for the knowledge storage.
Args:
embedder (Optional[Dict[str, Any]]): Configuration dictionary for the embedder.
If None or empty, defaults to the default embedding function.
"""
self.embedder_config = (
EmbeddingConfigurator().configure_embedder(embedder_config)
if embedder_config
self.embedder = (
EmbeddingConfigurator().configure_embedder(embedder)
if embedder
else self._create_default_embedding_function()
)

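The storage-level rename mirrors the Knowledge change; construction after this diff looks like the following (hedged, same config convention):

```python
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage

storage = KnowledgeStorage(
    embedder={"provider": "openai", "config": {"model": "text-embedding-3-small"}},
    collection_name="crew",
)
storage.initialize_knowledge_storage()  # embedder=None falls back to the default function
```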
View File

@@ -142,7 +142,6 @@ class LLM:
self.temperature = temperature
self.top_p = top_p
self.n = n
self.stop = stop
self.max_completion_tokens = max_completion_tokens
self.max_tokens = max_tokens
self.presence_penalty = presence_penalty
@@ -160,37 +159,63 @@ class LLM:
litellm.drop_params = True
# Normalize self.stop to always be a List[str]
if stop is None:
self.stop: List[str] = []
elif isinstance(stop, str):
self.stop = [stop]
else:
self.stop = stop
self.set_callbacks(callbacks)
self.set_env_callbacks()
def call(
self,
messages: List[Dict[str, str]],
messages: Union[str, List[Dict[str, str]]],
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
) -> str:
"""
High-level call method that:
1) Calls litellm.completion
2) Checks for function/tool calls
3) If a tool call is found:
a) executes the function
b) returns the result
4) If no tool call, returns the text response
High-level LLM call method that:
1) Accepts either a string or a list of messages
2) Converts string input to the required message format
3) Calls litellm.completion
4) Handles function/tool calls if any
5) Returns the final text response or tool result
:param messages: The conversation messages
:param tools: Optional list of function schemas for function calling
:param callbacks: Optional list of callbacks
:param available_functions: A dictionary mapping function_name -> actual Python function
:return: Final text response from the LLM or the tool result
Parameters:
- messages (Union[str, List[Dict[str, str]]]): The input messages for the LLM.
- If a string is provided, it will be converted into a message list with a single entry.
- If a list of dictionaries is provided, each dictionary should have 'role' and 'content' keys.
- tools (Optional[List[dict]]): A list of tool schemas for function calling.
- callbacks (Optional[List[Any]]): A list of callback functions to be executed.
- available_functions (Optional[Dict[str, Any]]): A dictionary mapping function names to actual Python functions.
Returns:
- str: The final text response from the LLM or the result of a tool function call.
Examples:
---------
# Example 1: Using a string input
response = llm.call("Return the name of a random city in the world.")
print(response)
# Example 2: Using a list of messages
messages = [{"role": "user", "content": "What is the capital of France?"}]
response = llm.call(messages)
print(response)
"""
if isinstance(messages, str):
messages = [{"role": "user", "content": messages}]
with suppress_warnings():
if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks)
try:
# --- 1) Make the completion call
# --- 1) Prepare the parameters for the completion call
params = {
"model": self.model,
"messages": messages,
@@ -211,11 +236,13 @@ class LLM:
"api_version": self.api_version,
"api_key": self.api_key,
"stream": False,
"tools": tools, # pass the tool schema
"tools": tools,
}
# Remove None values from params
params = {k: v for k, v in params.items() if v is not None}
# --- 2) Make the completion call
response = litellm.completion(**params)
response_message = cast(Choices, cast(ModelResponse, response).choices)[
0
@@ -223,11 +250,24 @@ class LLM:
text_response = response_message.content or ""
tool_calls = getattr(response_message, "tool_calls", [])
# --- 2) If no tool calls, return the text response
# --- 3) Handle callbacks with usage info
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if usage_info:
callback.log_success_event(
kwargs=params,
response_obj={"usage": usage_info},
start_time=0,
end_time=0,
)
# --- 4) If no tool calls, return the text response
if not tool_calls or not available_functions:
return text_response
# --- 3) Handle the tool call
# --- 5) Handle the tool call
tool_call = tool_calls[0]
function_name = tool_call.function.name
@@ -242,7 +282,6 @@ class LLM:
try:
# Call the actual tool function
result = fn(**function_args)
return result
except Exception as e:

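A sketch of the tool-calling path the new docstring describes: a schema in tools plus a matching entry in available_functions lets call() execute the function and return its result (the model name is illustrative):

```python
from crewai import LLM

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in implementation

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

llm = LLM(model="gpt-4o-mini")
result = llm.call(
    "What is the weather in Paris?",  # string input is now accepted
    tools=[weather_tool],
    available_functions={"get_weather": get_weather},
)
```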
View File

@@ -1,12 +1,17 @@
import json
import logging
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional
from crewai.task import Task
from crewai.utilities import Printer
from crewai.utilities.crew_json_encoder import CrewJSONEncoder
from crewai.utilities.errors import DatabaseError, DatabaseOperationError
from crewai.utilities.paths import db_storage_path
logger = logging.getLogger(__name__)
class KickoffTaskOutputsSQLiteStorage:
"""
@@ -14,15 +19,24 @@ class KickoffTaskOutputsSQLiteStorage:
"""
def __init__(
self, db_path: str = f"{db_storage_path()}/latest_kickoff_task_outputs.db"
self, db_path: Optional[str] = None
) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "latest_kickoff_task_outputs.db")
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
def _initialize_db(self):
"""
Initializes the SQLite database and creates LTM table
def _initialize_db(self) -> None:
"""Initialize the SQLite database and create the latest_kickoff_task_outputs table.
This method sets up the database schema for storing task outputs. It creates
a table with columns for task_id, expected_output, output (as JSON),
task_index, inputs (as JSON), was_replayed flag, and timestamp.
Raises:
DatabaseOperationError: If database initialization fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
@@ -43,10 +57,9 @@ class KickoffTaskOutputsSQLiteStorage:
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"SAVING KICKOFF TASK OUTPUTS ERROR: An error occurred during database initialization: {e}",
color="red",
)
error_msg = DatabaseError.format_error(DatabaseError.INIT_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
def add(
self,
@@ -55,9 +68,22 @@ class KickoffTaskOutputsSQLiteStorage:
task_index: int,
was_replayed: bool = False,
inputs: Dict[str, Any] = {},
):
) -> None:
"""Add a new task output record to the database.
Args:
task: The Task object containing task details.
output: Dictionary containing the task's output data.
task_index: Integer index of the task in the sequence.
was_replayed: Boolean indicating if this was a replay execution.
inputs: Dictionary of input parameters used for the task.
Raises:
DatabaseOperationError: If saving the task output fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute(
"""
@@ -76,21 +102,31 @@ class KickoffTaskOutputsSQLiteStorage:
)
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"SAVING KICKOFF TASK OUTPUTS ERROR: An error occurred during database initialization: {e}",
color="red",
)
error_msg = DatabaseError.format_error(DatabaseError.SAVE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
def update(
self,
task_index: int,
**kwargs,
):
"""
Updates an existing row in the latest_kickoff_task_outputs table based on task_index.
**kwargs: Any,
) -> None:
"""Update an existing task output record in the database.
Updates fields of a task output record identified by task_index. The fields
to update are provided as keyword arguments.
Args:
task_index: Integer index of the task to update.
**kwargs: Arbitrary keyword arguments representing fields to update.
Values that are dictionaries will be JSON encoded.
Raises:
DatabaseOperationError: If updating the task output fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
fields = []
@@ -110,14 +146,23 @@ class KickoffTaskOutputsSQLiteStorage:
conn.commit()
if cursor.rowcount == 0:
self._printer.print(
f"No row found with task_index {task_index}. No update performed.",
color="red",
)
logger.warning(f"No row found with task_index {task_index}. No update performed.")
except sqlite3.Error as e:
self._printer.print(f"UPDATE KICKOFF TASK OUTPUTS ERROR: {e}", color="red")
error_msg = DatabaseError.format_error(DatabaseError.UPDATE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
def load(self) -> Optional[List[Dict[str, Any]]]:
def load(self) -> List[Dict[str, Any]]:
"""Load all task output records from the database.
Returns:
List of dictionaries containing task output records, ordered by task_index.
Each dictionary contains: task_id, expected_output, output, task_index,
inputs, was_replayed, and timestamp.
Raises:
DatabaseOperationError: If loading task outputs fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
@@ -144,23 +189,26 @@ class KickoffTaskOutputsSQLiteStorage:
return results
except sqlite3.Error as e:
self._printer.print(
content=f"LOADING KICKOFF TASK OUTPUTS ERROR: An error occurred while querying kickoff task outputs: {e}",
color="red",
)
return None
error_msg = DatabaseError.format_error(DatabaseError.LOAD_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
def delete_all(self):
"""
Deletes all rows from the latest_kickoff_task_outputs table.
def delete_all(self) -> None:
"""Delete all task output records from the database.
This method removes all records from the latest_kickoff_task_outputs table.
Use with caution as this operation cannot be undone.
Raises:
DatabaseOperationError: If deleting task outputs fails due to SQLite errors.
"""
try:
with sqlite3.connect(self.db_path) as conn:
conn.execute("BEGIN TRANSACTION")
cursor = conn.cursor()
cursor.execute("DELETE FROM latest_kickoff_task_outputs")
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"ERROR: Failed to delete all kickoff task outputs: {e}",
color="red",
)
error_msg = DatabaseError.format_error(DatabaseError.DELETE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)

View File

@@ -1,5 +1,6 @@
import json
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from crewai.utilities import Printer
@@ -12,10 +13,15 @@ class LTMSQLiteStorage:
"""
def __init__(
self, db_path: str = f"{db_storage_path()}/long_term_memory_storage.db"
self, db_path: Optional[str] = None
) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "long_term_memory_storage.db")
self.db_path = db_path
self._printer: Printer = Printer()
# Ensure parent directory exists
Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
self._initialize_db()
def _initialize_db(self):

View File

@@ -1,12 +1,13 @@
import ast
import datetime
import json
import re
import time
from difflib import SequenceMatcher
from json import JSONDecodeError
from textwrap import dedent
from typing import Any, Dict, List, Union
from typing import Any, Dict, List, Optional, Union
import json5
from json_repair import repair_json
import crewai.utilities.events as events
@@ -407,28 +408,55 @@ class ToolUsage:
)
return self._tool_calling(tool_string)
def _validate_tool_input(self, tool_input: str) -> Dict[str, Any]:
def _validate_tool_input(self, tool_input: Optional[str]) -> Dict[str, Any]:
if tool_input is None:
return {}
if not isinstance(tool_input, str) or not tool_input.strip():
raise Exception(
"Tool input must be a valid dictionary in JSON or Python literal format"
)
# Attempt 1: Parse as JSON
try:
# Replace Python literals with JSON equivalents
replacements = {
r"'": '"',
r"None": "null",
r"True": "true",
r"False": "false",
}
for pattern, replacement in replacements.items():
tool_input = re.sub(pattern, replacement, tool_input)
arguments = json.loads(tool_input)
except json.JSONDecodeError:
# Attempt to repair JSON string
repaired_input = repair_json(tool_input)
try:
arguments = json.loads(repaired_input)
except json.JSONDecodeError as e:
raise Exception(f"Invalid tool input JSON: {e}")
if isinstance(arguments, dict):
return arguments
except (JSONDecodeError, TypeError):
pass # Continue to the next parsing attempt
return arguments
# Attempt 2: Parse as Python literal
try:
arguments = ast.literal_eval(tool_input)
if isinstance(arguments, dict):
return arguments
except (ValueError, SyntaxError):
pass # Continue to the next parsing attempt
# Attempt 3: Parse as JSON5
try:
arguments = json5.loads(tool_input)
if isinstance(arguments, dict):
return arguments
except (JSONDecodeError, ValueError, TypeError):
pass # Continue to the next parsing attempt
# Attempt 4: Repair JSON
try:
repaired_input = repair_json(tool_input)
self._printer.print(
content=f"Repaired JSON: {repaired_input}", color="blue"
)
arguments = json.loads(repaired_input)
if isinstance(arguments, dict):
return arguments
except Exception as e:
self._printer.print(content=f"Failed to repair JSON: {e}", color="red")
# If all parsing attempts fail, raise an error
raise Exception(
"Tool input must be a valid dictionary in JSON or Python literal format"
)
def on_tool_error(self, tool: Any, tool_calling: ToolCalling, e: Exception) -> None:
event_data = self._prepare_event_data(tool, tool_calling)

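For reference, the four attempts target progressively looser inputs; a standalone illustration of what each layer is meant to accept (not library code):

```python
import ast
import json
import json5
from json_repair import repair_json

json.loads('{"query": "llms"}')                   # attempt 1: strict JSON
ast.literal_eval("{'query': 'llms', 'k': None}")  # attempt 2: Python literal syntax
json5.loads("{query: 'llms'}")                    # attempt 3: JSON5 (unquoted keys, single quotes)
json.loads(repair_json('{"query": "llms"'))       # attempt 4: repair a truncated payload
```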
View File

@@ -26,17 +26,24 @@ class Converter(OutputConverter):
if self.llm.supports_function_calling():
return self._create_instructor().to_pydantic()
else:
return self.llm.call(
response = self.llm.call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
return self.model.model_validate_json(response)
except ValidationError as e:
if current_attempt < self.max_attempts:
return self.to_pydantic(current_attempt + 1)
raise ConverterError(
f"Failed to convert text into a Pydantic model due to the following validation error: {e}"
)
except Exception as e:
if current_attempt < self.max_attempts:
return self.to_pydantic(current_attempt + 1)
return ConverterError(
f"Failed to convert text into a pydantic model due to the following error: {e}"
raise ConverterError(
f"Failed to convert text into a Pydantic model due to the following error: {e}"
)
def to_json(self, current_attempt=1):
@@ -66,7 +73,6 @@ class Converter(OutputConverter):
llm=self.llm,
model=self.model,
content=self.text,
instructions=self.instructions,
)
return inst
@@ -187,10 +193,15 @@ def convert_with_instructions(
def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
instructions = "I'm gonna convert this raw text into valid JSON."
instructions = "Please convert the following text into valid JSON."
if llm.supports_function_calling():
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
instructions += (
f"\n\nThe JSON should follow this schema:\n```json\n{model_schema}\n```"
)
else:
model_description = generate_model_description(model)
instructions += f"\n\nThe JSON should follow this format:\n{model_description}"
return instructions
@@ -230,9 +241,13 @@ def generate_model_description(model: Type[BaseModel]) -> str:
origin = get_origin(field_type)
args = get_args(field_type)
if origin is Union and type(None) in args:
if origin is Union or (origin is None and len(args) > 0):
# Handle both Union and the new '|' syntax
non_none_args = [arg for arg in args if arg is not type(None)]
return f"Optional[{describe_field(non_none_args[0])}]"
if len(non_none_args) == 1:
return f"Optional[{describe_field(non_none_args[0])}]"
else:
return f"Optional[Union[{', '.join(describe_field(arg) for arg in non_none_args)}]]"
elif origin is list:
return f"List[{describe_field(args[0])}]"
elif origin is dict:
@@ -241,8 +256,10 @@ def generate_model_description(model: Type[BaseModel]) -> str:
return f"Dict[{key_type}, {value_type}]"
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
return generate_model_description(field_type)
else:
elif hasattr(field_type, "__name__"):
return field_type.__name__
else:
return str(field_type)
fields = model.__annotations__
field_descriptions = [

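Roughly what the widened Optional/Union handling yields for a small model (the exact output format is illustrative):

```python
from typing import Optional, Union
from pydantic import BaseModel
from crewai.utilities.converter import generate_model_description

class Item(BaseModel):
    name: str
    price: Optional[float] = None
    tag: Union[int, str, None] = None

print(generate_model_description(Item))
# e.g. {"name": str, "price": Optional[float], "tag": Optional[Union[int, str]]}
```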
View File

@@ -14,6 +14,7 @@ class EmbeddingConfigurator:
"vertexai": self._configure_vertexai,
"google": self._configure_google,
"cohere": self._configure_cohere,
"voyageai": self._configure_voyageai,
"bedrock": self._configure_bedrock,
"huggingface": self._configure_huggingface,
"watson": self._configure_watson,
@@ -42,7 +43,6 @@ class EmbeddingConfigurator:
raise Exception(
f"Unsupported embedding provider: {provider}, supported providers: {list(self.embedding_functions.keys())}"
)
return self.embedding_functions[provider](config, model_name)
@staticmethod
@@ -124,6 +124,17 @@ class EmbeddingConfigurator:
api_key=config.get("api_key"),
)
@staticmethod
def _configure_voyageai(config, model_name):
from chromadb.utils.embedding_functions.voyageai_embedding_function import (
VoyageAIEmbeddingFunction,
)
return VoyageAIEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
@staticmethod
def _configure_bedrock(config, model_name):
from chromadb.utils.embedding_functions.amazon_bedrock_embedding_function import (

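With "voyageai" registered in the provider map, the configurator resolves to chromadb's embedding function exactly as written in _configure_voyageai above. A minimal sketch of the resulting call, with an illustrative model name and a placeholder key (running it performs a real API request):

from chromadb.utils.embedding_functions.voyageai_embedding_function import (
    VoyageAIEmbeddingFunction,
)

# Mirrors _configure_voyageai above.
embedder = VoyageAIEmbeddingFunction(
    model_name="voyage-3",       # illustrative model name
    api_key="<VOYAGE_API_KEY>",  # placeholder
)
embeddings = embedder(["CrewAI now supports VoyageAI embeddings."])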

@@ -0,0 +1,39 @@
"""Error message definitions for CrewAI database operations."""
from typing import Optional
class DatabaseOperationError(Exception):
"""Base exception class for database operation errors."""
def __init__(self, message: str, original_error: Optional[Exception] = None):
"""Initialize the database operation error.
Args:
message: The error message to display
original_error: The original exception that caused this error, if any
"""
super().__init__(message)
self.original_error = original_error
class DatabaseError:
"""Standardized error message templates for database operations."""
INIT_ERROR: str = "Database initialization error: {}"
SAVE_ERROR: str = "Error saving task outputs: {}"
UPDATE_ERROR: str = "Error updating task outputs: {}"
LOAD_ERROR: str = "Error loading task outputs: {}"
DELETE_ERROR: str = "Error deleting task outputs: {}"
@classmethod
def format_error(cls, template: str, error: Exception) -> str:
"""Format an error message with the given template and error.
Args:
template: The error message template to use
error: The exception to format into the template
Returns:
The formatted error message
"""
return template.format(str(error))

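A typical call site for these templates, sketched here against a hypothetical SQLite table (the schema and function name are illustrative):

import sqlite3


def save_task_output(conn: sqlite3.Connection, task_id: str, output: str) -> None:
    try:
        conn.execute(
            "INSERT INTO task_outputs (task_id, output) VALUES (?, ?)",
            (task_id, output),
        )
        conn.commit()
    except sqlite3.Error as e:
        # Wrap the low-level error in the standardized exception while
        # keeping the original exception for callers that need it.
        raise DatabaseOperationError(
            DatabaseError.format_error(DatabaseError.SAVE_ERROR, e), e
        ) from e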

@@ -96,9 +96,9 @@ class TaskEvaluator:
final_aggregated_data = ""
for _, data in output_training_data.items():
final_aggregated_data += (
f"Initial Output:\n{data['initial_output']}\n\n"
f"Human Feedback:\n{data['human_feedback']}\n\n"
f"Improved Output:\n{data['improved_output']}\n\n"
f"Initial Output:\n{data.get('initial_output', '')}\n\n"
f"Human Feedback:\n{data.get('human_feedback', '')}\n\n"
f"Improved Output:\n{data.get('improved_output', '')}\n\n"
)
evaluation_query = (

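The switch from data["key"] indexing to data.get("key", "") simply makes the aggregation tolerant of partial training records:

record = {"initial_output": "draft"}     # feedback fields missing
# record["human_feedback"] would raise KeyError; .get degrades to "".
print(record.get("human_feedback", ""))  # -> ""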

@@ -11,12 +11,10 @@ class InternalInstructor:
model: Type,
agent: Optional[Any] = None,
llm: Optional[str] = None,
instructions: Optional[str] = None,
):
self.content = content
self.agent = agent
self.llm = llm
self.instructions = instructions
self.model = model
self._client = None
self.set_instructor()
@@ -31,10 +29,7 @@ class InternalInstructor:
import instructor
from litellm import completion
self._client = instructor.from_litellm(
completion,
mode=instructor.Mode.TOOLS,
)
self._client = instructor.from_litellm(completion)
def to_json(self):
model = self.to_pydantic()
@@ -42,8 +37,6 @@ class InternalInstructor:
def to_pydantic(self):
messages = [{"role": "user", "content": self.content}]
if self.instructions:
messages.append({"role": "system", "content": self.instructions})
model = self._client.chat.completions.create(
model=self.llm.model, response_model=self.model, messages=messages
)

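The simplified client construction above matches instructor's litellm integration. A minimal end-to-end sketch (model name illustrative, requires API credentials):

import instructor
from litellm import completion
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


client = instructor.from_litellm(completion)
person = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Person,
    messages=[{"role": "user", "content": "Extract: John is 25 years old."}],
)
print(person)  # name='John' age=25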

@@ -5,14 +5,18 @@ import appdirs
"""Path management utilities for CrewAI storage and configuration."""
def db_storage_path():
"""Returns the path for database storage."""
def db_storage_path() -> str:
"""Returns the path for SQLite database storage.
Returns:
str: Full path to the SQLite database file
"""
app_name = get_project_directory_name()
app_author = "CrewAI"
data_dir = Path(appdirs.user_data_dir(app_name, app_author))
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
return str(data_dir)
def get_project_directory_name():
@@ -24,4 +28,4 @@ def get_project_directory_name():
else:
cwd = Path.cwd()
project_directory_name = cwd.name
return project_directory_name
return project_directory_name

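The return-type change means callers now receive a plain string rather than a Path. The resolution itself is just appdirs plus mkdir; the project name below is illustrative:

from pathlib import Path

import appdirs

data_dir = Path(appdirs.user_data_dir("my_project", "CrewAI"))
data_dir.mkdir(parents=True, exist_ok=True)
print(str(data_dir))  # e.g. ~/.local/share/my_project on Linux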

@@ -21,6 +21,16 @@ class Printer:
self._print_yellow(content)
elif color == "bold_yellow":
self._print_bold_yellow(content)
elif color == "cyan":
self._print_cyan(content)
elif color == "bold_cyan":
self._print_bold_cyan(content)
elif color == "magenta":
self._print_magenta(content)
elif color == "bold_magenta":
self._print_bold_magenta(content)
elif color == "green":
self._print_green(content)
else:
print(content)
@@ -44,3 +54,18 @@ class Printer:
def _print_bold_yellow(self, content):
print("\033[1m\033[93m {}\033[00m".format(content))
def _print_cyan(self, content):
print("\033[96m {}\033[00m".format(content))
def _print_bold_cyan(self, content):
print("\033[1m\033[96m {}\033[00m".format(content))
def _print_magenta(self, content):
print("\033[35m {}\033[00m".format(content))
def _print_bold_magenta(self, content):
print("\033[1m\033[35m {}\033[00m".format(content))
def _print_green(self, content):
print("\033[32m {}\033[00m".format(content))

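The new colors slot into the existing print(content, color=...) dispatch, and unknown colors still fall through to a plain print():

printer = Printer()
printer.print(content="tool call", color="cyan")
printer.print(content="final answer", color="bold_magenta")
printer.print(content="plain output", color="not-a-color")  # plain print()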

@@ -1,4 +1,4 @@
from typing import Type, Union, get_args, get_origin
from typing import Dict, List, Type, Union, get_args, get_origin
from pydantic import BaseModel
@@ -10,40 +10,83 @@ class PydanticSchemaParser(BaseModel):
"""
Public method to get the schema of a Pydantic model.
:param model: The Pydantic model class to generate schema for.
:return: String representation of the model schema.
"""
return self._get_model_schema(self.model)
return "{\n" + self._get_model_schema(self.model) + "\n}"
def _get_model_schema(self, model, depth=0) -> str:
indent = " " * depth
lines = [f"{indent}{{"]
for field_name, field in model.model_fields.items():
field_type_str = self._get_field_type(field, depth + 1)
lines.append(f"{indent} {field_name}: {field_type_str},")
lines[-1] = lines[-1].rstrip(",") # Remove trailing comma from last item
lines.append(f"{indent}}}")
return "\n".join(lines)
def _get_model_schema(self, model: Type[BaseModel], depth: int = 0) -> str:
indent = " " * 4 * depth
lines = [
f"{indent} {field_name}: {self._get_field_type(field, depth + 1)}"
for field_name, field in model.model_fields.items()
]
return ",\n".join(lines)
def _get_field_type(self, field, depth) -> str:
def _get_field_type(self, field, depth: int) -> str:
field_type = field.annotation
if get_origin(field_type) is list:
origin = get_origin(field_type)
if origin in {list, List}:
list_item_type = get_args(field_type)[0]
if isinstance(list_item_type, type) and issubclass(
list_item_type, BaseModel
):
nested_schema = self._get_model_schema(list_item_type, depth + 1)
return f"List[\n{nested_schema}\n{' ' * 4 * depth}]"
return self._format_list_type(list_item_type, depth)
if origin in {dict, Dict}:
key_type, value_type = get_args(field_type)
return f"Dict[{key_type.__name__}, {value_type.__name__}]"
if origin is Union:
return self._format_union_type(field_type, depth)
if isinstance(field_type, type) and issubclass(field_type, BaseModel):
nested_schema = self._get_model_schema(field_type, depth)
nested_indent = " " * 4 * depth
return f"{field_type.__name__}\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}"
return field_type.__name__
def _format_list_type(self, list_item_type, depth: int) -> str:
if isinstance(list_item_type, type) and issubclass(list_item_type, BaseModel):
nested_schema = self._get_model_schema(list_item_type, depth + 1)
nested_indent = " " * 4 * (depth)
return f"List[\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}\n{nested_indent}]"
return f"List[{list_item_type.__name__}]"
def _format_union_type(self, field_type, depth: int) -> str:
args = get_args(field_type)
if type(None) in args:
# It's an Optional type
non_none_args = [arg for arg in args if arg is not type(None)]
if len(non_none_args) == 1:
inner_type = self._get_field_type_for_annotation(
non_none_args[0], depth
)
return f"Optional[{inner_type}]"
else:
return f"List[{list_item_type.__name__}]"
elif get_origin(field_type) is Union:
union_args = get_args(field_type)
if type(None) in union_args:
non_none_type = next(arg for arg in union_args if arg is not type(None))
return f"Optional[{self._get_field_type(field.__class__(annotation=non_none_type), depth)}]"
else:
return f"Union[{', '.join(arg.__name__ for arg in union_args)}]"
elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
return self._get_model_schema(field_type, depth)
# Union with None and multiple other types
inner_types = ", ".join(
self._get_field_type_for_annotation(arg, depth)
for arg in non_none_args
)
return f"Optional[Union[{inner_types}]]"
else:
return getattr(field_type, "__name__", str(field_type))
# General Union type
inner_types = ", ".join(
self._get_field_type_for_annotation(arg, depth) for arg in args
)
return f"Union[{inner_types}]"
def _get_field_type_for_annotation(self, annotation, depth: int) -> str:
origin = get_origin(annotation)
if origin in {list, List}:
list_item_type = get_args(annotation)[0]
return self._format_list_type(list_item_type, depth)
if origin in {dict, Dict}:
key_type, value_type = get_args(annotation)
return f"Dict[{key_type.__name__}, {value_type.__name__}]"
if origin is Union:
return self._format_union_type(annotation, depth)
if isinstance(annotation, type) and issubclass(annotation, BaseModel):
nested_schema = self._get_model_schema(annotation, depth)
nested_indent = " " * 4 * depth
return f"{annotation.__name__}\n{nested_indent}{{\n{nested_schema}\n{nested_indent}}}"
return annotation.__name__

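A quick way to see the new output shape; the models are illustrative, and the import path for PydanticSchemaParser is assumed from the module shown above:

from typing import List, Optional

from pydantic import BaseModel


class Address(BaseModel):
    city: str


class Person(BaseModel):
    name: str
    nickname: Optional[str]
    addresses: List[Address]


print(PydanticSchemaParser(model=Person).get_schema())
# Prints a braced schema along the lines of:
# {
#     name: str,
#     nickname: Optional[str],
#     addresses: List[{ city: str }]
# }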

@@ -23,11 +23,15 @@ class TokenCalcHandler(CustomLogger):
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
usage: Usage = response_obj["usage"]
self.token_cost_process.sum_successful_requests(1)
self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
if usage.prompt_tokens_details:
self.token_cost_process.sum_cached_prompt_tokens(
usage.prompt_tokens_details.cached_tokens
)
if isinstance(response_obj, dict) and "usage" in response_obj:
usage: Usage = response_obj["usage"]
if usage:
self.token_cost_process.sum_successful_requests(1)
if hasattr(usage, "prompt_tokens"):
self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
if hasattr(usage, "completion_tokens"):
self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
if hasattr(usage, "prompt_tokens_details") and usage.prompt_tokens_details:
self.token_cost_process.sum_cached_prompt_tokens(
usage.prompt_tokens_details.cached_tokens
)

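The guards above make the handler safe for providers whose responses omit usage details. The same pattern in isolation, using a stand-in usage object:

from types import SimpleNamespace


def count_tokens(response_obj) -> int:
    # Mirror the defensive checks: tolerate a missing "usage" entry
    # and missing token fields on the usage object.
    if not (isinstance(response_obj, dict) and "usage" in response_obj):
        return 0
    usage = response_obj["usage"]
    total = 0
    if hasattr(usage, "prompt_tokens"):
        total += usage.prompt_tokens
    if hasattr(usage, "completion_tokens"):
        total += usage.completion_tokens
    return total


print(count_tokens({"usage": SimpleNamespace(prompt_tokens=274)}))  # 274
print(count_tokens({}))                                             # 0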

@@ -10,6 +10,7 @@ from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
from crewai.tools import tool
@@ -114,35 +115,6 @@ def test_custom_llm_temperature_preservation():
assert agent.llm.temperature == 0.7
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
from langchain_openai import ChatOpenAI
from crewai import Task
agent = Agent(
role="Math Tutor",
goal="Solve math problems accurately",
backstory="You are an experienced math tutor with a knack for explaining complex concepts simply.",
llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),
)
task = Task(
description="Calculate the area of a circle with radius 5 cm.",
expected_output="The calculated area of the circle in square centimeters.",
agent=agent,
)
result = agent.execute_task(task)
assert result is not None
assert (
result
== "The calculated area of the circle is approximately 78.5 square centimeters."
)
assert "square centimeters" in result.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execution():
agent = Agent(
@@ -1629,3 +1601,181 @@ def test_agent_with_knowledge_sources():
# Assert that the agent provides the correct information
assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources_works_with_copy():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch(
"crewai.knowledge.source.base_knowledge_source.BaseKnowledgeSource",
autospec=True,
) as MockKnowledgeSource:
mock_knowledge_source_instance = MockKnowledgeSource.return_value
mock_knowledge_source_instance.__class__ = BaseKnowledgeSource
mock_knowledge_source_instance.sources = [string_source]
agent = Agent(
role="Information Agent",
goal="Provide information based on knowledge sources",
backstory="You have access to specific knowledge sources.",
llm=LLM(model="gpt-4o-mini"),
knowledge_sources=[string_source],
)
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledgeStorage:
mock_knowledge_storage = MockKnowledgeStorage.return_value
agent.knowledge_storage = mock_knowledge_storage
agent_copy = agent.copy()
assert agent_copy.role == agent.role
assert agent_copy.goal == agent.goal
assert agent_copy.backstory == agent.backstory
assert agent_copy.knowledge_sources is not None
assert len(agent_copy.knowledge_sources) == 1
assert isinstance(agent_copy.knowledge_sources[0], StringKnowledgeSource)
assert agent_copy.knowledge_sources[0].content == content
assert isinstance(agent_copy.llm, LLM)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_litellm_auth_error_handling():
"""Test that LiteLLM authentication errors are handled correctly and not retried."""
from litellm import AuthenticationError as LiteLLMAuthenticationError
# Create an agent with a mocked LLM and max_retry_limit=0
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4"),
max_retry_limit=0, # Disable retries for authentication errors
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(LiteLLMAuthenticationError, match="Invalid API key"),
):
mock_llm_call.side_effect = LiteLLMAuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
agent.execute_task(task)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
def test_crew_agent_executor_litellm_auth_error():
"""Test that CrewAgentExecutor handles LiteLLM authentication errors by raising them."""
from litellm.exceptions import AuthenticationError
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import Printer
# Create an agent and executor
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4", api_key="invalid_api_key"),
)
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Create executor with all required parameters
executor = CrewAgentExecutor(
agent=agent,
task=task,
llm=agent.llm,
crew=None,
prompt={"system": "You are a test agent", "user": "Execute the task: {input}"},
max_iter=5,
tools=[],
tools_names="",
stop_words=[],
tools_description="",
tools_handler=ToolsHandler(),
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
patch.object(Printer, "print") as mock_printer,
pytest.raises(AuthenticationError) as exc_info,
):
mock_llm_call.side_effect = AuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
executor.invoke(
{
"input": "test input",
"tool_names": "",
"tools": "",
}
)
# Verify error handling messages
error_message = f"Error during LLM call: {str(mock_llm_call.side_effect)}"
mock_printer.assert_any_call(
content=error_message,
color="red",
)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
# Assert that the exception was raised and has the expected attributes
assert exc_info.type is AuthenticationError
assert "Invalid API key".lower() in exc_info.value.message.lower()
assert exc_info.value.llm_provider == "openai"
assert exc_info.value.model == "gpt-4"
def test_litellm_anthropic_error_handling():
"""Test that AnthropicError from LiteLLM is handled correctly and not retried."""
from litellm.llms.anthropic.common_utils import AnthropicError
# Create an agent with a mocked LLM that uses an Anthropic model
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="claude-3.5-sonnet-20240620"),
max_retry_limit=0,
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AnthropicError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(AnthropicError, match="Test Anthropic error"),
):
mock_llm_call.side_effect = AnthropicError(
status_code=500,
message="Test Anthropic error",
)
agent.execute_task(task)
# Verify the LLM call was only made once (no retries)
mock_llm_call.assert_called_once()


@@ -2,21 +2,21 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o"}'
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -25,16 +25,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1325'
- '1367'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
@@ -44,30 +41,35 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-ABAtOWmVjvzQ9X58tKAUcOF4gmXwx\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226842,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AsXdf4OZKCZSigmN4k0gyh67NciqP\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562383,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to use the get_final_answer
tool to determine the final answer.\\nAction: get_final_answer\\nAction Input:
{}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 274,\n \"completion_tokens\":
27,\n \"total_tokens\": 301,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
\"assistant\",\n \"content\": \"```\\nThought: I have to use the available
tool to get the final answer. Let's proceed with executing it.\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
274,\n \"completion_tokens\": 33,\n \"total_tokens\": 307,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c8727b3492f31e6-MIA
- 9060d43e3be1d690-IAD
Connection:
- keep-alive
Content-Encoding:
@@ -75,19 +77,27 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 25 Sep 2024 01:14:03 GMT
- Wed, 22 Jan 2025 16:13:03 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
path=/; expires=Wed, 22-Jan-25 16:43:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '348'
- '791'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -99,45 +109,59 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999682'
- '29999680'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_be929caac49706f487950548bdcdd46e
- req_eeed99acafd3aeb1e3d4a6c8063192b0
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "user", "content": "Thought: I need to use the
get_final_answer tool to determine the final answer.\nAction: get_final_answer\nAction
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "```\nThought:
I have to use the available tool to get the final answer. Let''s proceed with
executing it.\nAction: get_final_answer\nAction Input: {}\nObservation: I encountered
an error: Error on parsing tool.\nMoving on then. I MUST either use a tool (use
one at time) OR give my best final answer not both at the same time. When responding,
I must use the following format:\n\n```\nThought: you should always think about
what to do\nAction: the action to take, should be one of [get_final_answer]\nAction
Input: the input to the action, dictionary enclosed in curly braces\nObservation:
the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat
N times. Once I know the final answer, I must return the following format:\n\n```\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described\n\n```"}, {"role":
"assistant", "content": "```\nThought: I have to use the available tool to get
the final answer. Let''s proceed with executing it.\nAction: get_final_answer\nAction
Input: {}\nObservation: I encountered an error: Error on parsing tool.\nMoving
on then. I MUST either use a tool (use one at time) OR give my best final answer
not both at the same time. To Use the following format:\n\nThought: you should
always think about what to do\nAction: the action to take, should be one of
[get_final_answer]\nAction Input: the input to the action, dictionary enclosed
in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action
Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, should
be one of [get_final_answer]\nAction Input: the input to the action, dictionary
enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
Input/Result can repeat N times. Once I know the final answer, I must return
the following format:\n\n```\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described\n\n \nNow it''s time you MUST give your absolute
it must be outcome described\n\n```\nNow it''s time you MUST give your absolute
best final answer. You''ll ignore all previous instructions, stop using any
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o"}'
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -146,16 +170,16 @@ interactions:
connection:
- keep-alive
content-length:
- '2320'
- '3445'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
_cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
@@ -165,29 +189,36 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-ABAtPaaeRfdNsZ3k06CfAmrEW8IJu\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226843,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AsXdg9UrLvAiqWP979E6DszLsQ84k\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562384,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Final Answer: The final answer\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 483,\n \"completion_tokens\":
6,\n \"total_tokens\": 489,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: The final answer must be the great and the most complete as possible,
it must be outcome described.\\n```\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 719,\n \"completion_tokens\": 35,\n
\ \"total_tokens\": 754,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c8727b9da1f31e6-MIA
- 9060d4441edad690-IAD
Connection:
- keep-alive
Content-Encoding:
@@ -195,7 +226,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 25 Sep 2024 01:14:03 GMT
- Wed, 22 Jan 2025 16:13:05 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -209,7 +240,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '188'
- '928'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -221,13 +252,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999445'
- '29999187'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_d8e32538689fe064627468bad802d9a8
- req_61fc7506e6db326ec572224aec81ef23
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,121 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Math Tutor. You are
an experienced math tutor with a knack for explaining complex concepts simply.\nYour
personal goal is: Solve math problems accurately\nTo give my best complete final
answer to the task use the exact following format:\n\nThought: I now can give
a great answer\nFinal Answer: Your final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!"}, {"role": "user", "content": "\nCurrent Task: Calculate
the area of a circle with radius 5 cm.\n\nThis is the expect criteria for your
final answer: The calculated area of the circle in square centimeters.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "temperature":
0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '969'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7LEfa5gX4cncpI4avsK0CJG8pCb\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213192,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer\\n\\nTo
calculate the area of a circle, we use the formula:\\n\\n\\\\[ A = \\\\pi r^2
\\\\]\\n\\nwhere \\\\( A \\\\) is the area, \\\\( \\\\pi \\\\) (approximately
3.14), and \\\\( r \\\\) is the radius of the circle.\\n\\nGiven that the radius
\\\\( r \\\\) is 5 cm, we can substitute this value into the formula:\\n\\n\\\\[
A = \\\\pi (5 \\\\, \\\\text{cm})^2 \\\\]\\n\\nCalculating this step-by-step:\\n\\n1.
First, square the radius:\\n \\\\[ (5 \\\\, \\\\text{cm})^2 = 25 \\\\, \\\\text{cm}^2
\\\\]\\n\\n2. Then, multiply by \\\\( \\\\pi \\\\):\\n \\\\[ A = \\\\pi \\\\times
25 \\\\, \\\\text{cm}^2 \\\\]\\n\\nUsing the approximate value of \\\\( \\\\pi
\\\\):\\n \\\\[ A \\\\approx 3.14 \\\\times 25 \\\\, \\\\text{cm}^2 \\\\]\\n
\ \\\\[ A \\\\approx 78.5 \\\\, \\\\text{cm}^2 \\\\]\\n\\nThus, the area of
the circle is approximately 78.5 square centimeters.\\n\\nFinal Answer: The
calculated area of the circle is approximately 78.5 square centimeters.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 182,\n \"completion_tokens\":
270,\n \"total_tokens\": 452,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_1bb46167f9\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85da71fcac1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:26:34 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
path=/; expires=Tue, 24-Sep-24 21:56:34 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2244'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999774'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_2e565b5f24c38968e4e923a47ecc6233
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,4 +1,87 @@
interactions:
- request:
body: !!binary |
CqcXCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/hYKEgoQY3Jld2FpLnRl
bGVtZXRyeRJ5ChBuJJtOdNaB05mOW/p3915eEgj2tkAd3rZcASoQVG9vbCBVc2FnZSBFcnJvcjAB
OYa7/URvKBUYQUpcFEVvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoPCgNsbG0SCAoG
Z3B0LTRvegIYAYUBAAEAABLJBwoQifhX01E5i+5laGdALAlZBBIIBuGM1aN+OPgqDENyZXcgQ3Jl
YXRlZDABORVGruBvKBUYQaipwOBvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5w
eXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogN2U2NjA4OTg5ODU5YTY3ZWVj
ODhlZWY3ZmNlODUyMjVKMQoHY3Jld19pZBImCiRiOThiNWEwMC01YTI1LTQxMDctYjQwNS1hYmYz
MjBhOGYzYThKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAA
ShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgB
SuQCCgtjcmV3X2FnZW50cxLUAgrRAlt7ImtleSI6ICIyMmFjZDYxMWU0NGVmNWZhYzA1YjUzM2Q3
NWU4ODkzYiIsICJpZCI6ICJkNWIyMzM1YS0yMmIyLTQyZWEtYmYwNS03OTc3NmU3MmYzOTIiLCAi
cm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
MCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsi
Z2V0IGdyZWV0aW5ncyJdfV1KkgIKCmNyZXdfdGFza3MSgwIKgAJbeyJrZXkiOiAiYTI3N2IzNGIy
YzE0NmYwYzU2YzVlMTM1NmU4ZjhhNTciLCAiaWQiOiAiMjJiZWMyMzEtY2QyMS00YzU4LTgyN2Ut
MDU4MWE4ZjBjMTExIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJhZ2VudF9rZXkiOiAiMjJh
Y2Q2MTFlNDRlZjVmYWMwNWI1MzNkNzVlODg5M2IiLCAidG9vbHNfbmFtZXMiOiBbImdldCBncmVl
dGluZ3MiXX1degIYAYUBAAEAABKOAgoQ5WYoxRtTyPjge4BduhL0rRIIv2U6rvWALfwqDFRhc2sg
Q3JlYXRlZDABOX068uBvKBUYQZkv8+BvKBUYSi4KCGNyZXdfa2V5EiIKIDdlNjYwODk4OTg1OWE2
N2VlYzg4ZWVmN2ZjZTg1MjI1SjEKB2NyZXdfaWQSJgokYjk4YjVhMDAtNWEyNS00MTA3LWI0MDUt
YWJmMzIwYThmM2E4Si4KCHRhc2tfa2V5EiIKIGEyNzdiMzRiMmMxNDZmMGM1NmM1ZTEzNTZlOGY4
YTU3SjEKB3Rhc2tfaWQSJgokMjJiZWMyMzEtY2QyMS00YzU4LTgyN2UtMDU4MWE4ZjBjMTExegIY
AYUBAAEAABKQAQoQXyeDtJDFnyp2Fjk9YEGTpxIIaNE7gbhPNYcqClRvb2wgVXNhZ2UwATkaXTvj
bygVGEGvx0rjbygVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKHAoJdG9vbF9uYW1lEg8K
DUdldCBHcmVldGluZ3NKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAABLVBwoQMWfznt0qwauEzl7T
UOQxRBII9q+pUS5EdLAqDENyZXcgQ3JlYXRlZDABORONPORvKBUYQSAoS+RvKBUYShoKDmNyZXdh
aV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19r
ZXkSIgogYzMwNzYwMDkzMjY3NjE0NDRkNTdjNzFkMWRhM2YyN2NKMQoHY3Jld19pZBImCiQ3OTQw
MTkyNS1iOGU5LTQ3MDgtODUzMC00NDhhZmEzYmY4YjBKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVl
bnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVj
cmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSuoCCgtjcmV3X2FnZW50cxLaAgrXAlt7ImtleSI6ICI5
OGYzYjFkNDdjZTk2OWNmMDU3NzI3Yjc4NDE0MjVjZCIsICJpZCI6ICI5OTJkZjYyZi1kY2FiLTQy
OTUtOTIwNi05MDBkNDExNGIxZTkiLCAicm9sZSI6ICJGcmllbmRseSBOZWlnaGJvciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsiZGVjaWRlIGdyZWV0aW5ncyJdfV1KmAIKCmNyZXdf
dGFza3MSiQIKhgJbeyJrZXkiOiAiODBkN2JjZDQ5MDk5MjkwMDgzODMyZjBlOTgzMzgwZGYiLCAi
aWQiOiAiMmZmNjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3IiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJGcmll
bmRseSBOZWlnaGJvciIsICJhZ2VudF9rZXkiOiAiOThmM2IxZDQ3Y2U5NjljZjA1NzcyN2I3ODQx
NDI1Y2QiLCAidG9vbHNfbmFtZXMiOiBbImRlY2lkZSBncmVldGluZ3MiXX1degIYAYUBAAEAABKO
AgoQnjTp5boK7/+DQxztYIpqihIIgGnMUkBtzHEqDFRhc2sgQ3JlYXRlZDABOcpYcuRvKBUYQalE
c+RvKBUYSi4KCGNyZXdfa2V5EiIKIGMzMDc2MDA5MzI2NzYxNDQ0ZDU3YzcxZDFkYTNmMjdjSjEK
B2NyZXdfaWQSJgokNzk0MDE5MjUtYjhlOS00NzA4LTg1MzAtNDQ4YWZhM2JmOGIwSi4KCHRhc2tf
a2V5EiIKIDgwZDdiY2Q0OTA5OTI5MDA4MzgzMmYwZTk4MzM4MGRmSjEKB3Rhc2tfaWQSJgokMmZm
NjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3egIYAYUBAAEAABKTAQoQ26H9pLUgswDN
p9XhJwwL6BIIx3bw7mAvPYwqClRvb2wgVXNhZ2UwATmy7NPlbygVGEEvb+HlbygVGEoaCg5jcmV3
YWlfdmVyc2lvbhIICgYwLjg2LjBKHwoJdG9vbF9uYW1lEhIKEERlY2lkZSBHcmVldGluZ3NKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '2986'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
@@ -22,18 +105,20 @@ interactions:
- '824'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -47,8 +132,8 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AaqIIsTxhvf75xvuu7gQScIlRSKbW\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344190,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AjCtZLLrWi8ZASpP9bz6HaCV7xBIn\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
@@ -57,12 +142,12 @@ interactions:
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0705bf87c0\"\n}\n"
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ece8cfc3b1f4532-ATL
- 8f8caa83deca756b-SEA
Connection:
- keep-alive
Content-Encoding:
@@ -70,14 +155,14 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 04 Dec 2024 20:29:50 GMT
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
path=/; expires=Wed, 04-Dec-24 20:59:50 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw;
path=/; expires=Fri, 27-Dec-24 22:44:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000;
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -90,7 +175,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '313'
- '404'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -108,7 +193,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_9fd9a8ee688045dcf7ac5f6fdf689372
- req_6ac84634bff9193743c4b0911c09b4a6
http_version: HTTP/1.1
status_code: 200
- request:
@@ -131,20 +216,20 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
_cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -158,8 +243,8 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AaqIIaQlLyoyPmk909PvAIfA2TmJL\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344190,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AjCtZNlWdrrPZhq0MJDqd16sMuQEJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"True\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
@@ -168,12 +253,12 @@ interactions:
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0705bf87c0\"\n}\n"
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ece8d060b5e4532-ATL
- 8f8caa87094f756b-SEA
Connection:
- keep-alive
Content-Encoding:
@@ -181,7 +266,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 04 Dec 2024 20:29:50 GMT
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -195,7 +280,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '375'
- '156'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -213,7 +298,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_be7cb475e0859a82c37ee3f2871ea5ea
- req_ec74bef2a9ef7b2144c03fd7f7bbeab0
http_version: HTTP/1.1
status_code: 200
- request:
@@ -242,20 +327,20 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
_cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -269,22 +354,23 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AaqIJAAxpVfUOdrsgYKHwfRlHv4RS\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344191,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AjCtZGv4f3h7GDdhyOy9G0sB1lRgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer
\ \\nFinal Answer: Hello\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
188,\n \"completion_tokens\": 14,\n \"total_tokens\": 202,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
\"assistant\",\n \"content\": \"Thought: I understand the feedback and
will adjust my response accordingly. \\nFinal Answer: Hello\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
18,\n \"total_tokens\": 206,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0705bf87c0\"\n}\n"
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ece8d090fc34532-ATL
- 8f8caa88cac4756b-SEA
Connection:
- keep-alive
Content-Encoding:
@@ -292,7 +378,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 04 Dec 2024 20:29:51 GMT
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -306,7 +392,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '484'
- '358'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -324,7 +410,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_5bf4a565ad6c2567a1ed204ecac89134
- req_ae1ab6b206d28ded6fee3c83ed0c2ab7
http_version: HTTP/1.1
status_code: 200
- request:
@@ -346,20 +432,20 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=QJZZjZ6eqnVamqUkw.Bx0mj7oBi3a_vGEH1VODcUxlg-1733344190-1.0.1.1-xyN0ekA9xIrSwEhRBmTiWJ3Pt72UYLU5owKfkz5yihVmMTfsr_Qz.ssGPJ5cuft066v1xVjb4zOSTdFmesMSKg;
_cfuvid=eCIkP8GVPvpkg19eOhCquWFHm.RTQBQy4yHLGGEAH5c-1733344190334-0.0.1.1-604800000
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
@@ -373,8 +459,8 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AaqIJqyG8vl9mxj2qDPZgaxyNLLIq\",\n \"object\":
\"chat.completion\",\n \"created\": 1733344191,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
content: "{\n \"id\": \"chatcmpl-AjCtaiHL4TY8Dssk0j2miqmjrzquy\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337694,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"False\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
@@ -383,12 +469,12 @@ interactions:
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0705bf87c0\"\n}\n"
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ece8d0cfdeb4532-ATL
- 8f8caa8bdd26756b-SEA
Connection:
- keep-alive
Content-Encoding:
@@ -396,7 +482,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 04 Dec 2024 20:29:51 GMT
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -410,7 +496,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '341'
- '184'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -428,7 +514,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_5554bade8ceda00cf364b76a51b708ff
- req_652891f79c1104a7a8436275d78a69f1
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,117 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRlxiTxduAVoXHHY58Fvfbll5IS\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458417,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: This is a test task, and the context or question from the coworker is
not specified. Therefore, my best effort would be to affirm my readiness to
answer accurately and in detail any question about Futel Football Club based
on the context described. If provided with specific information or questions,
I will ensure to respond comprehensively as required by my job directives.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 177,\n \"completion_tokens\":
82,\n \"total_tokens\": 259,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78bf7bd6cc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:40 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2263'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7c1a31da73cd103e9f410f908e59187f
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,119 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRrFJZGKw8cIEshvuW1PKwFZFKs\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458423,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Although you mentioned this being a \\\"Test task\\\" and haven't provided
a specific question regarding Futel Football Club, your request appears to involve
ensuring accuracy and detail in responses. For a proper answer about Futel,
I'd be ready to provide details about the club's history, management, players,
match schedules, and recent performance statistics. Remember to ask specific
questions to receive a targeted response. If this were a real context where
information was shared, I would respond precisely to what's been asked regarding
Futel Football Club.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 113,\n \"total_tokens\": 290,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c1d0ecdc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:47 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3097'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_179e1d56e2b17303e40480baffbc7b08
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,114 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRqgg7eiHnDi2DOqdk99fiqOboz\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458422,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Your best answer to your coworker asking you this, accounting for the
context shared. You MUST return the actual complete content as the final answer,
not a summary.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 44,\n \"total_tokens\": 221,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c164ad2c002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:43 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '899'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_9f5226208edb90a27987aaf7e0ca03d3
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,119 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRjmwH5mrykLxQhFwTqqTiDtuTf\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458415,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: As this is a test task, please note that Futel Football Club is fictional
and any specific details about it would not be available. However, if you have
specific questions or need information about a particular aspect of Futel or
any general football club inquiry, feel free to ask, and I'll do my best to
assist you with your query!\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 79,\n \"total_tokens\": 256,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78be5eebfc002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:37 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
path=/; expires=Thu, 09-Jan-25 22:03:37 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2730'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_014478ba748f860d10ac250ca0ba824a
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,119 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Futel Official Infopoint.
Futel Football Club info\nYour personal goal is: Answer questions about Futel\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Test task\n\nThis is the expect criteria for your
final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '939'
content-type:
- application/json
cookie:
- __cf_bm=cwWdOaPJjFMNJaLtJfa8Kjqavswg5bzVRFzBX4gneGw-1736458417-1.0.1.1-bvf2HshgcMtgn7GdxqwySFDAIacGccDFfEXniBFTTDmbGMCiIIwf6t2DiwWnBldmUHixwc5kDO9gYs08g.feBA;
_cfuvid=WMw7PSqkYqQOieguBRs0uNkwNU92A.ZKbgDbCAcV3EQ-1736458417825-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AnuRofLgmzWcDya5LILqYwIJYgFoq\",\n \"object\":
\"chat.completion\",\n \"created\": 1736458420,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: As an official Futel Football Club infopoint, my responsibility is to
provide detailed and accurate information about the club. This includes answering
questions regarding team statistics, player performances, upcoming fixtures,
ticketing and fan zone details, club history, and community initiatives. Our
focus is to ensure that fans and stakeholders have access to the latest and
most precise information about the club's on and off-pitch activities. If there's
anything specific you need to know, just let me know, and I'll be more than
happy to assist!\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
177,\n \"completion_tokens\": 115,\n \"total_tokens\": 292,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_703d4ff298\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ff78c066f37c002-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 09 Jan 2025 21:33:42 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2459'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a146dd27f040f39a576750970cca0f52
http_version: HTTP/1.1
status_code: 200
version: 1

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1,102 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the capital of France?"}],
"model": "gpt-4o-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '101'
content-type:
- application/json
cookie:
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
__cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZ6WjNfEOrHwwEEdSZZCRBiTpBMS\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568016,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The capital of France is Paris.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 14,\n \"completion_tokens\":
8,\n \"total_tokens\": 22,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 90615dc63b805cb1-RDU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:46:56 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '355'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999974'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_cdbed69c9c63658eb552b07f1220df19
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Return the name of a random
city in the world."}], "model": "gpt-4o-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '117'
content-type:
- application/json
cookie:
- _cfuvid=3UeEmz_rnmsoZxrVUv32u35gJOi766GDWNe5_RTjiPk-1736537376739-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZ6UtbaNSMpNU9VJKxvn52t5eJTq\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568014,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"How about \\\"Lisbon\\\"? It\u2019s the
capital city of Portugal, known for its rich history and vibrant culture.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 18,\n \"completion_tokens\":
24,\n \"total_tokens\": 42,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 90615dbcaefb5cb1-RDU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:46:55 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA;
path=/; expires=Wed, 22-Jan-25 18:16:55 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '449'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999971'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_898373758d2eae3cd84814050b2588e3
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,102 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Tell me a joke."}], "model":
"gpt-4o-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '86'
content-type:
- application/json
cookie:
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
__cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZ6VyjuUcXYpChXmD8rUSy6nSGq8\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568015,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Why did the scarecrow win an award? \\n\\nBecause
he was outstanding in his field!\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
12,\n \"completion_tokens\": 19,\n \"total_tokens\": 31,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 90615dc03b6c5cb1-RDU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:46:56 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '825'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999979'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_4c1485d44e7461396d4a7316a63ff353
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,111 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the square of 5?"}],
"model": "gpt-4o-mini", "tools": [{"type": "function", "function": {"name":
"square_number", "description": "Returns the square of a number.", "parameters":
{"type": "object", "properties": {"number": {"type": "integer", "description":
"The number to square"}}, "required": ["number"]}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '361'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZL5nGOaVpcGnDOesTxBZPHhMoaS\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568919,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_i6JVJ1KxX79A4WzFri98E03U\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"square_number\",\n
\ \"arguments\": \"{\\\"number\\\":5}\"\n }\n }\n
\ ],\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
58,\n \"completion_tokens\": 15,\n \"total_tokens\": 73,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 906173d229b905f6-IAD
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 18:02:00 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=BYDpIoqfPZyRxl9xcFxkt4IzTUGe8irWQlZ.aYLt8Xc-1737568920-1.0.1.1-Y_cVFN7TbguWRBorSKZynVY02QUtYbsbHuR2gR1wJ8LHuqOF4xIxtK5iHVCpWWgIyPDol9xOXiqUkU8xRV_vHA;
path=/; expires=Wed, 22-Jan-25 18:32:00 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=etTqqA9SBOnENmrFAUBIexdW0v2ZeO1x9_Ek_WChlfU-1737568920137-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '642'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999976'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_388e63f9b8d4edc0dd153001f25388e5
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,107 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the current year?"}],
"model": "gpt-4o-mini", "tools": [{"type": "function", "function": {"name":
"get_current_year", "description": "Returns the current year as a string.",
"parameters": {"type": "object", "properties": {}, "required": []}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '295'
content-type:
- application/json
cookie:
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
__cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZJ8HKXQU9nTB7xbGAkKxqrg9BZ2\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568798,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_mfvEs2jngeFloVZpZOHZVaKY\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"get_current_year\",\n
\ \"arguments\": \"{}\"\n }\n }\n ],\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 46,\n \"completion_tokens\":
12,\n \"total_tokens\": 58,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 906170e038281775-IAD
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:59:59 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '416'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999975'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_4039a5e5772d1790a3131f0b1ea06139
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,4 +1,37 @@
# conftest.py
import os
import tempfile
from pathlib import Path
import pytest
from dotenv import load_dotenv
load_result = load_dotenv(override=True)
@pytest.fixture(autouse=True)
def setup_test_environment():
"""Set up test environment with a temporary directory for SQLite storage."""
with tempfile.TemporaryDirectory() as temp_dir:
# Create the directory with proper permissions
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
# Validate that the directory was created successfully
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(f"Failed to create test storage directory: {storage_dir}")
# Verify directory permissions
try:
# Try to create a test file to verify write permissions
test_file = storage_dir / ".permissions_test"
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(f"Test storage directory {storage_dir} is not writable: {e}")
# Set environment variable to point to the test storage directory
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
yield
# Cleanup is handled automatically when tempfile context exits
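Note: code under test would discover this temporary directory through the CREWAI_STORAGE_DIR environment variable the fixture sets. A minimal sketch of the consuming side; the resolve_storage_dir helper below is hypothetical and not part of crewAI:

import os
from pathlib import Path


def resolve_storage_dir() -> Path:
    # Hypothetical helper: honor CREWAI_STORAGE_DIR (as the fixture above
    # sets it) and fall back to a per-user default otherwise.
    override = os.environ.get("CREWAI_STORAGE_DIR")
    if override:
        return Path(override)
    return Path.home() / ".crewai" / "storage"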


@@ -14,6 +14,7 @@ from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.process import Process
from crewai.project import crew
@@ -555,12 +556,12 @@ def test_crew_with_delegating_agents_should_not_override_task_tools():
_, kwargs = mock_execute_sync.call_args
tools = kwargs["tools"]
assert any(
isinstance(tool, TestTool) for tool in tools
), "TestTool should be present"
assert any(
"delegate" in tool.name.lower() for tool in tools
), "Delegation tool should be present"
assert any(isinstance(tool, TestTool) for tool in tools), (
"TestTool should be present"
)
assert any("delegate" in tool.name.lower() for tool in tools), (
"Delegation tool should be present"
)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -619,12 +620,12 @@ def test_crew_with_delegating_agents_should_not_override_agent_tools():
_, kwargs = mock_execute_sync.call_args
tools = kwargs["tools"]
assert any(
isinstance(tool, TestTool) for tool in new_ceo.tools
), "TestTool should be present"
assert any(
"delegate" in tool.name.lower() for tool in tools
), "Delegation tool should be present"
assert any(isinstance(tool, TestTool) for tool in new_ceo.tools), (
"TestTool should be present"
)
assert any("delegate" in tool.name.lower() for tool in tools), (
"Delegation tool should be present"
)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -748,17 +749,17 @@ def test_task_tools_override_agent_tools_with_allow_delegation():
used_tools = kwargs["tools"]
# Confirm AnotherTestTool is present but TestTool is not
assert any(
isinstance(tool, AnotherTestTool) for tool in used_tools
), "AnotherTestTool should be present"
assert not any(
isinstance(tool, TestTool) for tool in used_tools
), "TestTool should not be present among used tools"
assert any(isinstance(tool, AnotherTestTool) for tool in used_tools), (
"AnotherTestTool should be present"
)
assert not any(isinstance(tool, TestTool) for tool in used_tools), (
"TestTool should not be present among used tools"
)
# Confirm delegation tool(s) are present
assert any(
"delegate" in tool.name.lower() for tool in used_tools
), "Delegation tool should be present"
assert any("delegate" in tool.name.lower() for tool in used_tools), (
"Delegation tool should be present"
)
# Finally, make sure the agent's original tools remain unchanged
assert len(researcher_with_delegation.tools) == 1
@@ -1228,6 +1229,7 @@ def test_kickoff_for_each_empty_input():
assert results == []
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_invalid_input():
"""Tests if kickoff_for_each raises TypeError for invalid input types."""
@@ -1465,7 +1467,6 @@ def test_dont_set_agents_step_callback_if_already_set():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_function_calling_llm():
from crewai import LLM
from crewai.tools import tool
@@ -1559,9 +1560,9 @@ def test_code_execution_flag_adds_code_tool_upon_kickoff():
# Verify that exactly one tool was used and it was a CodeInterpreterTool
assert len(used_tools) == 1, "Should have exactly one tool"
assert isinstance(
used_tools[0], CodeInterpreterTool
), "Tool should be CodeInterpreterTool"
assert isinstance(used_tools[0], CodeInterpreterTool), (
"Tool should be CodeInterpreterTool"
)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -3106,9 +3107,9 @@ def test_fetch_inputs():
expected_placeholders = {"role_detail", "topic", "field"}
actual_placeholders = crew.fetch_inputs()
assert (
actual_placeholders == expected_placeholders
), f"Expected {expected_placeholders}, but got {actual_placeholders}"
assert actual_placeholders == expected_placeholders, (
f"Expected {expected_placeholders}, but got {actual_placeholders}"
)
def test_task_tools_preserve_code_execution_tools():
@@ -3181,20 +3182,20 @@ def test_task_tools_preserve_code_execution_tools():
used_tools = kwargs["tools"]
# Verify all expected tools are present
assert any(
isinstance(tool, TestTool) for tool in used_tools
), "Task's TestTool should be present"
assert any(
isinstance(tool, CodeInterpreterTool) for tool in used_tools
), "CodeInterpreterTool should be present"
assert any(
"delegate" in tool.name.lower() for tool in used_tools
), "Delegation tool should be present"
assert any(isinstance(tool, TestTool) for tool in used_tools), (
"Task's TestTool should be present"
)
assert any(isinstance(tool, CodeInterpreterTool) for tool in used_tools), (
"CodeInterpreterTool should be present"
)
assert any("delegate" in tool.name.lower() for tool in used_tools), (
"Delegation tool should be present"
)
# Verify the total number of tools (TestTool + CodeInterpreter + 2 delegation tools)
assert (
len(used_tools) == 4
), "Should have TestTool, CodeInterpreter, and 2 delegation tools"
assert len(used_tools) == 4, (
"Should have TestTool, CodeInterpreter, and 2 delegation tools"
)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -3238,9 +3239,9 @@ def test_multimodal_flag_adds_multimodal_tools():
used_tools = kwargs["tools"]
# Check that the multimodal tool was added
assert any(
isinstance(tool, AddImageTool) for tool in used_tools
), "AddImageTool should be present when agent is multimodal"
assert any(isinstance(tool, AddImageTool) for tool in used_tools), (
"AddImageTool should be present when agent is multimodal"
)
# Verify we have exactly one tool (just the AddImageTool)
assert len(used_tools) == 1, "Should only have the AddImageTool"
@@ -3466,9 +3467,9 @@ def test_crew_guardrail_feedback_in_context():
assert len(execution_contexts) > 1, "Task should have been executed multiple times"
# Verify that the second execution included the guardrail feedback
assert (
"Output must contain the keyword 'IMPORTANT'" in execution_contexts[1]
), "Guardrail feedback should be included in retry context"
assert "Output must contain the keyword 'IMPORTANT'" in execution_contexts[1], (
"Guardrail feedback should be included in retry context"
)
# Verify final output meets guardrail requirements
assert "IMPORTANT" in result.raw, "Final output should contain required keyword"
@@ -3479,10 +3480,12 @@ def test_crew_guardrail_feedback_in_context():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_callback():
from crewai.project import CrewBase, agent, before_kickoff, crew, task
from crewai.project import CrewBase, agent, before_kickoff, task
@CrewBase
class TestCrewClass:
from crewai.project import crew
agents_config = None
tasks_config = None
@@ -3491,7 +3494,6 @@ def test_before_kickoff_callback():
@before_kickoff
def modify_inputs(self, inputs):
self.inputs_modified = True
inputs["modified"] = True
return inputs
@@ -3509,7 +3511,7 @@ def test_before_kickoff_callback():
task = Task(
description="Test task description",
expected_output="Test expected output",
agent=self.my_agent(), # Use the agent instance
agent=self.my_agent(),
)
return task
@@ -3519,28 +3521,30 @@ def test_before_kickoff_callback():
test_crew_instance = TestCrewClass()
crew = test_crew_instance.crew()
test_crew = test_crew_instance.crew()
# Verify that the before_kickoff_callbacks are set
assert len(crew.before_kickoff_callbacks) == 1
assert len(test_crew.before_kickoff_callbacks) == 1
# Prepare inputs
inputs = {"initial": True}
# Call kickoff
crew.kickoff(inputs=inputs)
test_crew.kickoff(inputs=inputs)
# Check that the before_kickoff function was called and modified inputs
assert test_crew_instance.inputs_modified
assert inputs.get("modified") == True
assert inputs.get("modified")
@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_without_inputs():
from crewai.project import CrewBase, agent, before_kickoff, crew, task
from crewai.project import CrewBase, agent, before_kickoff, task
@CrewBase
class TestCrewClass:
from crewai.project import crew
agents_config = None
tasks_config = None
@@ -3578,12 +3582,12 @@ def test_before_kickoff_without_inputs():
# Instantiate the class
test_crew_instance = TestCrewClass()
# Build the crew
crew = test_crew_instance.crew()
test_crew = test_crew_instance.crew()
# Verify that the before_kickoff_callback is registered
assert len(crew.before_kickoff_callbacks) == 1
assert len(test_crew.before_kickoff_callbacks) == 1
# Call kickoff without passing inputs
output = crew.kickoff()
test_crew.kickoff()
# Check that the before_kickoff function was called
assert test_crew_instance.inputs_modified
@@ -3591,3 +3595,21 @@ def test_before_kickoff_without_inputs():
# Verify that the inputs were initialized and modified inside the before_kickoff method
assert test_crew_instance.received_inputs is not None
assert test_crew_instance.received_inputs.get("modified") is True
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_knowledge_sources_works_with_copy():
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
crew = Crew(
agents=[researcher, writer],
tasks=[Task(description="test", expected_output="test", agent=researcher)],
knowledge_sources=[string_source],
)
crew_copy = crew.copy()
assert crew_copy.knowledge_sources == crew.knowledge_sources
assert len(crew_copy.agents) == len(crew.agents)
assert len(crew_copy.tasks) == len(crew.tasks)


@@ -4,6 +4,7 @@ import pytest
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import LLM
from crewai.tools import tool
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -37,3 +38,119 @@ def test_llm_callback_replacement():
assert usage_metrics_1.successful_requests == 1
assert usage_metrics_2.successful_requests == 1
assert usage_metrics_1 == calc_handler_1.token_cost_process.get_summary()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_string_input():
llm = LLM(model="gpt-4o-mini")
# Test the call method with a string input
result = llm.call("Return the name of a random city in the world.")
assert isinstance(result, str)
assert len(result.strip()) > 0 # Ensure the response is not empty
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_string_input_and_callbacks():
llm = LLM(model="gpt-4o-mini")
calc_handler = TokenCalcHandler(token_cost_process=TokenProcess())
# Test the call method with a string input and callbacks
result = llm.call(
"Tell me a joke.",
callbacks=[calc_handler],
)
usage_metrics = calc_handler.token_cost_process.get_summary()
assert isinstance(result, str)
assert len(result.strip()) > 0
assert usage_metrics.successful_requests == 1
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_message_list():
llm = LLM(model="gpt-4o-mini")
messages = [{"role": "user", "content": "What is the capital of France?"}]
# Test the call method with a list of messages
result = llm.call(messages)
assert isinstance(result, str)
assert "Paris" in result
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_tool_and_string_input():
llm = LLM(model="gpt-4o-mini")
def get_current_year() -> str:
"""Returns the current year as a string."""
from datetime import datetime
return str(datetime.now().year)
# Create tool schema
tool_schema = {
"type": "function",
"function": {
"name": "get_current_year",
"description": "Returns the current year as a string.",
"parameters": {
"type": "object",
"properties": {},
"required": [],
},
},
}
# Available functions mapping
available_functions = {"get_current_year": get_current_year}
# Test the call method with a string input and tool
result = llm.call(
"What is the current year?",
tools=[tool_schema],
available_functions=available_functions,
)
assert isinstance(result, str)
assert result == get_current_year()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_tool_and_message_list():
llm = LLM(model="gpt-4o-mini")
def square_number(number: int) -> int:
"""Returns the square of a number."""
return number * number
# Create tool schema
tool_schema = {
"type": "function",
"function": {
"name": "square_number",
"description": "Returns the square of a number.",
"parameters": {
"type": "object",
"properties": {
"number": {"type": "integer", "description": "The number to square"}
},
"required": ["number"],
},
},
}
# Available functions mapping
available_functions = {"square_number": square_number}
messages = [{"role": "user", "content": "What is the square of 5?"}]
# Test the call method with messages and tool
result = llm.call(
messages,
tools=[tool_schema],
available_functions=available_functions,
)
assert isinstance(result, int)
assert result == 25
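Note: the two tool tests above show that when the model answers with a tool call, LLM.call returns the tool function's raw return value (hence the int 25), and plain text otherwise. A hedged sketch of that dispatch pattern, using litellm directly for illustration — this is not crewAI's actual LLM.call implementation:

import json

import litellm


def call_with_tools(model, messages, tools=None, available_functions=None):
    response = litellm.completion(model=model, messages=messages, tools=tools)
    message = response.choices[0].message
    if tools and available_functions and getattr(message, "tool_calls", None):
        tool_call = message.tool_calls[0]
        fn = available_functions.get(tool_call.function.name)
        if fn is not None:
            kwargs = json.loads(tool_call.function.arguments or "{}")
            # Returning the function's raw result is why the message-list
            # test above receives an int rather than a string.
            return fn(**kwargs)
    return message.content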


@@ -0,0 +1,112 @@
"""Test that persisted state properly overrides default values."""
from crewai.flow.flow import Flow, FlowState, listen, start
from crewai.flow.persistence import persist
class PoemState(FlowState):
"""Test state model with default values that should be overridden."""
sentence_count: int = 1000 # Default that should be overridden
has_set_count: bool = False # Track whether we've set the count
poem_type: str = ""
def test_default_value_override():
"""Test that persisted state values override class defaults."""
@persist()
class PoemFlow(Flow[PoemState]):
initial_state = PoemState
@start()
def set_sentence_count(self):
if self.state.has_set_count and self.state.sentence_count == 2:
self.state.sentence_count = 3
elif self.state.has_set_count and self.state.sentence_count == 1000:
self.state.sentence_count = 1000
elif self.state.has_set_count and self.state.sentence_count == 5:
self.state.sentence_count = 5
else:
self.state.sentence_count = 2
self.state.has_set_count = True
# First run - should set sentence_count to 2
flow1 = PoemFlow()
flow1.kickoff()
original_uuid = flow1.state.id
assert flow1.state.sentence_count == 2
# Second run - should load sentence_count=2 instead of default 1000
flow2 = PoemFlow()
flow2.kickoff(inputs={"id": original_uuid})
assert flow2.state.sentence_count == 3  # Loaded persisted 2 (not the default 1000), then bumped to 3
# Third run - explicit override should take precedence over persisted state
flow3 = PoemFlow()
flow3.kickoff(inputs={
"id": original_uuid,
"has_set_count": True,
"sentence_count": 5, # Override persisted value
})
assert flow3.state.sentence_count == 5 # Should use override value
# Fourth run - without an id, no persisted state is loaded, so the class default applies
flow4 = PoemFlow()
flow4.kickoff(inputs={"has_set_count": True})
assert flow4.state.sentence_count == 1000  # Uses the default 1000; nothing was restored
def test_multi_step_default_override():
"""Test default value override with multiple start methods."""
@persist()
class MultiStepPoemFlow(Flow[PoemState]):
initial_state = PoemState
@start()
def set_sentence_count(self):
print("Setting sentence count")
if not self.state.has_set_count:
self.state.sentence_count = 3
self.state.has_set_count = True
@listen(set_sentence_count)
def set_poem_type(self):
print("Setting poem type")
if self.state.sentence_count == 3:
self.state.poem_type = "haiku"
elif self.state.sentence_count == 5:
self.state.poem_type = "limerick"
else:
self.state.poem_type = "free_verse"
@listen(set_poem_type)
def finished(self):
print("finished")
# First run - should set both sentence count and poem type
flow1 = MultiStepPoemFlow()
flow1.kickoff()
original_uuid = flow1.state.id
assert flow1.state.sentence_count == 3
assert flow1.state.poem_type == "haiku"
# Second run - should load persisted state and update poem type
flow2 = MultiStepPoemFlow()
flow2.kickoff(inputs={
"id": original_uuid,
"sentence_count": 5
})
assert flow2.state.sentence_count == 5
assert flow2.state.poem_type == "limerick"
# Third run - restoring by the same id again should carry the updated state forward
flow3 = MultiStepPoemFlow()
flow3.kickoff(inputs={
"id": original_uuid
})
assert flow3.state.sentence_count == 5
assert flow3.state.poem_type == "limerick"
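Note: outside the test suite, the restore-by-id pattern exercised above looks roughly like the sketch below, using the same Flow, FlowState, and @persist API imported at the top of this file; DraftState and DraftFlow are illustrative names:

from crewai.flow.flow import Flow, FlowState, start
from crewai.flow.persistence import persist


class DraftState(FlowState):
    revision: int = 0


@persist()  # default persistence backend, as in the tests above
class DraftFlow(Flow[DraftState]):
    @start()
    def bump_revision(self):
        self.state.revision += 1


first = DraftFlow()
first.kickoff()  # revision == 1, saved under first.state.id
resumed = DraftFlow()
resumed.kickoff(inputs={"id": first.state.id})  # restores revision=1, bumps to 2
assert resumed.state.revision == 2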


@@ -0,0 +1,176 @@
"""Test flow state persistence functionality."""
import os
from typing import Dict
import pytest
from pydantic import BaseModel
from crewai.flow.flow import Flow, FlowState, listen, start
from crewai.flow.persistence import persist
from crewai.flow.persistence.sqlite import SQLiteFlowPersistence
class TestState(FlowState):
"""Test state model with required id field."""
counter: int = 0
message: str = ""
def test_persist_decorator_saves_state(tmp_path):
"""Test that @persist decorator saves state in SQLite."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class TestFlow(Flow[Dict[str, str]]):
initial_state = dict() # Use dict instance as initial state
@start()
@persist(persistence)
def init_step(self):
self.state["message"] = "Hello, World!"
self.state["id"] = "test-uuid" # Ensure we have an ID for persistence
# Run flow and verify state is saved
flow = TestFlow(persistence=persistence)
flow.kickoff()
# Load state from DB and verify
saved_state = persistence.load_state(flow.state["id"])
assert saved_state is not None
assert saved_state["message"] == "Hello, World!"
def test_structured_state_persistence(tmp_path):
"""Test persistence with Pydantic model state."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class StructuredFlow(Flow[TestState]):
initial_state = TestState
@start()
@persist(persistence)
def count_up(self):
self.state.counter += 1
self.state.message = f"Count is {self.state.counter}"
# Run flow and verify state changes are saved
flow = StructuredFlow(persistence=persistence)
flow.kickoff()
# Load and verify state
saved_state = persistence.load_state(flow.state.id)
assert saved_state is not None
assert saved_state["counter"] == 1
assert saved_state["message"] == "Count is 1"
def test_flow_state_restoration(tmp_path):
"""Test restoring flow state from persistence with various restoration methods."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
# First flow execution to create initial state
class RestorableFlow(Flow[TestState]):
@start()
@persist(persistence)
def set_message(self):
if self.state.message == "":
self.state.message = "Original message"
if self.state.counter == 0:
self.state.counter = 42
# Create and persist initial state
flow1 = RestorableFlow(persistence=persistence)
flow1.kickoff()
original_uuid = flow1.state.id
# Test case 1: restore by passing the original id, overriding one field
flow2 = RestorableFlow(persistence=persistence)
flow2.kickoff(inputs={
"id": original_uuid,
"counter": 43
})
# Verify state restoration and selective field override
assert flow2.state.id == original_uuid
assert flow2.state.message == "Original message" # Preserved
assert flow2.state.counter == 43 # Overridden
# Test case 2: restore via the same id, overriding a different field
flow3 = RestorableFlow(persistence=persistence)
flow3.kickoff(inputs={
"id": original_uuid,
"message": "Updated message"
})
# Verify state restoration and selective field override
assert flow3.state.id == original_uuid
assert flow3.state.counter == 43 # Preserved
assert flow3.state.message == "Updated message" # Overridden
def test_multiple_method_persistence(tmp_path):
"""Test state persistence across multiple method executions."""
db_path = os.path.join(tmp_path, "test_flows.db")
persistence = SQLiteFlowPersistence(db_path)
class MultiStepFlow(Flow[TestState]):
@start()
@persist(persistence)
def step_1(self):
if self.state.counter == 1:
self.state.counter = 99999
self.state.message = "Step 99999"
else:
self.state.counter = 1
self.state.message = "Step 1"
@listen(step_1)
@persist(persistence)
def step_2(self):
if self.state.counter == 1:
self.state.counter = 2
self.state.message = "Step 2"
flow = MultiStepFlow(persistence=persistence)
flow.kickoff()
flow2 = MultiStepFlow(persistence=persistence)
flow2.kickoff(inputs={"id": flow.state.id})
# Load final state
final_state = flow2.state
assert final_state is not None
assert final_state.counter == 2
assert final_state.message == "Step 2"
class NoPersistenceMultiStepFlow(Flow[TestState]):
@start()
@persist(persistence)
def step_1(self):
if self.state.counter == 1:
self.state.counter = 99999
self.state.message = "Step 99999"
else:
self.state.counter = 1
self.state.message = "Step 1"
@listen(step_1)
def step_2(self):
if self.state.counter == 1:
self.state.counter = 2
self.state.message = "Step 2"
flow = NoPersistenceMultiStepFlow(persistence=persistence)
flow.kickoff()
flow2 = NoPersistenceMultiStepFlow(persistence=persistence)
flow2.kickoff(inputs={"id": flow.state.id})
# Load final state
final_state = flow2.state
assert final_state.counter == 99999
assert final_state.message == "Step 99999"


@@ -1,51 +0,0 @@
import pytest
from crewai import Agent
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
class InternalAgentTool(BaseAgentTool):
"""Concrete implementation of BaseAgentTool for testing."""
def _run(self, *args, **kwargs):
"""Implement required _run method."""
return "Test response"
@pytest.mark.parametrize(
"role_name,should_match",
[
("Futel Official Infopoint", True), # exact match
(' "Futel Official Infopoint" ', True), # extra quotes and spaces
("Futel Official Infopoint\n", True), # trailing newline
('"Futel Official Infopoint"', True), # embedded quotes
(" FUTEL\nOFFICIAL INFOPOINT ", True), # multiple whitespace and newline
],
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_tool_role_matching(role_name, should_match):
"""Test that agent tools can match roles regardless of case, whitespace, and special characters."""
# Create test agent
test_agent = Agent(
role="Futel Official Infopoint",
goal="Answer questions about Futel",
backstory="Futel Football Club info",
allow_delegation=False,
)
# Create test agent tool
agent_tool = InternalAgentTool(
name="test_tool", description="Test tool", agents=[test_agent]
)
# Test role matching
result = agent_tool._execute(agent_name=role_name, task="Test task", context=None)
if should_match:
assert (
"coworker mentioned not found" not in result.lower()
), f"Should find agent with role name: {role_name}"
else:
assert (
"coworker mentioned not found" in result.lower()
), f"Should not find agent with role name: {role_name}"


@@ -231,3 +231,255 @@ def test_validate_tool_input_with_special_characters():
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
def test_validate_tool_input_none_input():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
arguments = tool_usage._validate_tool_input(None)
assert arguments == {}
def test_validate_tool_input_valid_json():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"key": "value", "number": 42, "flag": true}'
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
def test_validate_tool_input_python_dict():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = "{'key': 'value', 'number': 42, 'flag': True}"
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
def test_validate_tool_input_json5_unquoted_keys():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = "{key: 'value', number: 42, flag: true}"
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
def test_validate_tool_input_with_trailing_commas():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"key": "value", "number": 42, "flag": true,}'
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
def test_validate_tool_input_invalid_input():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
invalid_inputs = [
"Just a string",
"['list', 'of', 'values']",
"12345",
"",
]
for invalid_input in invalid_inputs:
with pytest.raises(Exception) as e_info:
tool_usage._validate_tool_input(invalid_input)
assert (
"Tool input must be a valid dictionary in JSON or Python literal format"
in str(e_info.value)
)
# Test for None input separately
arguments = tool_usage._validate_tool_input(None)
assert arguments == {} # Expecting an empty dictionary
def test_validate_tool_input_complex_structure():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = """
{
"user": {
"name": "Alice",
"age": 30
},
"items": [
{"id": 1, "value": "Item1"},
{"id": 2, "value": "Item2",}
],
"active": true,
}
"""
expected_arguments = {
"user": {"name": "Alice", "age": 30},
"items": [
{"id": 1, "value": "Item1"},
{"id": 2, "value": "Item2"},
],
"active": True,
}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
def test_validate_tool_input_code_content():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
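    # Escaped newlines and quotes inside a JSON string must round-trip into
    # real characters in the parsed value.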
tool_input = '{"filename": "script.py", "content": "def hello():\\n print(\'Hello, world!\')"}'
expected_arguments = {
"filename": "script.py",
"content": "def hello():\n print('Hello, world!')",
}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments


def test_validate_tool_input_with_escaped_quotes():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"text": "He said, \\"Hello, world!\\""}'
expected_arguments = {"text": 'He said, "Hello, world!"'}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments


def test_validate_tool_input_large_json_content():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
# Simulate a large JSON content
tool_input = (
'{"data": ' + json.dumps([{"id": i, "value": i * 2} for i in range(1000)]) + "}"
)
expected_arguments = {"data": [{"id": i, "value": i * 2} for i in range(1000)]}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments


def test_validate_tool_input_none_input():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
arguments = tool_usage._validate_tool_input(None)
assert arguments == {} # Expecting an empty dictionary
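
Taken together, these tests pin down a tolerant parsing contract: accept strict JSON, Python dict literals, and JSON5-style input (unquoted keys, trailing commas), return an empty dict for None, and reject anything that does not parse to a mapping. Below is a minimal sketch of one way to satisfy that contract, assuming the third-party json5 package is available; the name parse_tool_input is hypothetical and this is not necessarily how ToolUsage._validate_tool_input is implemented in the repository.

import ast
import json

import json5  # third-party: pip install json5


def parse_tool_input(tool_input):
    """Coerce an LLM-produced tool-input string into a dict.

    Tries strict JSON first, then a Python literal (single quotes,
    True/False), then JSON5 (unquoted keys, trailing commas).
    """
    if tool_input is None:
        return {}
    for parse in (json.loads, ast.literal_eval, json5.loads):
        try:
            result = parse(tool_input)
        except Exception:
            continue
        if isinstance(result, dict):
            return result
        break  # parsed, but not a mapping: fall through to the error
    raise ValueError(
        "Tool input must be a valid dictionary in JSON or Python literal format"
    )

Ordering matters in the sketch: json.loads is the strictest and cheapest attempt, ast.literal_eval safely covers Python-style quoting without executing code, and json5.loads mops up unquoted keys and trailing commas.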


@@ -0,0 +1,114 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Name: Alice, Age: 30"}], "model":
"gpt-4o-mini", "tool_choice": {"type": "function", "function": {"name": "SimpleModel"}},
"tools": [{"type": "function", "function": {"name": "SimpleModel", "description":
"Correctly extracted `SimpleModel` with all the required parameters with correct
types", "parameters": {"properties": {"name": {"title": "Name", "type": "string"},
"age": {"title": "Age", "type": "integer"}}, "required": ["age", "name"], "type":
"object"}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '507'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-Aq4a4xDv8G0i4fbTtPJEI2B8UNBup\",\n \"object\":
\"chat.completion\",\n \"created\": 1736974028,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_uO5nec8hTk1fpYINM8TUafhe\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"SimpleModel\",\n
\ \"arguments\": \"{\\\"name\\\":\\\"Alice\\\",\\\"age\\\":30}\"\n
\ }\n }\n ],\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 79,\n \"completion_tokens\": 10,\n
\ \"total_tokens\": 89,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 9028b81aeb1cb05f-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 15 Jan 2025 20:47:08 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=PzayZLF04c14veGc.0ocVg3VHBbpzKRW8Hqox8L9U7c-1736974028-1.0.1.1-mZpK8.SH9l7K2z8Tvt6z.dURiVPjFqEz7zYEITfRwdr5z0razsSebZGN9IRPmI5XC_w5rbZW2Kg6hh5cenXinQ;
path=/; expires=Wed, 15-Jan-25 21:17:08 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=ciwC3n2Srn20xx4JhEUeN6Ap0tNBaE44S95nIilboQ0-1736974028496-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '439'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999978'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a468000458b9d2848b7497b2e3d485a3
http_version: HTTP/1.1
status_code: 200
version: 1
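
The YAML above is a vcrpy-style cassette: each entry under interactions pairs a recorded HTTP request with the response to replay, so the suite can exercise LLM-backed code paths offline and deterministically. A minimal replay sketch follows, assuming a hypothetical cassette path and a vcrpy version that can intercept the httpx transport used by the OpenAI 1.x client; the repository's tests may instead load cassettes through pytest fixtures.

import openai
import vcr


@vcr.use_cassette("tests/cassettes/simple_model_extraction.yaml")  # hypothetical path
def test_replays_recorded_tool_call():
    # No traffic reaches api.openai.com during replay: vcrpy matches the
    # outgoing request against the recorded one (by method and URI by
    # default) and returns the stored chat.completion body.
    client = openai.OpenAI(api_key="test-key")  # unused during replay
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Name: Alice, Age: 30"}],
    )
    tool_call = response.choices[0].message.tool_calls[0]
    assert tool_call.function.name == "SimpleModel"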

File diff suppressed because it is too large.


@@ -0,0 +1,869 @@
interactions:
- request:
body: '{"model": "llama3.2:3b", "prompt": "### User:\nName: Alice Llama, Age:
30\n\n### System:\nProduce JSON OUTPUT ONLY! Adhere to this format {\"name\":
\"function_name\", \"arguments\":{\"argument_name\": \"argument_value\"}} The
following functions are available to you:\n{''type'': ''function'', ''function'':
{''name'': ''SimpleModel'', ''description'': ''Correctly extracted `SimpleModel`
with all the required parameters with correct types'', ''parameters'': {''properties'':
{''name'': {''title'': ''Name'', ''type'': ''string''}, ''age'': {''title'':
''Age'', ''type'': ''integer''}}, ''required'': [''age'', ''name''], ''type'':
''object''}}}\n\n\n", "options": {}, "stream": false, "format": "json"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '657'
host:
- localhost:11434
user-agent:
- litellm/1.57.4
method: POST
uri: http://localhost:11434/api/generate
response:
content: '{"model":"llama3.2:3b","created_at":"2025-01-15T20:47:11.926411Z","response":"{\"name\":
\"SimpleModel\", \"arguments\":{\"name\": \"Alice Llama\", \"age\": 30}}","done":true,"done_reason":"stop","context":[128006,9125,128007,271,38766,1303,33025,2696,25,6790,220,2366,18,271,128009,128006,882,128007,271,14711,2724,512,678,25,30505,445,81101,11,13381,25,220,966,271,14711,744,512,1360,13677,4823,32090,27785,0,2467,6881,311,420,3645,5324,609,794,330,1723,1292,498,330,16774,23118,14819,1292,794,330,14819,3220,32075,578,2768,5865,527,2561,311,499,512,13922,1337,1232,364,1723,518,364,1723,1232,5473,609,1232,364,16778,1747,518,364,4789,1232,364,34192,398,28532,1595,16778,1747,63,449,682,279,2631,5137,449,4495,4595,518,364,14105,1232,5473,13495,1232,5473,609,1232,5473,2150,1232,364,678,518,364,1337,1232,364,928,25762,364,425,1232,5473,2150,1232,364,17166,518,364,1337,1232,364,11924,8439,2186,364,6413,1232,2570,425,518,364,609,4181,364,1337,1232,364,1735,23742,3818,128009,128006,78191,128007,271,5018,609,794,330,16778,1747,498,330,16774,23118,609,794,330,62786,445,81101,498,330,425,794,220,966,3500],"total_duration":3374470708,"load_duration":1075750500,"prompt_eval_count":167,"prompt_eval_duration":1871000000,"eval_count":24,"eval_duration":426000000}'
headers:
Content-Length:
- '1263'
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 15 Jan 2025 20:47:12 GMT
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"name": "llama3.2:3b"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '23'
content-type:
- application/json
host:
- localhost:11434
user-agent:
- litellm/1.57.4
method: POST
uri: http://localhost:11434/api/show
response:
content: "{\"license\":\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version
Release Date: September 25, 2024\\n\\n\u201CAgreement\u201D means the terms
and conditions for use, reproduction, distribution \\nand modification of the
Llama Materials set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications,
manuals and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\n**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta is committed
to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (\u201C**Policy**\u201D).
The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\",\"modelfile\":\"# Modelfile generated by \\\"ollama
show\\\"\\n# To build a new Modelfile based on this, replace FROM with:\\n#
FROM llama3.2:3b\\n\\nFROM /Users/brandonhancock/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff\\nTEMPLATE
\\\"\\\"\\\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\\\"\\\"\\\"\\nPARAMETER stop \\u003c|start_header_id|\\u003e\\nPARAMETER
stop \\u003c|end_header_id|\\u003e\\nPARAMETER stop \\u003c|eot_id|\\u003e\\nLICENSE
\\\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version Release Date:
September 25, 2024\\n\\n\u201CAgreement\u201D means the terms and conditions
for use, reproduction, distribution \\nand modification of the Llama Materials
set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications, manuals
and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\\"\\nLICENSE \\\"**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta
is committed to promoting safe and fair use of its tools and features, including
Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use
Policy (\u201C**Policy**\u201D). The most recent copy of this policy can be
found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\\\"\\n\",\"parameters\":\"stop \\\"\\u003c|start_header_id|\\u003e\\\"\\nstop
\ \\\"\\u003c|end_header_id|\\u003e\\\"\\nstop \\\"\\u003c|eot_id|\\u003e\\\"\",\"template\":\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\",\"details\":{\"parent_model\":\"\",\"format\":\"gguf\",\"family\":\"llama\",\"families\":[\"llama\"],\"parameter_size\":\"3.2B\",\"quantization_level\":\"Q4_K_M\"},\"model_info\":{\"general.architecture\":\"llama\",\"general.basename\":\"Llama-3.2\",\"general.file_type\":15,\"general.finetune\":\"Instruct\",\"general.languages\":[\"en\",\"de\",\"fr\",\"it\",\"pt\",\"hi\",\"es\",\"th\"],\"general.parameter_count\":3212749888,\"general.quantization_version\":2,\"general.size_label\":\"3B\",\"general.tags\":[\"facebook\",\"meta\",\"pytorch\",\"llama\",\"llama-3\",\"text-generation\"],\"general.type\":\"model\",\"llama.attention.head_count\":24,\"llama.attention.head_count_kv\":8,\"llama.attention.key_length\":128,\"llama.attention.layer_norm_rms_epsilon\":0.00001,\"llama.attention.value_length\":128,\"llama.block_count\":28,\"llama.context_length\":131072,\"llama.embedding_length\":3072,\"llama.feed_forward_length\":8192,\"llama.rope.dimension_count\":128,\"llama.rope.freq_base\":500000,\"llama.vocab_size\":128256,\"tokenizer.ggml.bos_token_id\":128000,\"tokenizer.ggml.eos_token_id\":128009,\"tokenizer.ggml.merges\":null,\"tokenizer.ggml.model\":\"gpt2\",\"tokenizer.ggml.pre\":\"llama-bpe\",\"tokenizer.ggml.token_type\":null,\"tokenizer.ggml.tokens\":null},\"modified_at\":\"2024-12-31T11:53:14.529771974-05:00\"}"
headers:
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 15 Jan 2025 20:47:12 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"name": "llama3.2:3b"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '23'
content-type:
- application/json
host:
- localhost:11434
user-agent:
- litellm/1.57.4
method: POST
uri: http://localhost:11434/api/show
response:
content: "{\"license\":\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version
Release Date: September 25, 2024\\n\\n\u201CAgreement\u201D means the terms
and conditions for use, reproduction, distribution \\nand modification of the
Llama Materials set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications,
manuals and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\n**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta is committed
to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (\u201C**Policy**\u201D).
The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\",\"modelfile\":\"# Modelfile generated by \\\"ollama
show\\\"\\n# To build a new Modelfile based on this, replace FROM with:\\n#
FROM llama3.2:3b\\n\\nFROM /Users/brandonhancock/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff\\nTEMPLATE
\\\"\\\"\\\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\\\"\\\"\\\"\\nPARAMETER stop \\u003c|start_header_id|\\u003e\\nPARAMETER
stop \\u003c|end_header_id|\\u003e\\nPARAMETER stop \\u003c|eot_id|\\u003e\\nLICENSE
\\\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version Release Date:
September 25, 2024\\n\\n\u201CAgreement\u201D means the terms and conditions
for use, reproduction, distribution \\nand modification of the Llama Materials
set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications, manuals
and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\\"\\nLICENSE \\\"**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta
is committed to promoting safe and fair use of its tools and features, including
Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use
Policy (\u201C**Policy**\u201D). The most recent copy of this policy can be
found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\\\"\\n\",\"parameters\":\"stop \\\"\\u003c|start_header_id|\\u003e\\\"\\nstop
\ \\\"\\u003c|end_header_id|\\u003e\\\"\\nstop \\\"\\u003c|eot_id|\\u003e\\\"\",\"template\":\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\",\"details\":{\"parent_model\":\"\",\"format\":\"gguf\",\"family\":\"llama\",\"families\":[\"llama\"],\"parameter_size\":\"3.2B\",\"quantization_level\":\"Q4_K_M\"},\"model_info\":{\"general.architecture\":\"llama\",\"general.basename\":\"Llama-3.2\",\"general.file_type\":15,\"general.finetune\":\"Instruct\",\"general.languages\":[\"en\",\"de\",\"fr\",\"it\",\"pt\",\"hi\",\"es\",\"th\"],\"general.parameter_count\":3212749888,\"general.quantization_version\":2,\"general.size_label\":\"3B\",\"general.tags\":[\"facebook\",\"meta\",\"pytorch\",\"llama\",\"llama-3\",\"text-generation\"],\"general.type\":\"model\",\"llama.attention.head_count\":24,\"llama.attention.head_count_kv\":8,\"llama.attention.key_length\":128,\"llama.attention.layer_norm_rms_epsilon\":0.00001,\"llama.attention.value_length\":128,\"llama.block_count\":28,\"llama.context_length\":131072,\"llama.embedding_length\":3072,\"llama.feed_forward_length\":8192,\"llama.rope.dimension_count\":128,\"llama.rope.freq_base\":500000,\"llama.vocab_size\":128256,\"tokenizer.ggml.bos_token_id\":128000,\"tokenizer.ggml.eos_token_id\":128009,\"tokenizer.ggml.merges\":null,\"tokenizer.ggml.model\":\"gpt2\",\"tokenizer.ggml.pre\":\"llama-bpe\",\"tokenizer.ggml.token_type\":null,\"tokenizer.ggml.tokens\":null},\"modified_at\":\"2024-12-31T11:53:14.529771974-05:00\"}"
headers:
Content-Type:
- application/json; charset=utf-8
Date:
- Wed, 15 Jan 2025 20:47:12 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1
status_code: 200
version: 1
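The cassette above records a local Ollama exchange for llama3.2:3b, including the model's license and template metadata. A minimal sketch of how such a recording is replayed, assuming pytest-recording (or a compatible pytest-vcr plugin) is installed and the cassette sits in the suite's cassettes directory; it mirrors test_converter_with_llama3_2_model further down in this diff:

import pytest
from pydantic import BaseModel

from crewai import LLM
from crewai.utilities.converter import Converter, get_conversion_instructions


class SimpleModel(BaseModel):
    name: str
    age: int


# A hedged sketch, not the repo's exact test: the vcr marker intercepts
# HTTP traffic and serves the recorded responses, so no live Ollama
# server is needed; filter_headers keeps credentials out of cassettes.
@pytest.mark.vcr(filter_headers=["authorization"])
def test_replays_recorded_ollama_call():
    llm = LLM(model="ollama/llama3.2:3b", base_url="http://localhost:11434")
    converter = Converter(
        llm=llm,
        text="Name: Alice Llama, Age: 30",
        model=SimpleModel,
        instructions=get_conversion_instructions(SimpleModel, llm),
    )
    output = converter.to_pydantic()
    assert output.name == "Alice Llama"
    assert output.age == 30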


@@ -0,0 +1,116 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Name: John Doe\nAge: 30\nAddress:
123 Main St, Anytown, 12345"}], "model": "gpt-4o-mini", "tool_choice": {"type":
"function", "function": {"name": "Person"}}, "tools": [{"type": "function",
"function": {"name": "Person", "description": "Correctly extracted `Person`
with all the required parameters with correct types", "parameters": {"$defs":
{"Address": {"properties": {"street": {"title": "Street", "type": "string"},
"city": {"title": "City", "type": "string"}, "zip_code": {"title": "Zip Code",
"type": "string"}}, "required": ["street", "city", "zip_code"], "title": "Address",
"type": "object"}}, "properties": {"name": {"title": "Name", "type": "string"},
"age": {"title": "Age", "type": "integer"}, "address": {"$ref": "#/$defs/Address"}},
"required": ["address", "age", "name"], "type": "object"}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '853'
content-type:
- application/json
cookie:
- __cf_bm=PzayZLF04c14veGc.0ocVg3VHBbpzKRW8Hqox8L9U7c-1736974028-1.0.1.1-mZpK8.SH9l7K2z8Tvt6z.dURiVPjFqEz7zYEITfRwdr5z0razsSebZGN9IRPmI5XC_w5rbZW2Kg6hh5cenXinQ;
_cfuvid=ciwC3n2Srn20xx4JhEUeN6Ap0tNBaE44S95nIilboQ0-1736974028496-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-Aq4aFpbhU10QK0e6Jlkxy8AUxCZCf\",\n \"object\":
\"chat.completion\",\n \"created\": 1736974039,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_N29aoGL9tN0qL2O7HI8Op2so\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"Person\",\n
\ \"arguments\": \"{\\\"name\\\":\\\"John Doe\\\",\\\"age\\\":30,\\\"address\\\":{\\\"street\\\":\\\"123
Main St\\\",\\\"city\\\":\\\"Anytown\\\",\\\"zip_code\\\":\\\"12345\\\"}}\"\n
\ }\n }\n ],\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 118,\n \"completion_tokens\": 30,\n
\ \"total_tokens\": 148,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_bd83329f63\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 9028b863dbaa672f-ATL
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 15 Jan 2025 20:47:20 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '840'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999968'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_2f9d1e3f0ace4944891dde05093486aa
http_version: HTTP/1.1
status_code: 200
version: 1
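This second cassette records the function-calling path: the request pins tool_choice to a Person function built from a nested Pydantic schema, and the recorded response returns the extracted fields as a JSON string in the tool call's arguments. A small sketch of the tail end of that round trip, using the same nested models defined in the test diff below; the argument payload is copied from the recorded response:

from pydantic import BaseModel


class Address(BaseModel):
    street: str
    city: str
    zip_code: str


class Person(BaseModel):
    name: str
    age: int
    address: Address


# The recorded tool call carries its arguments as a JSON string; Pydantic
# validates it straight back into the nested model the schema came from.
arguments = (
    '{"name":"John Doe","age":30,'
    '"address":{"street":"123 Main St","city":"Anytown","zip_code":"12345"}}'
)
person = Person.model_validate_json(arguments)
assert person.address.zip_code == "12345"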


@@ -39,6 +39,22 @@ class NestedModel(BaseModel):
    data: SimpleModel


class Address(BaseModel):
    street: str
    city: str
    zip_code: str


class Person(BaseModel):
    name: str
    age: int
    address: Address


class CustomConverter(Converter):
    pass


# Fixtures
@pytest.fixture
def mock_agent():
@@ -199,26 +215,23 @@ def test_convert_with_instructions_failure(
# Tests for get_conversion_instructions
def test_get_conversion_instructions_gpt():
-    mock_llm = Mock()
-    mock_llm.openai_api_base = None
+    llm = LLM(model="gpt-4o-mini")
     with patch.object(LLM, "supports_function_calling") as supports_function_calling:
         supports_function_calling.return_value = True
-        instructions = get_conversion_instructions(SimpleModel, mock_llm)
+        instructions = get_conversion_instructions(SimpleModel, llm)
         model_schema = PydanticSchemaParser(model=SimpleModel).get_schema()
         assert (
             instructions
-            == f"I'm gonna convert this raw text into valid JSON.\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
+            == f"Please convert the following text into valid JSON.\n\nThe JSON should follow this schema:\n```json\n{model_schema}\n```"
         )

def test_get_conversion_instructions_non_gpt():
-    mock_llm = Mock()
-    with patch.object(LLM, "supports_function_calling") as supports_function_calling:
-        supports_function_calling.return_value = False
-        with patch("crewai.utilities.converter.PydanticSchemaParser") as mock_parser:
-            mock_parser.return_value.get_schema.return_value = "Sample schema"
-            instructions = get_conversion_instructions(SimpleModel, mock_llm)
-    assert "Sample schema" in instructions
+    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
+    with patch.object(LLM, "supports_function_calling", return_value=False):
+        instructions = get_conversion_instructions(SimpleModel, llm)
+    assert '"name": str' in instructions
+    assert '"age": int' in instructions

# Tests for is_gpt
@@ -232,10 +245,6 @@ def test_supports_function_calling_false():
    assert llm.supports_function_calling() is False

-class CustomConverter(Converter):
-    pass

def test_create_converter_with_mock_agent():
    mock_agent = MagicMock()
    mock_agent.get_output_converter.return_value = MagicMock(spec=Converter)
@@ -255,7 +264,7 @@ def test_create_converter_with_mock_agent():
def test_create_converter_with_custom_converter():
    converter = create_converter(
        converter_cls=CustomConverter,
-        llm=Mock(),
+        llm=LLM(model="gpt-4o-mini"),
        text="Sample",
        model=SimpleModel,
        instructions="Convert",
@@ -313,3 +322,278 @@ def test_generate_model_description_dict_field():
    description = generate_model_description(ModelWithDictField)
    expected_description = '{\n "attributes": Dict[str, int]\n}'
    assert description == expected_description


@pytest.mark.vcr(filter_headers=["authorization"])
def test_convert_with_instructions():
    llm = LLM(model="gpt-4o-mini")
    sample_text = "Name: Alice, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=SimpleModel,
        instructions=instructions,
    )
    # Act
    output = converter.to_pydantic()
    # Assert
    assert isinstance(output, SimpleModel)
    assert output.name == "Alice"
    assert output.age == 30


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_2_model():
    llm = LLM(model="ollama/llama3.2:3b", base_url="http://localhost:11434")
    sample_text = "Name: Alice Llama, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=SimpleModel,
        instructions=instructions,
    )
    output = converter.to_pydantic()
    assert isinstance(output, SimpleModel)
    assert output.name == "Alice Llama"
    assert output.age == 30


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_1_model():
    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
    sample_text = "Name: Alice Llama, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=SimpleModel,
        instructions=instructions,
    )
    output = converter.to_pydantic()
    assert isinstance(output, SimpleModel)
    assert output.name == "Alice Llama"
    assert output.age == 30


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_nested_model():
    llm = LLM(model="gpt-4o-mini")
    sample_text = "Name: John Doe\nAge: 30\nAddress: 123 Main St, Anytown, 12345"
    instructions = get_conversion_instructions(Person, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=Person,
        instructions=instructions,
    )
    output = converter.to_pydantic()
    assert isinstance(output, Person)
    assert output.name == "John Doe"
    assert output.age == 30
    assert isinstance(output.address, Address)
    assert output.address.street == "123 Main St"
    assert output.address.city == "Anytown"
    assert output.address.zip_code == "12345"


# Tests for error handling
def test_converter_error_handling():
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.return_value = "Invalid JSON"
    sample_text = "Name: Alice, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=SimpleModel,
        instructions=instructions,
    )
    with pytest.raises(ConverterError) as exc_info:
        output = converter.to_pydantic()
    assert "Failed to convert text into a Pydantic model" in str(exc_info.value)


# Tests for retry logic
def test_converter_retry_logic():
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.side_effect = [
        "Invalid JSON",
        "Still invalid",
        '{"name": "Retry Alice", "age": 30}',
    ]
    sample_text = "Name: Retry Alice, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=SimpleModel,
        instructions=instructions,
        max_attempts=3,
    )
    output = converter.to_pydantic()
    assert isinstance(output, SimpleModel)
    assert output.name == "Retry Alice"
    assert output.age == 30
    assert llm.call.call_count == 3


# Tests for optional fields
def test_converter_with_optional_fields():
    class OptionalModel(BaseModel):
        name: str
        age: Optional[int]

    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    # Simulate the LLM's response with 'age' explicitly set to null
    llm.call.return_value = '{"name": "Bob", "age": null}'
    sample_text = "Name: Bob, age: None"
    instructions = get_conversion_instructions(OptionalModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=OptionalModel,
        instructions=instructions,
    )
    output = converter.to_pydantic()
    assert isinstance(output, OptionalModel)
    assert output.name == "Bob"
    assert output.age is None


# Tests for list fields
def test_converter_with_list_field():
    class ListModel(BaseModel):
        items: List[int]

    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.return_value = '{"items": [1, 2, 3]}'
    sample_text = "Items: 1, 2, 3"
    instructions = get_conversion_instructions(ListModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=ListModel,
        instructions=instructions,
    )
    output = converter.to_pydantic()
    assert isinstance(output, ListModel)
    assert output.items == [1, 2, 3]


# Tests for enums
from enum import Enum


def test_converter_with_enum():
    class Color(Enum):
        RED = "red"
        GREEN = "green"
        BLUE = "blue"

    class EnumModel(BaseModel):
        name: str
        color: Color

    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.return_value = '{"name": "Alice", "color": "red"}'
    sample_text = "Name: Alice, Color: Red"
    instructions = get_conversion_instructions(EnumModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=EnumModel,
        instructions=instructions,
    )
    output = converter.to_pydantic()
    assert isinstance(output, EnumModel)
    assert output.name == "Alice"
    assert output.color == Color.RED


# Tests for ambiguous input
def test_converter_with_ambiguous_input():
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.return_value = '{"name": "Charlie", "age": "Not an age"}'
    sample_text = "Charlie is thirty years old"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
        llm=llm,
        text=sample_text,
        model=SimpleModel,
        instructions=instructions,
    )
    with pytest.raises(ConverterError) as exc_info:
        output = converter.to_pydantic()
    assert "validation error" in str(exc_info.value).lower()


# Tests for function calling support
def test_converter_with_function_calling():
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = True
    instructor = Mock()
    instructor.to_pydantic.return_value = SimpleModel(name="Eve", age=35)
    converter = Converter(
        llm=llm,
        text="Name: Eve, Age: 35",
        model=SimpleModel,
        instructions="Convert this text.",
    )
    converter._create_instructor = Mock(return_value=instructor)
    output = converter.to_pydantic()
    assert isinstance(output, SimpleModel)
    assert output.name == "Eve"
    assert output.age == 35
    instructor.to_pydantic.assert_called_once()


def test_generate_model_description_union_field():
    class UnionModel(BaseModel):
        field: int | str | None

    description = generate_model_description(UnionModel)
    expected_description = '{\n "field": int | str | None\n}'
    assert description == expected_description
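Together these tests pin down both conversion paths: function-calling models get the schema-embedding instructions, and everything else falls back to a generated model description. A quick way to inspect what each kind of model receives, reusing the helpers exercised above (assuming supports_function_calling resolves locally from litellm's capability data, so nothing here should hit the network):

from pydantic import BaseModel

from crewai import LLM
from crewai.utilities.converter import get_conversion_instructions


class SimpleModel(BaseModel):
    name: str
    age: int


# Function-calling model: instructions embed the parsed schema behind the
# "Please convert the following text into valid JSON..." preamble.
print(get_conversion_instructions(SimpleModel, LLM(model="gpt-4o-mini")))

# Non-function-calling model: instructions fall back to a generated
# description in which '"name": str' and '"age": int' appear verbatim.
ollama = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
print(get_conversion_instructions(SimpleModel, ollama))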


@@ -0,0 +1,94 @@
from typing import Any, Dict, List, Optional, Set, Tuple, Union

import pytest
from pydantic import BaseModel, Field

from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser


def test_simple_model():
    class SimpleModel(BaseModel):
        field1: int
        field2: str

    parser = PydanticSchemaParser(model=SimpleModel)
    schema = parser.get_schema()

    expected_schema = """{
    field1: int,
    field2: str
}"""
    assert schema.strip() == expected_schema.strip()


def test_nested_model():
    class NestedModel(BaseModel):
        nested_field: int

    class ParentModel(BaseModel):
        parent_field: str
        nested: NestedModel

    parser = PydanticSchemaParser(model=ParentModel)
    schema = parser.get_schema()

    expected_schema = """{
    parent_field: str,
    nested: NestedModel
    {
        nested_field: int
    }
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_list():
    class ListModel(BaseModel):
        list_field: List[int]

    parser = PydanticSchemaParser(model=ListModel)
    schema = parser.get_schema()

    expected_schema = """{
    list_field: List[int]
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_optional_field():
    class OptionalModel(BaseModel):
        optional_field: Optional[str]

    parser = PydanticSchemaParser(model=OptionalModel)
    schema = parser.get_schema()

    expected_schema = """{
    optional_field: Optional[str]
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_union():
    class UnionModel(BaseModel):
        union_field: Union[int, str]

    parser = PydanticSchemaParser(model=UnionModel)
    schema = parser.get_schema()

    expected_schema = """{
    union_field: Union[int, str]
}"""
    assert schema.strip() == expected_schema.strip()


def test_model_with_dict():
    class DictModel(BaseModel):
        dict_field: Dict[str, int]

    parser = PydanticSchemaParser(model=DictModel)
    schema = parser.get_schema()

    expected_schema = """{
    dict_field: Dict[str, int]
}"""
    assert schema.strip() == expected_schema.strip()
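The parser under test is also usable directly; a minimal sketch that renders a schema combining several of the shapes asserted above (the Inventory model here is purely illustrative):

from typing import Dict, List, Optional

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser


class Inventory(BaseModel):
    names: List[str]
    counts: Dict[str, int]
    note: Optional[str]


# get_schema() walks the model's fields and returns the brace-delimited,
# recursively indented structure the tests above assert on.
print(PydanticSchemaParser(model=Inventory).get_schema())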

uv.lock generated

@@ -198,6 +198,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/39/e3/893e8757be2612e6c266d9bb58ad2e3651524b5b40cf56761e985a28b13e/asgiref-3.8.1-py3-none-any.whl", hash = "sha256:3e1e3ecc849832fe52ccf2cb6686b7a55f82bb1d6aee72a58826471390335e47", size = 23828 },
]
[[package]]
name = "asn1crypto"
version = "1.5.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/de/cf/d547feed25b5244fcb9392e288ff9fdc3280b10260362fc45d37a798a6ee/asn1crypto-1.5.1.tar.gz", hash = "sha256:13ae38502be632115abf8a24cbe5f4da52e3b5231990aff31123c805306ccb9c", size = 121080 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c9/7f/09065fd9e27da0eda08b4d6897f1c13535066174cc023af248fc2a8d5e5a/asn1crypto-1.5.1-py2.py3-none-any.whl", hash = "sha256:db4e40728b728508912cbb3d44f19ce188f218e9eba635821bb4b68564f8fd67", size = 105045 },
]
[[package]]
name = "asttokens"
version = "2.4.1"
@@ -219,6 +228,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/fa/e01228c2938de91d47b307831c62ab9e4001e747789d0b05baf779a6488c/async_timeout-4.0.3-py3-none-any.whl", hash = "sha256:7405140ff1230c310e51dc27b3145b9092d659ce68ff733fb0cefe3ee42be028", size = 5721 },
]
[[package]]
name = "atpublic"
version = "5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/5d/18/b1d247792440378abeeb0853f9daa2a127284b68776af6815990be7fcdb0/atpublic-5.0.tar.gz", hash = "sha256:d5cb6cbabf00ec1d34e282e8ce7cbc9b74ba4cb732e766c24e2d78d1ad7f723f", size = 14646 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6b/03/2cb0e5326e19b7d877bc9c3a7ef436a30a06835b638580d1f5e21a0409ed/atpublic-5.0-py3-none-any.whl", hash = "sha256:b651dcd886666b1042d1e38158a22a4f2c267748f4e97fde94bc492a4a28a3f3", size = 5207 },
]
[[package]]
name = "attrs"
version = "24.2.0"
@@ -631,7 +649,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.95.0"
version = "0.100.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
@@ -641,6 +659,7 @@ dependencies = [
{ name = "click" },
{ name = "instructor" },
{ name = "json-repair" },
{ name = "json5" },
{ name = "jsonref" },
{ name = "litellm" },
{ name = "openai" },
@@ -714,13 +733,14 @@ requires-dist = [
{ name = "blinker", specifier = ">=1.9.0" },
{ name = "chromadb", specifier = ">=0.5.23" },
{ name = "click", specifier = ">=8.1.7" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.25.5" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.32.1" },
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" },
{ name = "json-repair", specifier = ">=0.25.2" },
{ name = "json5", specifier = ">=0.10.0" },
{ name = "jsonref", specifier = ">=1.1.0" },
{ name = "litellm", specifier = "==1.57.4" },
{ name = "litellm", specifier = "==1.59.8" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.29" },
{ name = "openai", specifier = ">=1.13.3" },
{ name = "openpyxl", specifier = ">=3.1.5" },
@@ -762,7 +782,7 @@ dev = [
[[package]]
name = "crewai-tools"
version = "0.25.6"
version = "0.32.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "beautifulsoup4" },
@@ -774,20 +794,21 @@ dependencies = [
{ name = "lancedb" },
{ name = "linkup-sdk" },
{ name = "openai" },
{ name = "patronus" },
{ name = "pydantic" },
{ name = "pyright" },
{ name = "pytest" },
{ name = "pytube" },
{ name = "requests" },
{ name = "scrapegraph-py" },
{ name = "selenium" },
{ name = "serpapi" },
{ name = "snowflake" },
{ name = "spider-client" },
{ name = "weaviate-client" },
]
sdist = { url = "https://files.pythonhosted.org/packages/23/2f/fbfd0dc8912d375a2d1272c503f79c83c25f3d2b4b72c230b0672278a1bd/crewai_tools-0.25.6.tar.gz", hash = "sha256:442a7e7e579cb3c671a53c5b7afce645cd31d2db913ecc6d1e22a4c5e1baa840", size = 883175 }
sdist = { url = "https://files.pythonhosted.org/packages/e9/e7/fb07f0089028f7c9003770641d21f5844d4fa22bf5cc4c4b3676bfa0e1fe/crewai_tools-0.32.1.tar.gz", hash = "sha256:41acea9243b17a463f355d48dfe7d73bd59738c8862a8da780eae008e0136414", size = 887378 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ce/21/561a81b4f8cfcc2ac6a0c3db3ec86b70a7db6dabb0dd7d13c96be981b2fc/crewai_tools-0.25.6-py3-none-any.whl", hash = "sha256:463e0ee8d780ab7a801992e3960471fb8e64d038866429f70995ddd0a83e0679", size = 514758 },
{ url = "https://files.pythonhosted.org/packages/36/f0/8f98f1a2b90b9b989bd01cf48b5e3bb2d842be2062bfd3177a77561e7b61/crewai_tools-0.32.1-py3-none-any.whl", hash = "sha256:6cb436dc66e19e35285a4fce501158a13bce99b244370574f568ec33c5513351", size = 537264 },
]
[[package]]
@@ -2058,6 +2079,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/23/38/34cb843cee4c5c27aa5c822e90e99bf96feb3dfa705713b5b6e601d17f5c/json_repair-0.30.0-py3-none-any.whl", hash = "sha256:bda4a5552dc12085c6363ff5acfcdb0c9cafc629989a2112081b7e205828228d", size = 17641 },
]
[[package]]
name = "json5"
version = "0.10.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/85/3d/bbe62f3d0c05a689c711cff57b2e3ac3d3e526380adb7c781989f075115c/json5-0.10.0.tar.gz", hash = "sha256:e66941c8f0a02026943c52c2eb34ebeb2a6f819a0be05920a6f5243cd30fd559", size = 48202 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/aa/42/797895b952b682c3dafe23b1834507ee7f02f4d6299b65aaa61425763278/json5-0.10.0-py3-none-any.whl", hash = "sha256:19b23410220a7271e8377f81ba8aacba2fdd56947fbb137ee5977cbe1f5e8dfa", size = 34049 },
]
[[package]]
name = "jsonlines"
version = "3.1.0"
@@ -2344,7 +2374,7 @@ wheels = [
[[package]]
name = "litellm"
version = "1.57.4"
version = "1.59.8"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -2359,9 +2389,9 @@ dependencies = [
{ name = "tiktoken" },
{ name = "tokenizers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/1a/9a/115bde058901b087e7fec1bed4be47baf8d5c78aff7dd2ffebcb922003ff/litellm-1.57.4.tar.gz", hash = "sha256:747a870ddee9c71f9560fc68ad02485bc1008fcad7d7a43e87867a59b8ed0669", size = 6304427 }
sdist = { url = "https://files.pythonhosted.org/packages/86/b0/c8ec06bd1c87a92d6d824008982b3c82b450d7bd3be850a53913f1ac4907/litellm-1.59.8.tar.gz", hash = "sha256:9d645cc4460f6a9813061f07086648c4c3d22febc8e1f21c663f2b7750d90512", size = 6428607 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/72/35c8509cb2a37343c213b794420405cbef2e1fdf8626ee981fcbba3d7c5c/litellm-1.57.4-py3-none-any.whl", hash = "sha256:afe48924d8a36db801018970a101622fce33d117fe9c54441c0095c491511abb", size = 6592126 },
{ url = "https://files.pythonhosted.org/packages/b9/38/889da058f566ef9ea321aafa25e423249492cf2a508dfdc0e5acfcf04526/litellm-1.59.8-py3-none-any.whl", hash = "sha256:2473914bd2343485a185dfe7eedb12ee5fda32da3c9d9a8b73f6966b9b20cf39", size = 6716233 },
]
[[package]]
@@ -3495,6 +3525,24 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/cc/20/ff623b09d963f88bfde16306a54e12ee5ea43e9b597108672ff3a408aad6/pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08", size = 31191 },
]
[[package]]
name = "patronus"
version = "0.0.17"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx" },
{ name = "pandas" },
{ name = "pydantic" },
{ name = "pydantic-settings" },
{ name = "pyyaml" },
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c5/a0/d5218ff6f2eab18c5a90266d21cdac673c85070e82e3f8aba538b3200f54/patronus-0.0.17.tar.gz", hash = "sha256:7298f770d4f6774b955806fb319c2c872fda3551bd7fa63d975bbeedc14b28de", size = 27377 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0e/9e/717c4508d675549ff081a7fecf25af7d70f9d7ad87ea0d4825e02de3b801/patronus-0.0.17-py3-none-any.whl", hash = "sha256:1f322eeee838974515fdb7cbf8530ad25c6c59686abbcb28c1fdbf23d34eb10d", size = 31516 },
]
[[package]]
name = "pdfminer-six"
version = "20231228"
@@ -4055,6 +4103,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/c2/35/c0edf199257ef0a7d407d29cd51c4e70d1dad4370a5f44deb65a7a5475e2/pymdown_extensions-10.11.2-py3-none-any.whl", hash = "sha256:41cdde0a77290e480cf53892f5c5e50921a7ee3e5cd60ba91bf19837b33badcf", size = 259044 },
]
[[package]]
name = "pyopenssl"
version = "24.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cryptography" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c1/d4/1067b82c4fc674d6f6e9e8d26b3dff978da46d351ca3bac171544693e085/pyopenssl-24.3.0.tar.gz", hash = "sha256:49f7a019577d834746bc55c5fce6ecbcec0f2b4ec5ce1cf43a9a173b8138bb36", size = 178944 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/42/22/40f9162e943f86f0fc927ebc648078be87def360d9d8db346619fb97df2b/pyOpenSSL-24.3.0-py3-none-any.whl", hash = "sha256:e474f5a473cd7f92221cc04976e48f4d11502804657a08a989fb3be5514c904a", size = 56111 },
]
[[package]]
name = "pypdf"
version = "5.0.1"
@@ -4923,6 +4983,87 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235 },
]
[[package]]
name = "snowflake"
version = "1.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "snowflake-core" },
{ name = "snowflake-legacy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/80/d1/830929fb7b54586f4ee601f409e80343e16f32b9b579246cd6fa9984bcff/snowflake-1.0.2.tar.gz", hash = "sha256:4009e59af24e444de4a9e9d340fff0979cca8a02a4feee4665da97eb9c76d958", size = 6033 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b6/25/4cbba4da3f9b333d132680a66221d1a101309cce330fa8be38b674ceafd0/snowflake-1.0.2-py3-none-any.whl", hash = "sha256:6bb0fc70aa10234769202861ccb4b091f5e9fb1bbc61a1e708db93baa3f221f4", size = 5623 },
]
[[package]]
name = "snowflake-connector-python"
version = "3.12.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "asn1crypto" },
{ name = "certifi" },
{ name = "cffi" },
{ name = "charset-normalizer" },
{ name = "cryptography" },
{ name = "filelock" },
{ name = "idna" },
{ name = "packaging" },
{ name = "platformdirs" },
{ name = "pyjwt" },
{ name = "pyopenssl" },
{ name = "pytz" },
{ name = "requests" },
{ name = "sortedcontainers" },
{ name = "tomlkit" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/6b/de/f43d9c827ccc1974696ffd3c0495e2d4e98b0414b2353b7de932621f23dd/snowflake_connector_python-3.12.4.tar.gz", hash = "sha256:289e0691dfbf8ec8b7a8f58bcbb95a819890fe5e5b278fdbfc885059a63a946f", size = 743445 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/53/6c/edc8909e424654a7a3c18cbf804d8a35c17a65a2131f866a87ed8e762bd0/snowflake_connector_python-3.12.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f141c159e3244bd660279f87f32e39351b2845fcb75f8138f31d2219f983b05", size = 958038 },
{ url = "https://files.pythonhosted.org/packages/93/a3/34c5082dfb9b555c914f4233224b8bc1f2c4d5668bc71bb587680b8dcd73/snowflake_connector_python-3.12.4-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:091458ba777c24adff659c5c28f0f5bb0bcca8a9b6ecc5641ae25b7c20a8f43d", size = 970665 },
{ url = "https://files.pythonhosted.org/packages/f8/87/9eceaaba58b2ec4f9094fc3a04d953bbabbfdcc05a6b14ef12610c1039f9/snowflake_connector_python-3.12.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23049d341da681ec7131cead71cdf7b1761ae5bcc08bcbdb931dcef6c25e8a5f", size = 2496731 },
{ url = "https://files.pythonhosted.org/packages/66/0a/e35e9e0a142f3779007b0246166a245305858b198ed0dd3a41a3d2405512/snowflake_connector_python-3.12.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc88a09d77a8ce7e445094b2409b606ddb208b5fc9f7c7a379d0255a8d566e9d", size = 2520041 },
{ url = "https://files.pythonhosted.org/packages/79/77/9a238c153600adff8fbd1136d9f4be1e42cb827cbe1865924bfe84653e85/snowflake_connector_python-3.12.4-cp310-cp310-win_amd64.whl", hash = "sha256:3c33fbba036805c1767ea48eb40ffc3fb79d61f2a4bb4e77b571ea6f6a998be8", size = 918272 },
{ url = "https://files.pythonhosted.org/packages/0d/95/e8aac28d6913e4b59f96e6d361f31b9576b5f0abe4d2c4f7decf9f075932/snowflake_connector_python-3.12.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2ec5cfaa1526084cf4d0e7849d5ace601245cb4ad9675ab3cd7d799b3abea481", size = 958125 },
{ url = "https://files.pythonhosted.org/packages/67/b6/a847a94e03bdf39010048feacd57f250a91a655eed333d7d32b165f65201/snowflake_connector_python-3.12.4-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:ff225824b3a0fa5e822442de72172f97028f04ae183877f1305d538d8d6c5d11", size = 970770 },
{ url = "https://files.pythonhosted.org/packages/0e/91/f97812ae9946944bcd9bfe1965af1cb9b1844919da879d90b90dfd3e5086/snowflake_connector_python-3.12.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9beced2789dc75e8f1e749aa637e7ec9b03302b4ed4b793ae0f1ff32823370e", size = 2519875 },
{ url = "https://files.pythonhosted.org/packages/37/52/500d72079bfb322ebdf3892180ecf3dc73c117b3a966ee8d4bb1378882b2/snowflake_connector_python-3.12.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ea47450a04ff713f3adf28053e34103bd990291e62daee9721c76597af4b2b5", size = 2542320 },
{ url = "https://files.pythonhosted.org/packages/59/92/74ead6bee8dd29fe372002ce59477221e04b9da96ad7aafe584afce02937/snowflake_connector_python-3.12.4-cp311-cp311-win_amd64.whl", hash = "sha256:748f9125854dca07ea471bb2bb3c5bb932a53f9b8a77ba348b50b738c77203ce", size = 918363 },
{ url = "https://files.pythonhosted.org/packages/a5/a3/1cbe0b52b810f069bdc96c372b2d91ac51aeac32986c2832aa3fe0b0b0e5/snowflake_connector_python-3.12.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4bcd0371b20d199f15e6a3c0b489bf18e27f2a88c84cf3194b2569ca039fa7d1", size = 957561 },
{ url = "https://files.pythonhosted.org/packages/f4/05/8a5e16bd908a89f36d59686d356890c4bd6a976a487f86274181010f4b49/snowflake_connector_python-3.12.4-cp312-cp312-macosx_11_0_x86_64.whl", hash = "sha256:7900d82a450b206fa2ed6c42cd65d9b3b9fd4547eca1696937175fac2a03ba37", size = 969045 },
{ url = "https://files.pythonhosted.org/packages/79/1b/8f5ab15d224d7bf76533c55cfd8ce73b185ce94d84241f0e900739ce3f37/snowflake_connector_python-3.12.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:300f0562aeea55e40ee03b45205dbef7b78f5ba2f1787a278c7b807e7d8db22c", size = 2533969 },
{ url = "https://files.pythonhosted.org/packages/6e/d9/2e2fd72e0251691b5c54a219256c455141a2d3c104e411b82de598c62553/snowflake_connector_python-3.12.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a6762a00948f003be55d7dc5de9de690315d01951a94371ec3db069d9303daba", size = 2558052 },
{ url = "https://files.pythonhosted.org/packages/e8/cb/e0ab230ad5adc9932e595bdbec693b2499d446666daf6cb9cae306a41dd2/snowflake_connector_python-3.12.4-cp312-cp312-win_amd64.whl", hash = "sha256:83ca896790a7463b6c8cd42e1a29b8ea197cc920839ae6ee96a467475eab4ec2", size = 916627 },
]
[[package]]
name = "snowflake-core"
version = "1.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "atpublic" },
{ name = "pydantic" },
{ name = "python-dateutil" },
{ name = "pyyaml" },
{ name = "requests" },
{ name = "snowflake-connector-python" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/1d/cf/6f91e5b2daaf3df9ae666a65f5ba3938f11a40784e4ada5218ecf154b29a/snowflake_core-1.0.2.tar.gz", hash = "sha256:8bf267ff1efcd17f157432c6e24f6d2eb6c2aeed66f43ab34b215aa76d8edf02", size = 1092618 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/75/3c/ec228b7325b32781081c72254dd0ef793943e853d82616e862e231909c6c/snowflake_core-1.0.2-py3-none-any.whl", hash = "sha256:55c37cf526a0d78dd3359ad96b9ecd7130bbbbc2f5a2fec77bb3da0dac2dc688", size = 1555690 },
]
[[package]]
name = "snowflake-legacy"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/94/41/a6211bd2109913eee1506d37865ab13cf9a8cc2faa41833da3d1ffec654b/snowflake_legacy-1.0.0.tar.gz", hash = "sha256:2044661c79ba01841ab279c5e74b994532244c9d103224eba16eb159c8ed6033", size = 4043 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/aa/8c/64f9b5ee0c3f376a733584c480b31addbf2baff7bb41f655e5e3f3719d3b/snowflake_legacy-1.0.0-py3-none-any.whl", hash = "sha256:25f9678f180d7d5f5b60d17f8112f0ee8a7a77b82c67fd599ed6e27bd502be5a", size = 3059 },
]
[[package]]
name = "sortedcontainers"
version = "2.4.0"
@@ -5184,6 +5325,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/c4/ac/ce90573ba446a9bbe65838ded066a805234d159b4446ae9f8ec5bbd36cbd/tomli_w-1.1.0-py3-none-any.whl", hash = "sha256:1403179c78193e3184bfaade390ddbd071cba48a32a2e62ba11aae47490c63f7", size = 6440 },
]
[[package]]
name = "tomlkit"
version = "0.13.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b1/09/a439bec5888f00a54b8b9f05fa94d7f901d6735ef4e55dcec9bc37b5d8fa/tomlkit-0.13.2.tar.gz", hash = "sha256:fff5fe59a87295b278abd31bec92c15d9bc4a06885ab12bcea52c71119392e79", size = 192885 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f9/b6/a447b5e4ec71e13871be01ba81f5dfc9d0af7e473da256ff46bc0e24026f/tomlkit-0.13.2-py3-none-any.whl", hash = "sha256:7a974427f6e119197f670fbbbeae7bef749a6c14e793db934baefc1b5f03efde", size = 37955 },
]
[[package]]
name = "torch"
version = "2.4.1"