Compare commits


15 Commits

Author SHA1 Message Date
Eduardo Chiarotti
65053e3b4a Create codeql.yml 2024-08-06 15:40:19 -03:00
Eduardo Chiarotti
498e96a419 Update issue templates (#1067)
* Update issue templates

Add both Bug and Feature templates

* Update feature_request.md
2024-08-06 14:47:00 -03:00
Thiago Moretto
c0c59dc932 Merge pull request #1064 from crewAIInc/thiago/pipeline-fix
Fix flaky test due to suppressed error on `on_llm_start` callback
2024-08-05 16:13:19 -03:00
Thiago Moretto
f3b3d321e5 Fix lint issue 2024-08-05 13:34:03 -03:00
Thiago Moretto
67e4433dc2 Fix flaky test due to suppressed error on on_llm_start callback 2024-08-05 13:29:39 -03:00
Rip&Tear
4a7ae8df71 Update LLM-Connections.md (#1039)
* Minor fixes and updates

* minor fixes across docs

* Updated LLM-Connections.md

---------

Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-08-02 15:04:52 -03:00
Rip&Tear
09f92122d5 Docs minor fixes (#1035)
* Minor fixes and updates

* minor fixes across docs

---------

Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-08-02 15:01:16 -03:00
Lorenze Jay
8118b7b7d6 Feat/sliding context window (#1042)
* patching for non-gpt model

* removal of json_object tool name assignment

* fixed issue for smaller models due to instructions prompt

* fixing for ollama llama3 models

* WIP: generated summary from documents split, could also create memgpt approach

* WIP: need tests but user inputted summarization strategy implemented - handling context window exceeding errors

* rm extra line

* removed type ignores

* added tests

* handling n to summarize prompt

* code cleanup, using click for cli asker

* rm not used class

* better refactor

* reverted poetry lock

* reverted poetry.locl

* improved context window exceeding exception class
2024-08-01 13:15:50 -07:00
João Moura
c93b85ac53 Preparing for new version 2024-07-30 19:21:18 -04:00
Lorenze Jay
6378f6caec WIP fixed mypy src types (#1036) 2024-07-30 10:59:50 -07:00
Eduardo Chiarotti
d824db82a3 feat: Add execution time to both task and testing feature (#1031)
* feat: Add execution time to both task and testing feature

* feat: Remove unused functions

* feat: change test_crew to evalaute_crew to avoid issues with testing libs

* feat: fix tests
2024-07-29 23:17:07 -03:00
Matt Young
de6b597eff telemetry.py - fix typo in comment. (#1020) 2024-07-29 23:03:51 -03:00
Deepak Tammali
6111d05219 docs: Fix crewai-tools package name typo in getting-started docs (#1026) 2024-07-29 23:03:32 -03:00
Monarch Wadia
f83c91d612 Fixed package name typo in pip install command (#1029)
Changed `pip install crewai-tools` to `pip install crewai-tools`
2024-07-29 23:02:48 -03:00
Mackensie Alvarez
c8f360414e Update Start-a-New-CrewAI-Project-Template-Method.md (#1030) 2024-07-29 23:02:18 -03:00
29 changed files with 823 additions and 124 deletions

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,35 @@
---
name: Bug report
about: Create a report to help us improve CrewAI
title: "[BUG]"
labels: bug
assignees: ''
---
**Description**
Provide a clear and concise description of what the bug is.
**Steps to Reproduce**
Provide a step-by-step process to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots/Code snippets**
If applicable, add screenshots or code snippets to help explain your problem.
**Environment Details:**
- **Operating System**: [e.g., Ubuntu 20.04, macOS Catalina, Windows 10]
- **Python Version**: [e.g., 3.8, 3.9, 3.10]
- **crewAI Version**: [e.g., 0.30.11]
- **crewAI Tools Version**: [e.g., 0.2.6]
**Logs**
Include relevant logs or error messages if applicable.
**Possible Solution**
Have a solution in mind? Please suggest it here, or write "None".
**Additional context**
Add any other context about the problem here.

View File

@@ -0,0 +1,21 @@
---
name: Feature request
about: Suggest a Feature to improve CrewAI
title: "[FEAT]"
labels: feature-request, improvement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
If possible, attach the related Issue.
**Describe the solution you'd like / Use-case**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/workflows/codeql.yml vendored Normal file
View File

@@ -0,0 +1,93 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
schedule:
- cron: '28 20 * * 1'
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

View File

@@ -254,7 +254,7 @@ pip install dist/*.tar.gz
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
There is NO data being collected on the prompts, tasks descriptions agents backstories or goals nor tools usage, no API calls, nor responses nor any data that is being processed by the agents, nor any secrets and env vars.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. We don't offer a way to disable it now, but we will in the future.
Data collected includes:
@@ -279,7 +279,7 @@ Data collected includes:
- Tools names available
- Understand out of the publically available tools, which ones are being used the most so we can improve them
Users can opt-in sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews.
Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
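For illustration, opting in looks roughly like this on a crew (a minimal sketch, not part of this diff; the agent and task values are placeholders):
```python
from crewai import Agent, Crew, Task

agent = Agent(role="Researcher", goal="Collect facts", backstory="A careful analyst.")
task = Task(
    description="Summarize a topic",
    expected_output="A short summary",
    agent=agent,
)

# share_crew=True opts in to the fuller telemetry described above
crew = Crew(agents=[agent], tasks=[task], share_crew=True)
```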
## License

View File

@@ -17,10 +17,10 @@ Beforre we start there are a couple of things to note:
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install crewai crewi-tools
$ pip install crewai crewai-tools
```
### Virtual Environemnts
### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:
1. Use venv (Python's built-in virtual environment tool):
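For example, a minimal venv setup (standard Python tooling, shown here for illustration; not part of this diff):
```shell
$ python -m venv .venv
$ source .venv/bin/activate   # On Windows: .venv\Scripts\activate
$ pip install crewai crewai-tools
```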

View File

@@ -7,6 +7,7 @@ description: Comprehensive guide on crafting, using, and managing custom tools w
This guide provides detailed instructions on creating custom tools for the crewAI framework and how to efficiently manage and utilize these tools, incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools, enabling agents to perform a wide range of actions.
### Prerequisites
Before creating your own tools, ensure you have the crewAI extra tools package installed:
```bash
@@ -31,7 +32,7 @@ class MyCustomTool(BaseTool):
### Using the `tool` Decorator
Alternatively, use the `tool` decorator for a direct approach to create tools. This requires specifying attributes and the tool's logic within a function.
Alternatively, you can use the tool decorator `@tool`. This approach allows you to define the tool's attributes and functionality directly within a function, offering a concise and efficient way to create specialized tools tailored to your needs.
```python
from crewai_tools import tool

View File

@@ -16,7 +16,7 @@ Here's an example of how to force the tool output as the result of an agent's ta
# Define a custom tool that returns the result as the answer
coding_agent = Agent(
role="Data Scientist",
goal="Product amazing resports on AI",
goal="Product amazing reports on AI",
backstory="You work with data and AI",
tools=[MyCustomTool(result_as_answer=True)],
)

View File

@@ -6,33 +6,25 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
CrewAI provides extensive versatility in integrating with various Language Models (LLMs), ranging from local options through Ollama (such as Llama and Mixtral) to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
## CrewAI Agent Overview
The platform supports connections to an array of Generative AI models, including:
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's a comprehensive overview of the Agent class attributes and methods (a short instantiation sketch follows the list):
- OpenAI's suite of advanced language models
- Anthropic's cutting-edge AI offerings
- Ollama's diverse range of locally-hosted generative model & embeddings
- LM Studio's diverse range of locally hosted generative models & embeddings
- Groq's Super Fast LLM offerings
- Azures' generative AI offerings
- HuggingFace's generative AI offerings
- **Attributes**:
- `role`: Defines the agent's role within the solution.
- `goal`: Specifies the agent's objective.
- `backstory`: Provides a background story to the agent.
- `cache` *Optional*: Determines whether the agent should use a cache for tool usage. Default is `True`.
- `max_rpm` *Optional*: Maximum number of requests per minute the agent's execution should respect.
- `verbose` *Optional*: Enables detailed logging of the agent's execution. Default is `False`.
- `allow_delegation` *Optional*: Allows the agent to delegate tasks to other agents. Default is `True`.
- `tools` *Optional*: Specifies the tools available to the agent for task execution.
- `max_iter` *Optional*: Maximum number of iterations for an agent to execute a task. Default is 25.
- `max_execution_time` *Optional*: Maximum execution time for an agent to execute a task.
- `step_callback` *Optional*: Provides a callback function to be executed after each step.
- `llm` *Optional*: Indicates the Large Language Model the agent uses. By default, it uses the GPT-4 model defined in the environment variable "OPENAI_MODEL_NAME".
- `function_calling_llm` *Optional*: Will turn the ReAct CrewAI agent into a function-calling agent.
- `callbacks` *Optional*: A list of callback functions from the LangChain library that are triggered during the agent's execution process.
- `system_template` *Optional*: String to define the system format for the agent.
- `prompt_template` *Optional*: String to define the prompt format for the agent.
- `response_template` *Optional*: String to define the response format for the agent.
This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability.
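To make the attribute list concrete, a minimal instantiation might look like the following (an illustrative sketch; the role, goal, and backstory values are hypothetical):
```python
from crewai import Agent

researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI developments",
    backstory="An analyst who tracks AI research trends.",
    verbose=True,            # detailed execution logging
    allow_delegation=False,  # keep work on this agent
    max_iter=25,             # default iteration cap
)
```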
## Changing the default LLM
The default LLM is provided through the `langchain-openai` package, which is installed by default when you install CrewAI. You can change this default LLM to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable. This straightforward process allows you to harness the power of different OpenAI models, enhancing the flexibility and capabilities of your CrewAI implementation.
```python
# Required
os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
@@ -45,30 +37,27 @@ example_agent = Agent(
verbose=True
)
```
## Ollama Local Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
## Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below.
### Setting Up Ollama
- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables:
```sh
OPENAI_API_BASE='http://localhost:11434'
OPENAI_MODEL_NAME='llama2' # Adjust based on available model
OPENAI_API_KEY=''
os.environ["OPENAI_API_BASE"]='http://localhost:11434'
os.environ["OPENAI_MODEL_NAME"]='llama2'  # Adjust based on available model
os.environ["OPENAI_API_KEY"]=''  # No API Key required for Ollama
```
## Ollama Integration (ex. for using Llama 2 locally)
1. [Download Ollama](https://ollama.com/download).
2. After setting up the Ollama, Pull the Llama2 by typing following lines into the terminal ```ollama pull llama2```.
3. Enjoy your free Llama2 model that powered up by excellent agents from crewai.
## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
1. [Download and install Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama3.1 8B model by typing the following line into your terminal ```ollama run llama3.1```.
3. Llama3.1 should now be served locally on `http://localhost:11434`
```
from crewai import Agent, Task, Crew
from langchain.llms import Ollama
from langchain_ollama import ChatOllama
import os
os.environ["OPENAI_API_KEY"] = "NA"
llm = Ollama(
model = "llama2",
model = "llama3.1",
base_url = "http://localhost:11434")
general_agent = Agent(role = "Math Professor",
@@ -98,13 +87,14 @@ There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
from langchain_community.llms import HuggingFaceEndpoint
from langchain_huggingface import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="<YOUR_ENDPOINT_URL_HERE>",
huggingfacehub_api_token="<HF_TOKEN_HERE>",
repo_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
max_new_tokens=512
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
agent = Agent(
@@ -115,66 +105,50 @@ agent = Agent(
)
```
### From HuggingFaceHub endpoint
```python
from langchain_community.llms import HuggingFaceHub
llm = HuggingFaceHub(
repo_id="HuggingFaceH4/zephyr-7b-beta",
huggingfacehub_api_token="<HF_TOKEN_HERE>",
task="text-generation",
)
```
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
### Configuration Examples
#### FastChat
```sh
OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_MODEL_NAME='oh-2.5m7b-q51'
OPENAI_API_KEY=NA
os.environ[OPENAI_API_BASE]="http://localhost:8001/v1"
os.environ[OPENAI_MODEL_NAME]='oh-2.5m7b-q51'
os.environ[OPENAI_API_KEY]=NA
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```sh
OPENAI_API_BASE="http://localhost:1234/v1"
OPENAI_API_KEY="lm-studio"
os.environ[OPENAI_API_BASE]="http://localhost:1234/v1"
os.environ[OPENAI_API_KEY]="lm-studio"
```
#### Groq API
```sh
OPENAI_API_KEY=your-groq-api-key
OPENAI_MODEL_NAME='llama3-8b-8192'
OPENAI_API_BASE=https://api.groq.com/openai/v1
os.environ["OPENAI_API_KEY"]="your-groq-api-key"
os.environ["OPENAI_MODEL_NAME"]='llama3-8b-8192'
os.environ["OPENAI_API_BASE"]="https://api.groq.com/openai/v1"
```
#### Mistral API
```sh
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"
os.environ["OPENAI_API_KEY"]="your-mistral-api-key"
os.environ["OPENAI_API_BASE"]="https://api.mistral.ai/v1"
os.environ["OPENAI_MODEL_NAME"]="mistral-small"
```
### Solar
```python
from langchain_community.chat_models.solar import SolarChat
# Initialize language model
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"
llm = SolarChat(max_tokens=1024)

# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
```
```sh
os.environ["SOLAR_API_BASE"]="https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"]="your-solar-api-key"
```
### text-gen-web-ui
```sh
OPENAI_API_BASE=http://localhost:5000/v1
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
```
### Cohere
```python
@@ -190,10 +164,11 @@ llm = ChatCohere()
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""
os.environ["AZURE_OPENAI_DEPLOYMENT"] = "Your deployment"
os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your API Key>"
```
### Example Agent with Azure LLM
@@ -216,6 +191,5 @@ azure_agent = Agent(
llm=azure_llm
)
```
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

View File

@@ -5,7 +5,7 @@ description: Understanding the telemetry data collected by CrewAI and how it con
## Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users. We don't offer a way to disable it now, but we will in the future.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.
@@ -22,7 +22,7 @@ It's pivotal to understand that **NO data is collected** concerning prompts, tas
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
### Opt-In Further Telemetry Sharing
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data sharing preferences. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.

View File

@@ -1,9 +1,9 @@
# CodeInterpreterTool
## Description
This tool is used to give the Agent the ability to run code (Python3) from the code generated by the Agent itself. The code is executed in a sandboxed environment, so it is safe to run any code.
This tool enables the Agent to execute Python 3 code that it has generated autonomously. The code is run in a secure, isolated environment, ensuring safety regardless of the content.
It is incredible useful since it allows the Agent to generate code, run it in the same environment, get the result and use it to make decisions.
This functionality is particularly valuable as it allows the Agent to create code, execute it within the same ecosystem, obtain the results, and utilize that information to inform subsequent decisions and actions.
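As a sketch of that workflow (assuming `crewai_tools` is installed; the agent fields below are illustrative, not from this diff):
```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool

# Give an agent the ability to execute the Python code it generates
analyst = Agent(
    role="Data Analyst",
    goal="Answer questions by writing and running Python",
    backstory="An analyst who validates ideas by executing code.",
    tools=[CodeInterpreterTool()],
)
```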
## Requirements

View File

@@ -2,7 +2,7 @@
## Description
This tools is a wrapper around the composio toolset and gives your agent access to a wide variety of tools from the composio SDK.
This tool is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the composio SDK.
## Installation
@@ -19,7 +19,7 @@ after the installation is complete, either run `composio login` or export your c
The following example demonstrates how to initialize the tool and execute a github action:
1. Initialize toolset
1. Initialize Composio tools
```python
from composio import App

View File

@@ -40,10 +40,9 @@ The `SerperDevTool` comes with several parameters that will be passed to the API
- **locale**: Optional. Specify the locale for the search results.
- **n_results**: Number of search results to return. Default is `10`.
The values for `country`, `location`, `lovale` and `search_url` can be found on the [Serper Playground](https://serper.dev/playground).
The values for `country`, `location`, `locale` and `search_url` can be found on the [Serper Playground](https://serper.dev/playground).
## Example with Parameters
Here is an example demonstrating how to use the tool with additional parameters:
```python

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.41.1"
version = "0.46.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"

View File

@@ -3,7 +3,6 @@ from typing import TYPE_CHECKING, Optional
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities import I18N
@@ -39,18 +38,17 @@ class CrewAgentExecutorMixin:
and "Action: Delegate work to coworker" not in output.log
):
try:
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
if (
hasattr(self.crew, "_short_term_memory")
and self.crew._short_term_memory
):
self.crew._short_term_memory.save(memory)
self.crew._short_term_memory.save(
value=output.log,
metadata={
"observation": self.task.description,
},
agent=self.crew_agent.role,
)
except Exception as e:
print(f"Failed to add to short term memory: {e}")
pass

View File

@@ -1,6 +1,8 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
import click
from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
@@ -11,12 +13,21 @@ from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.logger import Logger
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
@@ -40,6 +51,8 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
system_template: Optional[str] = None
prompt_template: Optional[str] = None
response_template: Optional[str] = None
_logger: Logger = Logger(verbose_level=2)
_fit_context_window_strategy: Optional[Literal["summarize"]] = "summarize"
def _call(
self,
@@ -131,7 +144,7 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self.agent.plan( # type: ignore # Incompatible types in assignment (expression has type "AgentAction | AgentFinish | list[AgentAction]", variable has type "AgentAction")
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
@@ -185,6 +198,27 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
yield AgentStep(action=output, observation=observation)
return
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
output = self._handle_context_length_error(
intermediate_steps, run_manager, inputs
)
if isinstance(output, AgentFinish):
yield output
elif isinstance(output, list):
for step in output:
yield step
return
yield AgentStep(
action=AgentAction("_Exception", str(e), str(e)),
observation=str(e),
)
return
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
if self.should_ask_for_human_input:
@@ -235,6 +269,7 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
agent=self.crew_agent,
action=agent_action,
)
tool_calling = tool_usage.parse(agent_action.log)
if isinstance(tool_calling, ToolUsageErrorException):
@@ -280,3 +315,91 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
CrewTrainingHandler(TRAINING_DATA_FILE).append(
self.crew._train_iteration, agent_id, training_data
)
def _handle_context_length(
self, intermediate_steps: List[Tuple[AgentAction, str]]
) -> List[Tuple[AgentAction, str]]:
text = intermediate_steps[0][1]
original_action = intermediate_steps[0][0]
text_splitter = RecursiveCharacterTextSplitter(
separators=["\n\n", "\n"],
chunk_size=8000,
chunk_overlap=500,
)
if self._fit_context_window_strategy == "summarize":
docs = text_splitter.create_documents([text])
self._logger.log(
"debug",
"Summarizing Content, it is recommended to use a RAG tool",
color="bold_blue",
)
summarize_chain = load_summarize_chain(
self.llm, chain_type="map_reduce", verbose=True
)
summarized_docs = []
for doc in docs:
summary = summarize_chain.invoke(
{"input_documents": [doc]}, return_only_outputs=True
)
summarized_docs.append(summary["output_text"])
formatted_results = "\n\n".join(summarized_docs)
summary_step = AgentStep(
action=AgentAction(
tool=original_action.tool,
tool_input=original_action.tool_input,
log=original_action.log,
),
observation=formatted_results,
)
summary_tuple = (summary_step.action, summary_step.observation)
return [summary_tuple]
return intermediate_steps
def _handle_context_length_error(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun],
inputs: Dict[str, str],
) -> Union[AgentFinish, List[AgentStep]]:
self._logger.log(
"debug",
"Context length exceeded. Asking user if they want to use summarize prompt to fit, this will reduce context length.",
color="yellow",
)
user_choice = click.confirm(
"Context length exceeded. Do you want to summarize the text to fit models context window?"
)
if user_choice:
self._logger.log(
"debug",
"Context length exceeded. Using summarize prompt to fit, this will reduce context length.",
color="bold_blue",
)
intermediate_steps = self._handle_context_length(intermediate_steps)
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
if isinstance(output, AgentFinish):
return output
elif isinstance(output, AgentAction):
return [AgentStep(action=output, observation=None)]
else:
return [AgentStep(action=action, observation=None) for action in output]
else:
self._logger.log(
"debug",
"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = "^0.41.1" }
crewai = { extras = ["tools"], version = "^0.46.0" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"

View File

@@ -1,3 +1,4 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
@@ -18,7 +19,14 @@ class ShortTermMemory(Memory):
)
super().__init__(storage)
def save(self, item: ShortTermMemoryItem) -> None:
def save(
self,
value: Any,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
super().save(value=item.data, metadata=item.metadata, agent=item.agent)
def search(self, query: str, score_threshold: float = 0.35):

View File

@@ -3,7 +3,10 @@ from typing import Any, Dict, Optional
class ShortTermMemoryItem:
def __init__(
self, data: Any, agent: str, metadata: Optional[Dict[str, Any]] = None
self,
data: Any,
agent: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
):
self.data = data
self.agent = agent

View File

@@ -4,7 +4,7 @@ from typing import Any, Dict
class Storage:
"""Abstract base class defining the storage interface"""
def save(self, key: str, value: Any, metadata: Dict[str, Any]) -> None:
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
pass
def search(self, key: str) -> Dict[str, Any]: # type: ignore

View File

@@ -40,7 +40,7 @@ class Telemetry:
- Roles of agents in a crew
- Tools names available
Users can opt-in to sharing more complete data suing the `share_crew`
Users can opt-in to sharing more complete data using the `share_crew`
attribute in the Crew class.
"""

View File

@@ -16,7 +16,7 @@ try:
except ImportError:
agentops = None
OPENAI_BIGGER_MODELS = ["gpt-4"]
OPENAI_BIGGER_MODELS = ["gpt-4o"]
class ToolUsageErrorException(Exception):

View File

@@ -7,6 +7,9 @@ from .parser import YamlParser
from .printer import Printer
from .prompts import Prompts
from .rpm_controller import RPMController
from .exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
__all__ = [
"Converter",
@@ -19,4 +22,5 @@ __all__ = [
"Prompts",
"RPMController",
"YamlParser",
"LLMContextLengthExceededException",
]

View File

@@ -0,0 +1,26 @@
class LLMContextLengthExceededException(Exception):
CONTEXT_LIMIT_ERRORS = [
"maximum context length",
"context length exceeded",
"context_length_exceeded",
"context window full",
"too many tokens",
"input is too long",
"exceeds token limit",
]
def __init__(self, error_message: str):
self.original_error_message = error_message
super().__init__(self._get_error_message(error_message))
def _is_context_limit_error(self, error_message: str) -> bool:
return any(
phrase.lower() in error_message.lower()
for phrase in self.CONTEXT_LIMIT_ERRORS
)
def _get_error_message(self, error_message: str):
return (
f"LLM context length exceeded. Original error: {error_message}\n"
"Consider using a smaller input or implementing a text splitting strategy."
)
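For reference, the executor change above probes arbitrary exceptions with this class; the detection pattern reduces to roughly the following (a condensed sketch, not new code from this diff):
```python
from crewai.utilities import LLMContextLengthExceededException

def is_context_error(err: Exception) -> bool:
    # Mirrors the check used in CrewAgentExecutor._iter_next_step
    msg = str(err)
    return LLMContextLengthExceededException(msg)._is_context_limit_error(msg)

print(is_context_error(RuntimeError("maximum context length exceeded")))  # True
```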

View File

@@ -10,24 +10,24 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
class TokenCalcHandler(BaseCallbackHandler):
model_name: str = ""
token_cost_process: TokenProcess
encoding: tiktoken.Encoding
def __init__(self, model_name, token_cost_process):
self.model_name = model_name
self.token_cost_process = token_cost_process
try:
self.encoding = tiktoken.encoding_for_model(self.model_name)
except KeyError:
self.encoding = tiktoken.get_encoding("cl100k_base")
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
try:
encoding = tiktoken.encoding_for_model(self.model_name)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
if self.token_cost_process is None:
return
for prompt in prompts:
self.token_cost_process.sum_prompt_tokens(len(encoding.encode(prompt)))
self.token_cost_process.sum_prompt_tokens(len(self.encoding.encode(prompt)))
async def on_llm_new_token(self, token: str, **kwargs) -> None:
self.token_cost_process.sum_completion_tokens(1)
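The counting itself is plain tiktoken usage; a standalone sketch of what the cached-encoding version computes (assuming `tiktoken` is installed):
```python
from typing import List

import tiktoken

def count_prompt_tokens(model_name: str, prompts: List[str]) -> int:
    try:
        encoding = tiktoken.encoding_for_model(model_name)
    except KeyError:
        # Fall back to a general-purpose encoding, as the handler does
        encoding = tiktoken.get_encoding("cl100k_base")
    return sum(len(encoding.encode(p)) for p in prompts)

print(count_prompt_tokens("gpt-4o", ["Hello world"]))
```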

View File

@@ -7,6 +7,7 @@ import pytest
from langchain.tools import tool
from langchain_core.exceptions import OutputParserException
from langchain_openai import ChatOpenAI
from langchain.schema import AgentAction
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
@@ -1014,3 +1015,75 @@ def test_agent_max_retry_limit():
),
]
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_context_length_exceeds_limit():
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
)
original_action = AgentAction(
tool="test_tool", tool_input="test_input", log="test_log"
)
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
expected_output="The final answer",
)
agent.execute_task(
task=task,
)
private_mock.assert_called_once()
with patch("crewai.agents.executor.click") as mock_prompt:
mock_prompt.return_value = "y"
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.side_effect = ValueError(
"Context length limit exceeded"
)
long_input = "This is a very long input. " * 10000
# Attempt to handle context length, expecting the mocked error
with pytest.raises(ValueError) as excinfo:
agent.agent_executor._handle_context_length(
[(original_action, long_input)]
)
assert "Context length limit exceeded" in str(excinfo.value)
mock_handle_context.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_context_length_exceeds_limit_cli_no():
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
)
task = Task(description="test task", agent=agent, expected_output="test output")
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
expected_output="The final answer",
)
agent.execute_task(
task=task,
)
private_mock.assert_called_once()
with patch("crewai.agents.executor.click") as mock_prompt:
mock_prompt.return_value = "n"
pytest.raises(SystemExit)
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.assert_not_called()

View File

@@ -0,0 +1,181 @@
interactions:
- request:
body: '{"messages": [{"content": "You are test role. test backstory\nYour personal
goal is: test goalTo give my best complete final answer to the task use the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\nYour final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!\nCurrent Task: The final answer is 42.
But don''t give it yet, instead keep using the `get_final_answer` tool.\n\nThis
is the expect criteria for your final answer: The final answer \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '938'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.35.10
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.35.10
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ac1a40879b87d1f-LAX
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Thu, 01 Aug 2024 00:16:40 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=MHUl15YVi607cmuLtQ84ESiH30IyJiIW1a40fopQ81w-1722471400-1.0.1.1-OGpq5Ezj6iE0ToM1diQllGb70.J3O_K2De9NbwZPWmW2qN07U20adJ_0yd6PKUNqMdL.xEnLcNAOWVmsfrLUrQ;
path=/; expires=Thu, 01-Aug-24 00:46:40 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=G2ZVNvfNfFk4DeKyZ7jMYetG7wOasINAGHstrOnuAY8-1722471400129-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '131'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_b68b417b3fe1c67244279551e411b37a
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,162 @@
interactions:
- request:
body: '{"messages": [{"content": "You are test role. test backstory\nYour personal
goal is: test goalTo give my best complete final answer to the task use the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\nYour final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!\nCurrent Task: The final answer is 42.
But don''t give it yet, instead keep using the `get_final_answer` tool.\n\nThis
is the expect criteria for your final answer: The final answer \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '938'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.35.10
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.35.10
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ac331b9eaee2b7f-LAX
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Thu, 01 Aug 2024 04:48:09 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=OXht5zC71vWYFW_z_933m3sZfFS2xBez0DHv93FvT5s-1722487689-1.0.1.1-wE8JTR7MnwUgiiTDppYg8A7zLEiidth.MB0zrwONeAtNWRjKC1tuGf8LZYDlYIHUhqG73syYExpZ.5pZhzJkcg;
path=/; expires=Thu, 01-Aug-24 05:18:09 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=PAR7y4xRe4VzRT.7GK34Tq5r8vevY6xq0E.i.R40xnU-1722487689562-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '84'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_105dcfc53c9672dea0437249c12c3319
status:
code: 200
message: OK
version: 1

View File

@@ -632,21 +632,18 @@ def test_sequential_async_task_execution_completion():
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
max_retry_limit=3,
agent=researcher,
async_execution=True,
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events that shaped the technology.",
expected_output="Bullet point list of 5 important events.",
max_retry_limit=3,
agent=researcher,
async_execution=True,
)
write_article = Task(
description="Write an article about the history of AI and its most important events.",
expected_output="A 4 paragraph article about AI.",
max_retry_limit=3,
agent=writer,
context=[list_ideas, list_important_history],
)

View File

@@ -23,10 +23,7 @@ def short_term_memory():
expected_output="A list of relevant URLs based on the search query.",
agent=agent,
)
return ShortTermMemory(crew=Crew(
agents=[agent],
tasks=[task]
))
return ShortTermMemory(crew=Crew(agents=[agent], tasks=[task]))
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -38,7 +35,11 @@ def test_save_and_search(short_term_memory):
agent="test_agent",
metadata={"task": "test_task"},
)
short_term_memory.save(memory)
short_term_memory.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)
find = short_term_memory.search("test value", score_threshold=0.01)[0]
assert find["context"] == memory.data, "Data value mismatch."