merge upstream

This commit is contained in:
Braelyn Boynton
2024-04-02 12:22:49 -07:00
73 changed files with 209126 additions and 2569 deletions

.editorconfig Normal file (+14)

@@ -0,0 +1,14 @@
# .editorconfig
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 2

.gitignore vendored (+5)

@@ -7,4 +7,7 @@ assets/*
.idea
test/
docs_crew/
chroma.sqlite3
old_en.json
db/*
.db


@@ -49,7 +49,7 @@ To get started with CrewAI, follow these simple steps:
pip install crewai
```
If you also want to install crewai-tools, a package of extra tools that agents can use (at the cost of additional dependencies), you can install it as in the example below:
```shell
pip install 'crewai[tools]'
```

docs/CNAME Normal file (+1)

@@ -0,0 +1 @@
docs.crewai.com


@@ -23,6 +23,7 @@ Tasks in CrewAI can be designed to require collaboration between agents. For exa
| **Output Pydantic** *(optional)* | Takes a Pydantic model and returns the output as a Pydantic object. **The agent's LLM needs to use an OpenAI client; it could be Ollama, for example, as long as it goes through the OpenAI wrapper.** |
| **Output File** *(optional)* | Takes a file path and saves the output of the task on it. |
| **Callback** *(optional)* | A function to be executed after the task is completed. |
| **Human Input** *(optional) - Release Candidate* | Indicates whether the agent should ask for human feedback at the end of the task. |
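For instance, a task combining several of these attributes might look like the following (a minimal sketch; the agent, file path, and callback are illustrative placeholders):
```python
from crewai import Task

def notify(output):
    # Hypothetical callback: runs once the task has completed
    print(f"Task finished: {output}")

summary_task = Task(
    description='Summarize the latest AI news',
    expected_output='A bullet-point summary',
    agent=research_agent,      # an Agent defined elsewhere
    output_file='summary.md',  # save the task output to this file
    callback=notify,           # executed after the task is completed
    human_input=True,          # ask for human feedback before finalizing
)
```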
## Creating a Task
@@ -54,12 +55,12 @@ from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
research_agent = Agent(
  role='Researcher',
  goal='Find and summarize the latest AI news',
  backstory="""You're a researcher at a large company.
  You're responsible for analyzing data and providing insights
  to the business.""",
  verbose=True
)
search_tool = SerperDevTool()
@@ -224,4 +225,4 @@ These validations help in maintaining the consistency and reliability of task ex
## Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.


@@ -195,6 +195,42 @@ agent = Agent(
)
```
### Custom Caching Mechanism
Tools can now have an optional attribute called `cache_function`, which gives you
fine-grained control over when a tool's results are cached and when they are not.
A good example might be a tool that fetches security prices, where you are okay
caching treasury values but not stock values.
```python
from crewai_tools import tool
@tool
def multiplication_tool(first_number: int, second_number: int) -> int:
    """Useful for when you need to multiply two numbers together."""
    return first_number * second_number

def cache_func(args, result):
    # The cache function will receive:
    # - the arguments passed to the tool
    # - the result of the tool
    #
    # In this case we only cache the result if it is a multiple of 2
    cache = result % 2 == 0
    return cache

multiplication_tool.cache_function = cache_func

writer1 = Agent(
    role="Writer",
    goal="You write math lessons for kids.",
    backstory="You're an expert in writing and you love to teach kids, but you know nothing of math.",
    tools=[multiplication_tool],
    allow_delegation=False,
)
#...
```
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChain's comprehensive toolkit for search-based queries and more. Here are the built-in tools offered by LangChain: [LangChain Toolkit](https://python.langchain.com/docs/integrations/tools/)
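As a minimal sketch of the pattern (assuming, as above, that a LangChain tool can be passed straight into an agent's `tools` list; `DuckDuckGoSearchRun` is used purely as an illustration):
```python
from crewai import Agent
from langchain_community.tools import DuckDuckGoSearchRun

# A ready-made LangChain tool
search = DuckDuckGoSearchRun()

agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[search]  # LangChain tools can sit alongside crewAI tools here
)
```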


@@ -5,43 +5,43 @@ description: Guide on how to create and use custom tools within the crewAI frame
## Creating your own Tools
!!! example "Custom Tool Creation"
Developers can craft custom tools tailored to their agents' needs or utilize pre-built options.
To create your own crewAI tools, you will need to install our extra tools package:
```bash
pip install 'crewai[tools]'
```
Once installed, there are two primary methods for creating a crewAI tool:
### Subclassing `BaseTool`
To define a custom tool, create a new class that inherits from `BaseTool`. Specify the `name`, `description`, and implement the `_run` method to outline its operational logic.
```python
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for. Your agent will need this information to utilize it effectively."

    def _run(self, argument: str) -> str:
        # Implementation details go here
        return "Result from custom tool"
```
Define a new class inheriting from `BaseTool`, specifying `name`, `description`, and the `_run` method for operational logic.
### Utilizing the `tool` Decorator
For a more straightforward approach, employ the `tool` decorator to create a `Tool` object directly. This method requires specifying the required attributes and functional logic within a decorated function.
```python
from crewai_tools import tool
@tool("Name of my tool")
def my_tool(question: str) -> str:
"""Clear description for what this tool is useful for, you agent will need this information to use it."""
# Function logic here
"""Provide a clear description of what this tool is useful for. Your agent will need this information to use it."""
# Implement function logic here
```
@@ -49,43 +49,30 @@
```python
import json
import requests
from crewai import Agent
from crewai.tools import tool
from unstructured.partition.html import partition_html

# Decorate the function with the tool decorator from crewAI
@tool("Integration with a Given API")
def integration_tool(argument: str) -> str:
    """Details the integration process with a given API."""
    # Implementation details
    return "Results to be sent back to the agent"

# Assign the tool to an agent
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[integration_tool]
)
```
### Defining a Cache Function for the Tool
By default, all tools have caching enabled, meaning that if a tool is called with the same arguments by any agent in the crew, it will return the same result. However, specific scenarios may require more tailored caching strategies. For these cases, use the `cache_function` attribute to assign a function that determines whether the result should be cached.
```python
@tool("Integration with a Given API")
def integration_tool(argument: str) -> str:
    """Integration with a given API."""
    # Implementation details
    return "Results to be sent back to the agent"

def cache_strategy(arguments: dict, result: str) -> bool:
    # Only cache results that match a given condition
    if result == "some_value":
        return True
    return False

integration_tool.cache_function = cache_strategy
```


@@ -1,6 +1,6 @@
---
title: Human Input on Execution [Release Candidate]
description: Comprehensive guide on integrating CrewAI with human input during execution in complex decision-making processes or when needing help during complex tasks.
---
# Human Input in Agent Execution
@@ -9,7 +9,7 @@ Human input plays a pivotal role in several agent execution scenarios, enabling
## Using Human Input with CrewAI
The easiest way to integrate human input into agent execution is by setting the `human_input` flag in the task definition. When this flag is enabled, the agent will prompt the user for input before giving its final answer. This input can be used to provide additional context, clarify ambiguities, or validate the agent's output.
### Example:
@@ -23,14 +23,10 @@ import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"

# Loading Tools
search_tool = SerperDevTool()
# Define your agents with roles, goals, and tools
@@ -44,7 +40,7 @@ researcher = Agent(
),
verbose=True,
allow_delegation=False,
tools=[search_tool]
)
writer = Agent(
role='Tech Content Strategist',
@@ -67,6 +63,7 @@ task1 = Task(
),
expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
agent=researcher,
human_input=True, # setting the flag on for human input in this task
)
task2 = Task(


@@ -50,6 +50,39 @@ OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_KEY=''
```
## HuggingFace Integration
There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
from langchain_community.llms import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
    endpoint_url="<YOUR_ENDPOINT_URL_HERE>",
    huggingfacehub_api_token="<HF_TOKEN_HERE>",
    task="text-generation",
    max_new_tokens=512
)

agent = Agent(
    role="HuggingFace Agent",
    goal="Generate text using HuggingFace",
    backstory="A diligent explorer of GitHub docs.",
    llm=llm
)
```
### From HuggingFaceHub endpoint
```python
from langchain_community.llms import HuggingFaceHub
llm = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    huggingfacehub_api_token="<HF_TOKEN_HERE>",
    task="text-generation",
)
```
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, and Mistral AI.
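For example, pointing CrewAI at a local LM Studio server would look like this (a sketch; `http://localhost:1234/v1` is LM Studio's default base URL and may differ on your machine):
```sh
OPENAI_API_BASE=http://localhost:1234/v1
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
```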
@@ -75,8 +108,27 @@ OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"
```
### text-gen-web-ui
```sh
OPENAI_API_BASE=http://localhost:5000/v1
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
```
### Cohere
```python
import os

from langchain_community.chat_models import ChatCohere

# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
llm = ChatCohere()
```
Free developer API key available here: https://cohere.com/
Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
### Azure Open AI
Azure's OpenAI API needs a distinct setup, utilizing the `langchain_openai` component for Azure-specific configurations.
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""


@@ -1,8 +1,5 @@
# CSVSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -34,4 +31,32 @@ tool = CSVSearchTool()
## Arguments
- `csv` : The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = CSVSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -0,0 +1,65 @@
# CodeDocsSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The CodeDocsSearchTool is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within code documentation. It enables users to efficiently find specific information or topics within code documentation. By providing a `docs_url` during initialization, the tool narrows down the search to that particular documentation site. Alternatively, without a specific `docs_url`, it searches across a wide array of code documentation known or discovered throughout its execution, making it versatile for various documentation search needs.
## Installation
To start using the CodeDocsSearchTool, first install the crewai_tools package via pip:
```
pip install 'crewai[tools]'
```
## Example
Utilize the CodeDocsSearchTool as follows to conduct searches within code documentation:
```python
from crewai_tools import CodeDocsSearchTool
# To search any code documentation content if the URL is known or discovered during its execution:
tool = CodeDocsSearchTool()
# OR
# To specifically focus your search on a given documentation site by providing its URL:
tool = CodeDocsSearchTool(docs_url='https://docs.example.com/reference')
```
Note: Substitute 'https://docs.example.com/reference' with your target documentation URL, and replace the search query with one relevant to your needs.
## Arguments
- `docs_url`: Optional. Specifies the URL of the code documentation to be searched. Providing this during the tool's initialization focuses the search on the specified documentation content.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = CodeDocsSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# DOCXSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -33,3 +30,31 @@ tool = DOCXSearchTool(docx='path/to/your/document.docx')
## Arguments
- `docx`: An optional file path to a specific DOCX document you wish to search. If not provided during initialization, the tool allows for later specification of any DOCX file's content path for searching.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = DOCXSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# DirectorySearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -30,4 +27,32 @@ tool = DirectorySearchTool(directory='/path/to/directory')
```
## Arguments
- `directory` : This string argument specifies the directory within which to search. It is mandatory if the tool has not been initialized with a directory; otherwise, the tool will only search within the initialized directory.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = DirectorySearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,30 +1,27 @@
# GithubSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The GithubSearchTool is a RAG (Retrieval-Augmented Generation) tool specifically designed for conducting semantic searches within GitHub repositories. Utilizing advanced semantic search capabilities, it sifts through code, pull requests, issues, and repositories, making it an essential tool for developers, researchers, or anyone in need of precise information from GitHub.
## Installation
To use the GithubSearchTool, first ensure the crewai_tools package is installed in your Python environment:
```shell
pip install 'crewai[tools]'
```
This command installs the necessary package to run the GithubSearchTool along with any other tools included in the crewai_tools package.
## Example
Here's how you can use the GithubSearchTool to perform semantic searches within a GitHub repository:
```python
from crewai_tools import GithubSearchTool
# Initialize the tool for semantic searches within a specific GitHub repository
tool = GithubSearchTool(
    github_repo='https://github.com/example/repo',
    content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
@@ -32,7 +29,7 @@ tool = GitHubSearchTool(
# OR
# Initialize the tool without a specific repository, so the agent can search any repository it learns about during its execution
tool = GithubSearchTool(
    content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
```
@@ -40,3 +37,31 @@ tool = GitHubSearchTool(
## Arguments
- `github_repo` : The URL of the GitHub repository where the search will be conducted. This is a mandatory field and specifies the target repository for your search.
- `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code, `repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues. This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = GithubSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# JSONSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -30,4 +27,32 @@ tool = JSONSearchTool(json_path='./path/to/your/file.json')
```
## Arguments
- `json_path` (str): An optional argument that defines the path to the JSON file to be searched. This parameter is only necessary if the tool is initialized without a specific JSON path. Providing this argument restricts the search to the specified JSON file.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = JSONSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# MDXSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -32,4 +29,32 @@ tool = MDXSearchTool(mdx='path/to/your/document.mdx')
```
## Arguments
- mdx: **Optional** The MDX path for the search. Can be provided at initialization.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = MDXSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# PDFSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -33,3 +30,31 @@ tool = PDFSearchTool(pdf='path/to/your/document.pdf')
## Arguments
- `pdf`: **Optional** The PDF path for the search. Can be provided at initialization or within the `run` method's arguments. If provided at initialization, the tool confines its search to the specified document.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = PDFSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# PGSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -24,11 +21,38 @@ from crewai_tools import PGSearchTool
# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')
```
## Arguments
The PGSearchTool requires the following arguments for its operation:
- `db_uri`: A string representing the URI of the PostgreSQL database to be queried. This argument is mandatory and must include the necessary authentication details and the location of the database.
- `table_name`: A string specifying the name of the table within the database on which the semantic search will be performed. This argument is mandatory.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = PGSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# ScrapeWebsiteTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -24,7 +21,11 @@ tool = ScrapeWebsiteTool()
# Initialize the tool with the website URL, so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')
# Extract the text from the site
text = tool.run()
print(text)
```
## Arguments
- `website_url` : Mandatory website URL to read the file. This is the primary input for the tool, specifying which website's content should be scraped and read.


@@ -1,8 +1,5 @@
# TXTSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -35,3 +32,31 @@ tool = TXTSearchTool(txt='path/to/text/file.txt')
## Arguments
- `txt` (str): **Optional**. The path to the text file you want to search. This argument is only required if the tool was not initialized with a specific text file; otherwise, the search will be conducted within the initially provided text file.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = TXTSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# WebsiteSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -32,4 +29,32 @@ tool = WebsiteSearchTool(website='https://example.com')
```
## Arguments
- `website` : An optional argument that specifies the valid website URL to perform the search on. This becomes necessary if the tool is initialized without a specific website. In the `WebsiteSearchToolSchema`, this argument is mandatory. However, in the `FixedWebsiteSearchToolSchema`, it becomes optional if a website is provided during the tool's initialization, as it will then only search within the predefined website's content.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = WebsiteSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# XMLSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -33,3 +30,31 @@ tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')
## Arguments
- `xml`: This is the path to the XML file you wish to search. It is an optional parameter during the tool's initialization but must be provided either at initialization or as part of the `run` method's arguments to execute a search.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = XMLSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# YoutubeChannelSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -33,3 +30,31 @@ tool = YoutubeChannelSearchTool(youtube_channel_handle='@exampleChannel')
## Arguments
- `youtube_channel_handle` : A mandatory string representing the Youtube channel handle. This parameter is crucial for initializing the tool to specify the channel you want to search within. The tool is designed to only search within the content of the provided channel handle.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = YoutubeChannelSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -1,8 +1,5 @@
# YoutubeVideoSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
@@ -31,8 +28,37 @@ tool = YoutubeVideoSearchTool()
# Targeted search within a specific Youtube video's content
tool = YoutubeVideoSearchTool(youtube_video_url='https://youtube.com/watch?v=example')
```
## Arguments
The YoutubeVideoSearchTool accepts the following initialization arguments:
- `youtube_video_url`: An optional argument at initialization but required if targeting a specific Youtube video. It specifies the Youtube video URL path you want to search within.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = YoutubeVideoSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    ),
)
```


@@ -153,7 +153,7 @@ nav:
- Github RAG Search: 'tools/GitHubSearchTool.md'
- Code Docs RAG Search: 'tools/CodeDocsSearchTool.md'
- Youtube Video RAG Search: 'tools/YoutubeVideoSearchTool.md'
- Youtube Channel RAG Search: 'tools/YoutubeChannelSearchTool.md'
- Examples:
- Trip Planner Crew: https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner"
- Create Instagram Post: https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post"
@@ -179,4 +179,4 @@ extra:
- icon: fontawesome/brands/twitter
link: https://twitter.com/joaomdmoura
- icon: fontawesome/brands/github
link: https://github.com/joaomdmoura/crewAI

poetry.lock generated (907 changes)

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.22.5"
version = "0.27.0rc1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -24,9 +24,11 @@ opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "^0.5.2"
regex = "^2023.12.25"
crewai-tools = { version = "^0.0.15", optional = true }
crewai-tools = { version = "^0.1.1", optional = true }
click = "^8.1.7"
python-dotenv = "1.0.0"
embedchain = "^0.1.98"
appdirs = "^1.4.4"
agentops = "0.1.0b1"
[tool.poetry.extras]
@@ -35,7 +37,6 @@ tools = ["crewai-tools"]
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
pyright = ">=1.1.350,<2.0.0"
black = {git = "https://github.com/psf/black.git", rev = "stable"}
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
@@ -45,7 +46,7 @@ mkdocs-material = {extras = ["imaging"], version = "^9.5.7"}
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai_tools = "^0.0.15"
crewai_tools = "^0.1.1"
[tool.isort]
profile = "black"


@@ -4,7 +4,6 @@ from typing import Any, Dict, List, Optional, Tuple
from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import tool as LangChainTool
from langchain.memory import ConversationSummaryMemory
from langchain.tools.render import render_text_description
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
@@ -22,6 +21,7 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser, ToolsHandler
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.utilities import I18N, Logger, Prompts, RPMController
from crewai.utilities.token_counter_callback import TokenCalcHandler, TokenProcess
from agentops.agent import track_agent
@@ -70,6 +70,10 @@ class Agent(BaseModel):
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
cache: bool = Field(
default=True,
description="Whether the agent should use a cache for tool usage.",
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent",
default=None,
@@ -96,11 +100,12 @@ class Agent(BaseModel):
agent_executor: InstanceOf[CrewAgentExecutor] = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class."
)
step_callback: Optional[Any] = Field(
default=None,
@@ -120,6 +125,10 @@ class Agent(BaseModel):
default=None, description="Callback to be executed"
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@@ -159,6 +168,8 @@ class Agent(BaseModel):
TokenCalcHandler(self.llm.model_name, self._token_process)
]
if not self.agent_executor:
if not self.cache_handler:
self.cache_handler = CacheHandler()
self.set_cache_handler(self.cache_handler)
return self
@@ -178,7 +189,8 @@ class Agent(BaseModel):
Returns:
Output of the agent
"""
if self.tools_handler:
    self.tools_handler.last_used_tool = {}
task_prompt = task.prompt()
@@ -187,13 +199,24 @@ class Agent(BaseModel):
task=task_prompt, context=context
)
if self.crew and self.memory:
contextual_memory = ContextualMemory(
self.crew._short_term_memory,
self.crew._long_term_memory,
self.crew._entity_memory,
)
memory = contextual_memory.build_context_for_task(task, context)
task_prompt += self.i18n.slice("memory").format(memory=memory)
tools = tools or self.tools
parsed_tools = self._parse_tools(tools)
self.create_agent_executor(tools=tools)
self.agent_executor.tools = parsed_tools
self.agent_executor.task = task
self.agent_executor.tools_description = render_text_description(parsed_tools)
self.agent_executor.tools_names = self.__tools_names(parsed_tools)
result = self.agent_executor.invoke(
{
@@ -214,8 +237,10 @@ class Agent(BaseModel):
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.tools_handler = ToolsHandler()
if self.cache:
    self.cache_handler = cache_handler
    self.tools_handler.cache = cache_handler
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
@@ -248,8 +273,11 @@ class Agent(BaseModel):
executor_args = {
"llm": self.llm,
"i18n": self.i18n,
"crew": self.crew,
"crew_agent": self,
"tools": self._parse_tools(tools),
"verbose": self.verbose,
"original_tools": tools,
"handle_parsing_errors": True,
"max_iterations": self.max_iter,
"step_callback": self.step_callback,
@@ -263,15 +291,7 @@ class Agent(BaseModel):
"request_within_rpm_limit"
] = self._rpm_controller.check_or_wait
prompt = Prompts(i18n=self.i18n, tools=tools).task_execution()
execution_prompt = prompt.partial(
goal=self.goal,
@@ -287,10 +307,17 @@ class Agent(BaseModel):
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
if inputs:
    self.role = self._original_role.format(**inputs)
    self.goal = self._original_goal.format(**inputs)
    self.backstory = self._original_backstory.format(**inputs)
def increment_formatting_errors(self) -> None:
"""Count the formatting errors of the agent."""


@@ -1,3 +1,4 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
@@ -12,17 +13,26 @@ from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.tools_handler import ToolsHandler
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
class CrewAgentExecutor(AgentExecutor):
_i18n: I18N = I18N()
should_ask_for_human_input: bool = False
llm: Any = None
iterations: int = 0
task: Any = None
tools_description: str = ""
tools_names: str = ""
original_tools: List[Any] = []
crew_agent: Any = None
crew: Any = None
function_calling_llm: Any = None
request_within_rpm_limit: Any = None
tools_handler: InstanceOf[ToolsHandler] = None
@@ -41,6 +51,52 @@ class CrewAgentExecutor(AgentExecutor):
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
if (
self.crew_agent.memory
and "Action: Delegate work to co-worker" not in output.log
):
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
self.crew._short_term_memory.save(memory)
def _create_long_term_memory(self, output) -> None:
if self.crew_agent.memory:
ltm_agent = TaskEvaluator(self.crew_agent)
evaluation = ltm_agent.evaluate(self.task, output.log)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.crew_agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": "\n".join(
[f"- {s}" for s in evaluation.suggestions]
),
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join([f"- {r}" for r in entity.relationships]),
)
self.crew._entity_memory.save(entity_memory)
def _call(
self,
inputs: Dict[str, str],
@@ -51,13 +107,18 @@ class CrewAgentExecutor(AgentExecutor):
name_to_tool_map = {tool.name: tool for tool in self.tools}
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
    [tool.name.casefold() for tool in self.tools],
    excluded_colors=["green", "red"],
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
# Allowing human input given task setting
if self.task.human_input:
self.should_ask_for_human_input = True
# Let's start tracking the number of iterations and time elapsed
self.iterations = 0
time_elapsed = 0.0
start_time = time.time()
# We now enter the agent loop (until it returns something).
while self._should_continue(self.iterations, time_elapsed):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
@@ -68,16 +129,21 @@ class CrewAgentExecutor(AgentExecutor):
intermediate_steps,
run_manager=run_manager,
)
if self.step_callback:
self.step_callback(next_step_output)
if isinstance(next_step_output, AgentFinish):
# Creating long term memory
create_long_term_memory = threading.Thread(
target=self._create_long_term_memory, args=(next_step_output,)
)
create_long_term_memory.start()
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
# See if tool should return directly
@@ -86,11 +152,13 @@ class CrewAgentExecutor(AgentExecutor):
return self._return(
tool_return, intermediate_steps, run_manager=run_manager
)
self.iterations += 1
time_elapsed = time.time() - start_time
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return self._return(output, intermediate_steps, run_manager=run_manager)
def _iter_next_step(
@@ -114,6 +182,7 @@ class CrewAgentExecutor(AgentExecutor):
return
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self.agent.plan(
intermediate_steps,
@@ -147,8 +216,10 @@ class CrewAgentExecutor(AgentExecutor):
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, "")
if run_manager:
run_manager.on_agent_action(output, color="green")
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = ExceptionTool().run(
output.tool_input,
@@ -169,19 +240,39 @@ class CrewAgentExecutor(AgentExecutor):
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
yield output
return
if self.should_ask_for_human_input:
# Making sure we only ask for it once, so disabling for the next thought loop
self.should_ask_for_human_input = False
human_feedback = self._ask_human_input(output.return_values["output"])
action = AgentAction(
tool="Human Input", tool_input=human_feedback, log=output.log
)
yield AgentStep(
action=action,
observation=self._i18n.slice("human_feedback").format(
human_feedback=human_feedback
),
)
return
else:
yield output
return
self._create_short_term_memory(output)
actions: List[AgentAction]
actions = [output] if isinstance(output, AgentAction) else output
yield from actions
for agent_action in actions:
if run_manager:
run_manager.on_agent_action(agent_action, color="green")
# Otherwise we lookup the tool
tool_usage = ToolUsage(
tools_handler=self.tools_handler,
tools=self.tools,
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
@@ -193,13 +284,20 @@ class CrewAgentExecutor(AgentExecutor):
if isinstance(tool_calling, ToolUsageErrorException):
observation = tool_calling.message
else:
if tool_calling.tool_name.casefold().strip() in [
    name.casefold().strip() for name in name_to_tool_map
]:
observation = tool_usage.use(tool_calling, agent_action.log)
else:
observation = self._i18n.errors("wrong_tool_name").format(
tool=tool_calling.tool_name,
tools=", ".join([tool.name for tool in self.tools]),
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
yield AgentStep(action=agent_action, observation=observation)
def _ask_human_input(self, final_answer: dict) -> str:
"""Get human input."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
)


@@ -1,7 +1,7 @@
from typing import Any, Optional, Union
from ..tools.cache_tools import CacheTools
from ..tools.tool_calling import InstructorToolCalling, ToolCalling
from .cache.cache_handler import CacheHandler
@@ -11,15 +11,20 @@ class ToolsHandler:
last_used_tool: ToolCalling = {}
cache: CacheHandler
def __init__(self, cache: Optional[CacheHandler] = None):
"""Initialize the callback handler."""
self.cache = cache
self.last_used_tool = {}
def on_tool_use(
    self,
    calling: Union[ToolCalling, InstructorToolCalling],
    output: str,
    should_cache: bool = True,
) -> Any:
"""Run when tool ends running."""
self.last_used_tool = calling
if self.cache and should_cache and calling.tool_name != CacheTools().name:
self.cache.add(
tool=calling.tool_name,
input=calling.arguments,


@@ -1,2 +1,3 @@
.env
.db
__pycache__/


@@ -1,5 +1,8 @@
import json
import subprocess
import sys
import uuid
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from langchain_core.callbacks import BaseCallbackHandler
@@ -18,6 +21,9 @@ from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.process import Process
from crewai.task import Task
from crewai.telemetry import Telemetry
@@ -34,14 +40,17 @@ class Crew(BaseModel):
tasks: List of tasks assigned to the crew.
agents: List of agents part of this crew.
manager_llm: The language model that will run manager agent.
memory: Whether the crew should use memory to store memories of its execution.
manager_callbacks: The callback handlers to be executed by the manager agent when hierarchical process is used
cache: Whether the crew should use a cache to store the results of the tools execution.
function_calling_llm: The language model that will run the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential, hierarchical).
verbose: Indicates the verbosity level for logging during execution.
config: Configuration settings for the crew.
max_rpm: Maximum number of requests per minute for the crew execution to be respected.
id: A unique identifier for the crew instance.
full_output: Whether the crew should return the full output with all tasks outputs or just the final output.
task_callback: Callback to be executed after each task for every agent's execution.
step_callback: Callback to be executed after each step for every agent's execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
"""
@@ -51,11 +60,24 @@ class Crew(BaseModel):
_rpm_controller: RPMController = PrivateAttr()
_logger: Logger = PrivateAttr()
_cache_handler: InstanceOf[CacheHandler] = PrivateAttr(default=CacheHandler())
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
cache: bool = Field(default=True)
model_config = ConfigDict(arbitrary_types_allowed=True)
tasks: List[Task] = Field(default_factory=list)
agents: List[Agent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
verbose: Union[int, bool] = Field(default=0)
memory: bool = Field(
default=True,
description="Whether the crew should use memory to store memories of it's execution",
)
embedder: Optional[dict] = Field(
default={"provider": "openai"},
description="Configuration for the embedder to be used for the crew.",
)
usage_metrics: Optional[dict] = Field(
default=None,
description="Metrics for the LLM usage during all tasks execution.",
@@ -81,6 +103,10 @@ class Crew(BaseModel):
default=None,
description="Callback to be executed after each step for all agents execution.",
)
task_callback: Optional[Any] = Field(
default=None,
description="Callback to be executed after each task for all agents execution.",
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the crew execution to be respected.",
@@ -89,6 +115,10 @@ class Crew(BaseModel):
default="en",
description="Language used for the crew, defaults to English.",
)
language_file: str = Field(
default=None,
description="Path to the language file to be used for the crew.",
)
@field_validator("id", mode="before")
@classmethod
@@ -125,6 +155,19 @@ class Crew(BaseModel):
self._telemetry.crew_creation(self)
return self
@model_validator(mode="after")
def create_crew_memory(self) -> "Crew":
"""Set private attributes."""
if self.memory:
storage_dir = Path(".db")
storage_dir.mkdir(exist_ok=True)
if sys.platform.startswith("win"):
subprocess.call(["attrib", "+H", str(storage_dir)])
self._long_term_memory = LongTermMemory()
self._short_term_memory = ShortTermMemory(embedder_config=self.embedder)
self._entity_memory = EntityMemory(embedder_config=self.embedder)
return self
@model_validator(mode="after")
def check_manager_llm(self):
"""Validates that the language model is set when using hierarchical process."""
@@ -151,7 +194,8 @@ class Crew(BaseModel):
if self.agents:
for agent in self.agents:
agent.set_cache_handler(self._cache_handler)
if self.cache:
agent.set_cache_handler(self._cache_handler)
if self.max_rpm:
agent.set_rpm_controller(self._rpm_controller)
return self
@@ -188,16 +232,20 @@ class Crew(BaseModel):
"""Starts the crew to work on its assigned tasks."""
self._execution_span = self._telemetry.crew_execution_span(self)
self._interpolate_inputs(inputs)
self._set_tasks_callbacks()
i18n = I18N(language=self.language, language_file=self.language_file)
for agent in self.agents:
agent.i18n = I18N(language=self.language)
agent.i18n = i18n
agent.crew = self
if not agent.function_calling_llm:
agent.function_calling_llm = self.function_calling_llm
agent.create_agent_executor()
if not agent.step_callback:
agent.step_callback = self.step_callback
agent.create_agent_executor()
agent.create_agent_executor()
metrics = []
@@ -251,7 +299,7 @@ class Crew(BaseModel):
def _run_hierarchical_process(self) -> str:
"""Creates and assigns a manager agent to make sure the crew completes the tasks."""
i18n = I18N(language=self.language)
i18n = I18N(language=self.language, language_file=self.language_file)
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
@@ -275,6 +323,11 @@ class Crew(BaseModel):
self._finish_execution(task_output)
return self._format_output(task_output), manager._token_process.get_summary()
def _set_tasks_callbacks(self) -> None:
"""Sets callback for every task using task_callback."""
for task in self.tasks:
task.callback = self.task_callback
def _interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolates the inputs in the tasks and agents."""
[task.interpolate_inputs(inputs) for task in self.tasks]
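Pulling these crew-level changes together, a hedged sketch of how the new fields (`memory`, `embedder`, `task_callback`) might be wired up; the role, goal, and task strings are placeholders, not from this commit:

```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Find and summarize the latest AI news",
    backstory="You analyze data and provide insights.",
)
summary = Task(
    description="Summarize the latest AI news",
    expected_output="A bullet-point summary",
    agent=researcher,
)

def on_task_done(output):
    # Illustrative callback: _set_tasks_callbacks attaches it to every task.
    print("task finished:", output)

crew = Crew(
    agents=[researcher],
    tasks=[summary],
    process=Process.sequential,
    memory=True,                      # short-term, long-term and entity memory
    embedder={"provider": "openai"},  # default embedder configuration
    task_callback=on_task_done,
)
result = crew.kickoff()
```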

View File

@@ -0,0 +1,3 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory

View File

View File

@@ -0,0 +1,58 @@
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
class ContextualMemory:
def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
self.stm = stm
self.ltm = ltm
self.em = em
def build_context_for_task(self, task, context) -> str:
"""
Automatically builds a minimal, highly relevant set of contextual information
for a given task.
"""
query = f"{task.description} {context}".strip()
if query == "":
return ""
context = []
context.append(self._fetch_ltm_context(task.description))
context.append(self._fetch_stm_context(query))
context.append(self._fetch_entity_context(query))
return "\n".join(filter(None, context))
def _fetch_stm_context(self, query) -> str:
"""
Fetches recent relevant insights from STM related to the task's description and expected_output,
formatted as bullet points.
"""
stm_results = self.stm.search(query)
formatted_results = "\n".join([f"- {result}" for result in stm_results])
return f"Recent Insights:\n{formatted_results}" if stm_results else ""
def _fetch_ltm_context(self, task) -> str:
"""
Fetches historical data or insights from LTM that are relevant to the task's description and expected_output,
formatted as bullet points.
"""
ltm_results = self.ltm.search(task)
if not ltm_results:
return None
formatted_results = "\n".join(
[f"{result['metadata']['suggestions']}" for result in ltm_results]
)
formatted_results = list(set(formatted_results))
return f"Historical Data:\n{formatted_results}" if ltm_results else ""
def _fetch_entity_context(self, query) -> str:
"""
Fetches relevant entity information from Entity Memory related to the task's description and expected_output,
formatted as bullet points.
"""
em_results = self.em.search(query)
formatted_results = "\n".join(
[f"- {result['context']}" for result in em_results]
)
return f"Entities:\n{formatted_results}" if em_results else ""

View File

View File

@@ -0,0 +1,22 @@
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.rag_storage import RAGStorage
class EntityMemory(Memory):
"""
EntityMemory class for managing structured information about entities
and their relationships using SQLite storage.
Inherits from the Memory class.
"""
def __init__(self, embedder_config=None):
storage = RAGStorage(
type="entities", allow_reset=False, embedder_config=embedder_config
)
super().__init__(storage)
def save(self, item: EntityMemoryItem) -> None:
"""Saves an entity item into the SQLite storage."""
data = f"{item.name}({item.type}): {item.description}"
super().save(data, item.metadata)

View File

@@ -0,0 +1,12 @@
class EntityMemoryItem:
def __init__(
self,
name: str,
type: str,
description: str,
relationships: str,
):
self.name = name
self.type = type
self.description = description
self.metadata = {"relationships": relationships}

View File

View File

@@ -0,0 +1,32 @@
from typing import Any, Dict, List, Optional
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
class LongTermMemory(Memory):
"""
LongTermMemory class for managing cross runs data related to overall crew's
execution and performance.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
LongTermMemoryItem instances.
"""
def __init__(self):
storage = LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None:
metadata = item.metadata
metadata.update({"agent": item.agent, "expected_output": item.expected_output})
self.storage.save(
task_description=item.task,
score=metadata["quality"],
metadata=metadata,
datetime=item.datetime,
)
def search(self, task: str) -> Optional[List[Dict[str, Any]]]:
return self.storage.load(task)
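A short save/search round trip under the code above; note that `save()` reads `metadata["quality"]`, so the example carries it in the metadata as well (all values illustrative):

```python
from crewai.memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem

item = LongTermMemoryItem(
    agent="Researcher",
    task="Summarize the latest AI news",
    expected_output="A bullet-point summary",
    datetime="2024-04-01T12:00:00",
    quality=8,
    metadata={"quality": 8, "suggestions": ["Cite your sources"]},
)
ltm = LongTermMemory()
ltm.save(item)
results = ltm.search("Summarize the latest AI news")  # delegates to storage.load()
```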

View File

@@ -0,0 +1,19 @@
from typing import Any, Dict, Union
class LongTermMemoryItem:
def __init__(
self,
agent: str,
task: str,
expected_output: str,
datetime: str,
quality: Union[int, float] = None,
metadata: Dict[str, Any] = None,
):
self.task = task
self.agent = agent
self.quality = quality
self.datetime = datetime
self.expected_output = expected_output
self.metadata = metadata if metadata is not None else {}

View File

@@ -0,0 +1,23 @@
from typing import Any, Dict
from crewai.memory.storage.interface import Storage
class Memory:
"""
Base class for memory, now supporting agent tags and generic metadata.
"""
def __init__(self, storage: Storage):
self.storage = storage
def save(
self, value: Any, metadata: Dict[str, Any] = None, agent: str = None
) -> None:
metadata = metadata or {}
if agent:
metadata["agent"] = agent
self.storage.save(value, metadata)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)

View File

View File

@@ -0,0 +1,23 @@
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
class ShortTermMemory(Memory):
"""
ShortTermMemory class for managing transient data related to immediate tasks
and interactions.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
MemoryItem instances.
"""
def __init__(self, embedder_config=None):
storage = RAGStorage(type="short_term", embedder_config=embedder_config)
super().__init__(storage)
def save(self, item: ShortTermMemoryItem) -> None:
super().save(item.data, item.metadata, item.agent)
def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold)

View File

@@ -0,0 +1,8 @@
from typing import Any, Dict
class ShortTermMemoryItem:
def __init__(self, data: Any, agent: str, metadata: Dict[str, Any] = None):
self.data = data
self.agent = agent
self.metadata = metadata if metadata is not None else {}

View File

@@ -0,0 +1,11 @@
from typing import Any, Dict
class Storage:
"""Abstract base class defining the storage interface"""
def save(self, key: str, value: Any, metadata: Dict[str, Any]) -> None:
pass
def search(self, key: str) -> Dict[str, Any]:
pass

View File

@@ -0,0 +1,101 @@
import json
import sqlite3
from typing import Any, Dict, List, Optional, Union
from crewai.utilities import Printer
from crewai.utilities.paths import db_storage_path
class LTMSQLiteStorage:
"""
An updated SQLite storage class for LTM data storage.
"""
def __init__(self, db_path=f"{db_storage_path()}/long_term_memory_storage.db"):
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
def _initialize_db(self):
"""
Initializes the SQLite database and creates LTM table
"""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS long_term_memories (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_description TEXT,
metadata TEXT,
datetime TEXT,
score REAL
)
"""
)
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred during database initialization: {e}",
color="red",
)
def save(
self,
task_description: str,
metadata: Dict[str, Any],
datetime: str,
score: Union[int, float],
) -> None:
"""Saves data to the LTM table with error handling."""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
INSERT INTO long_term_memories (task_description, metadata, datetime, score)
VALUES (?, ?, ?, ?)
""",
(task_description, json.dumps(metadata), datetime, score),
)
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while saving to LTM: {e}",
color="red",
)
def load(self, task_description: str) -> Optional[List[Dict[str, Any]]]:
"""Queries the LTM table by task description with error handling."""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
SELECT metadata, datetime, score
FROM long_term_memories
WHERE task_description = ?
ORDER BY datetime DESC, score ASC
LIMIT 2
""",
(task_description,),
)
rows = cursor.fetchall()
if rows:
return [
{
"metadata": json.loads(row[0]),
"datetime": row[1],
"score": row[2],
}
for row in rows
]
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while querying LTM: {e}",
color="red",
)
return None
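A usage sketch against the signatures above (values illustrative; the database file lands under the `appdirs` data directory via `db_storage_path()`):

```python
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

storage = LTMSQLiteStorage()
storage.save(
    task_description="Summarize the latest AI news",
    metadata={"quality": 7, "suggestions": ["Add publication dates"]},
    datetime="2024-04-01T12:00:00",
    score=7,
)
rows = storage.load("Summarize the latest AI news")  # at most 2 rows, newest first
```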

View File

@@ -0,0 +1,88 @@
import contextlib
import io
import logging
from typing import Any, Dict
from embedchain import App
from embedchain.llm.base import BaseLlm
from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
@contextlib.contextmanager
def suppress_logging(
logger_name="chromadb.segment.impl.vector.local_persistent_hnsw",
level=logging.ERROR,
):
logger = logging.getLogger(logger_name)
original_level = logger.getEffectiveLevel()
logger.setLevel(level)
with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(
io.StringIO()
), contextlib.suppress(UserWarning):
yield
logger.setLevel(original_level)
class FakeLLM(BaseLlm):
pass
class RAGStorage(Storage):
"""
Extends Storage to handle embeddings for memory entries, improving
search efficiency.
"""
def __init__(self, type, allow_reset=True, embedder_config=None):
super().__init__()
config = {
"app": {
"config": {"name": type, "collect_metrics": False, "log_level": "ERROR"}
},
"chunker": {
"chunk_size": 5000,
"chunk_overlap": 100,
"length_function": "len",
"min_chunk_size": 150,
},
"vectordb": {
"provider": "chroma",
"config": {
"collection_name": type,
"dir": f"{db_storage_path()}/{type}",
"allow_reset": allow_reset,
},
},
}
if embedder_config:
config["embedder"] = embedder_config
self.app = App.from_config(config=config)
self.app.llm = FakeLLM()
if allow_reset:
self.app.reset()
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
self._generate_embedding(value, metadata)
def search(
self,
query: str,
limit: int = 3,
filter: dict = None,
score_threshold: float = 0.35,
) -> Dict[str, Any]:
with suppress_logging():
results = (
self.app.search(query, limit, where=filter)
if filter
else self.app.search(query, limit)
)
return [r for r in results if r["metadata"]["score"] >= score_threshold]
def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> Any:
with suppress_logging():
self.app.add(text, data_type="text", metadata=metadata)
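A hedged usage sketch: constructing `RAGStorage` boots an embedchain app (and, with the default OpenAI embedder, needs API credentials), so this is illustrative rather than self-contained:

```python
from crewai.memory.storage.rag_storage import RAGStorage

storage = RAGStorage(type="short_term", embedder_config={"provider": "openai"})
storage.save("CrewAI crews can now share memory", {"agent": "Researcher"})
hits = storage.search("shared memory", limit=3, score_threshold=0.35)
```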

View File

@@ -24,6 +24,7 @@ class Task(BaseModel):
delegations: int = 0
i18n: I18N = I18N()
thread: threading.Thread = None
prompt_context: Optional[str] = None
description: str = Field(description="Description of the actual task.")
expected_output: str = Field(
description="Clear definition of expected output for the task."
@@ -70,6 +71,13 @@ class Task(BaseModel):
frozen=True,
description="Unique identifier for the object, not set by user.",
)
human_input: Optional[bool] = Field(
description="Whether the task should have a human review the final answer of the agent",
default=False,
)
_original_description: str | None = None
_original_expected_output: str | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
@@ -137,6 +145,7 @@ class Task(BaseModel):
context.append(task.output.raw_output)
context = "\n".join(context)
self.prompt_context = context
tools = tools or self.tools
if self.async_execution:
@@ -189,9 +198,14 @@ class Task(BaseModel):
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the task description and expected output."""
if self._original_description is None:
self._original_description = self.description
if self._original_expected_output is None:
self._original_expected_output = self.expected_output
if inputs:
self.description = self.description.format(**inputs)
self.expected_output = self.expected_output.format(**inputs)
self.description = self._original_description.format(**inputs)
self.expected_output = self._original_expected_output.format(**inputs)
def increment_tools_errors(self) -> None:
"""Increment the tools errors counter."""

View File

@@ -0,0 +1,46 @@
-----BEGIN CERTIFICATE-----
MIIDqDCCAy6gAwIBAgIRAPNkTmtuAFAjfglGvXvh9R0wCgYIKoZIzj0EAwMwgYgx
CzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpOZXcgSmVyc2V5MRQwEgYDVQQHEwtKZXJz
ZXkgQ2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMS4wLAYDVQQD
EyVVU0VSVHJ1c3QgRUNDIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTE4MTEw
MjAwMDAwMFoXDTMwMTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAkdCMRswGQYDVQQI
ExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGDAWBgNVBAoT
D1NlY3RpZ28gTGltaXRlZDE3MDUGA1UEAxMuU2VjdGlnbyBFQ0MgRG9tYWluIFZh
bGlkYXRpb24gU2VjdXJlIFNlcnZlciBDQTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABHkYk8qfbZ5sVwAjBTcLXw9YWsTef1Wj6R7W2SUKiKAgSh16TwUwimNJE4xk
IQeV/To14UrOkPAY9z2vaKb71EijggFuMIIBajAfBgNVHSMEGDAWgBQ64QmG1M8Z
wpZ2dEl23OA1xmNjmjAdBgNVHQ4EFgQU9oUKOxGG4QR9DqoLLNLuzGR7e64wDgYD
VR0PAQH/BAQDAgGGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0lBBYwFAYIKwYB
BQUHAwEGCCsGAQUFBwMCMBsGA1UdIAQUMBIwBgYEVR0gADAIBgZngQwBAgEwUAYD
VR0fBEkwRzBFoEOgQYY/aHR0cDovL2NybC51c2VydHJ1c3QuY29tL1VTRVJUcnVz
dEVDQ0NlcnRpZmljYXRpb25BdXRob3JpdHkuY3JsMHYGCCsGAQUFBwEBBGowaDA/
BggrBgEFBQcwAoYzaHR0cDovL2NydC51c2VydHJ1c3QuY29tL1VTRVJUcnVzdEVD
Q0FkZFRydXN0Q0EuY3J0MCUGCCsGAQUFBzABhhlodHRwOi8vb2NzcC51c2VydHJ1
c3QuY29tMAoGCCqGSM49BAMDA2gAMGUCMEvnx3FcsVwJbZpCYF9z6fDWJtS1UVRs
cS0chWBNKPFNpvDKdrdKRe+oAkr2jU+ubgIxAODheSr2XhcA7oz9HmedGdMhlrd9
4ToKFbZl+/OnFFzqnvOhcjHvClECEQcKmc8fmA==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIID0zCCArugAwIBAgIQVmcdBOpPmUxvEIFHWdJ1lDANBgkqhkiG9w0BAQwFADB7
MQswCQYDVQQGEwJHQjEbMBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYD
VQQHDAdTYWxmb3JkMRowGAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UE
AwwYQUFBIENlcnRpZmljYXRlIFNlcnZpY2VzMB4XDTE5MDMxMjAwMDAwMFoXDTI4
MTIzMTIzNTk1OVowgYgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpOZXcgSmVyc2V5
MRQwEgYDVQQHEwtKZXJzZXkgQ2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBO
ZXR3b3JrMS4wLAYDVQQDEyVVU0VSVHJ1c3QgRUNDIENlcnRpZmljYXRpb24gQXV0
aG9yaXR5MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEGqxUWqn5aCPnetUkb1PGWthL
q8bVttHmc3Gu3ZzWDGH926CJA7gFFOxXzu5dP+Ihs8731Ip54KODfi2X0GHE8Znc
JZFjq38wo7Rw4sehM5zzvy5cU7Ffs30yf4o043l5o4HyMIHvMB8GA1UdIwQYMBaA
FKARCiM+lvEH7OKvKe+CpX/QMKS0MB0GA1UdDgQWBBQ64QmG1M8ZwpZ2dEl23OA1
xmNjmjAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zARBgNVHSAECjAI
MAYGBFUdIAAwQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5j
b20vQUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNAYIKwYBBQUHAQEEKDAmMCQG
CCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21vZG9jYS5jb20wDQYJKoZIhvcNAQEM
BQADggEBABns652JLCALBIAdGN5CmXKZFjK9Dpx1WywV4ilAbe7/ctvbq5AfjJXy
ij0IckKJUAfiORVsAYfZFhr1wHUrxeZWEQff2Ji8fJ8ZOd+LygBkc7xGEJuTI42+
FsMuCIKchjN0djsoTI0DQoWz4rIjQtUfenVqGtF8qmchxDM6OW1TyaLtYiKou+JV
bJlsQ2uRl9EMC5MCHdK8aXdJ5htN978UeAOwproLtOGFfy/cQjutdAFI3tZs4RmY
CV4Ks2dH/hzg1cEo70qLRDEmBDeNiXQ2Lu+lIg+DdEmSx/cQwgwp+7e9un/jX9Wf
8qn0dNW44bOwgeThpWOjzOoEeJBuv/c=
-----END CERTIFICATE-----

View File

@@ -1,3 +1,5 @@
import asyncio
import importlib.resources
import json
import os
import platform
@@ -40,25 +42,40 @@ class Telemetry:
def __init__(self):
self.ready = False
try:
telemetry_endpoint = "http://telemetry.crewai.com:4318"
telemetry_endpoint = "https://telemetry.crewai.com:4319"
self.resource = Resource(
attributes={SERVICE_NAME: "crewAI-telemetry"},
)
self.provider = TracerProvider(resource=self.resource)
processor = BatchSpanProcessor(
OTLPSpanExporter(endpoint=f"{telemetry_endpoint}/v1/traces", timeout=15)
cert_file = importlib.resources.files("crewai.telemetry").joinpath(
"STAR_crewai_com_bundle.pem"
)
processor = BatchSpanProcessor(
OTLPSpanExporter(
endpoint=f"{telemetry_endpoint}/v1/traces",
certificate_file=cert_file,
timeout=30,
)
)
self.provider.add_span_processor(processor)
self.ready = True
except Exception:
pass
except BaseException as e:
if isinstance(
e,
(SystemExit, KeyboardInterrupt, GeneratorExit, asyncio.CancelledError),
):
raise # Re-raise the exception to not interfere with system signals
self.ready = False
def set_tracer(self):
if self.ready:
try:
trace.set_tracer_provider(self.provider)
except Exception:
pass
provider = trace.get_tracer_provider()
if provider is None:
try:
trace.set_tracer_provider(self.provider)
except Exception:
self.ready = False
def crew_creation(self, crew):
"""Records the creation of a crew."""
@@ -92,7 +109,9 @@ class Telemetry:
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [tool.name for tool in agent.tools],
"tools_names": [
tool.name.casefold() for tool in agent.tools
],
}
for agent in crew.agents
]
@@ -107,7 +126,9 @@ class Telemetry:
"id": str(task.id),
"async_execution?": task.async_execution,
"agent_role": task.agent.role if task.agent else "None",
"tools_names": [tool.name for tool in task.tools],
"tools_names": [
tool.name.casefold() for tool in task.tools
],
}
for task in crew.tasks
]
@@ -195,7 +216,9 @@ class Telemetry:
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [tool.name for tool in agent.tools],
"tools_names": [
tool.name.casefold() for tool in agent.tools
],
}
for agent in crew.agents
]
@@ -215,7 +238,9 @@ class Telemetry:
"context": [task.description for task in task.context]
if task.context
else "None",
"tools_names": [tool.name for tool in task.tools],
"tools_names": [
tool.name.casefold() for tool in task.tools
],
}
for task in crew.tasks
]

View File

@@ -15,22 +15,23 @@ class AgentTools(BaseModel):
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
def tools(self):
return [
tools = [
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to co-worker",
description=self.i18n.tools("delegate_work").format(
coworkers=[f"{agent.role}" for agent in self.agents]
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
),
),
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to co-worker",
description=self.i18n.tools("ask_question").format(
coworkers=[f"{agent.role}" for agent in self.agents]
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
),
),
]
return tools
def delegate_work(self, coworker: str, task: str, context: str):
"""Useful to delegate a specific task to a coworker passing all necessary context and names."""
@@ -46,16 +47,20 @@ class AgentTools(BaseModel):
agent = [
available_agent
for available_agent in self.agents
if available_agent.role.strip().lower() == agent.strip().lower()
if available_agent.role.casefold().strip() == agent.casefold().strip()
]
except:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
agent = agent[0]

View File

@@ -30,6 +30,7 @@ class ToolUsage:
task: Task being executed.
tools_handler: Tools handler that will manage the tool usage.
tools: List of tools available for the agent.
original_tools: Original tools available for the agent before being converted to BaseTool.
tools_description: Description of the tools available for the agent.
tools_names: Names of the tools available for the agent.
function_calling_llm: Language model to be used for the tool usage.
@@ -39,6 +40,7 @@ class ToolUsage:
self,
tools_handler: ToolsHandler,
tools: List[BaseTool],
original_tools: List[Any],
tools_description: str,
tools_names: str,
task: Any,
@@ -54,6 +56,7 @@ class ToolUsage:
self.tools_description = tools_description
self.tools_names = tools_names
self.tools_handler = tools_handler
self.original_tools = original_tools
self.tools = tools
self.task = task
self.action = action
@@ -112,9 +115,12 @@ class ToolUsage:
except Exception:
self.task.increment_tools_errors()
result = self.tools_handler.cache.read(
tool=calling.tool_name, input=calling.arguments
)
result = None
if self.tools_handler.cache:
result = self.tools_handler.cache.read(
tool=calling.tool_name, input=calling.arguments
)
if not result:
try:
@@ -159,7 +165,22 @@ class ToolUsage:
agentops.record(agentops.ErrorEvent(details=e, trigger_event=tool_event))
return self.use(calling=calling, tool_string=tool_string)
self.tools_handler.on_tool_use(calling=calling, output=result)
if self.tools_handler:
should_cache = True
original_tool = next(
(ot for ot in self.original_tools if ot.name == tool.name), None
)
if (
hasattr(original_tool, "cache_function")
and original_tool.cache_function
):
should_cache = original_tool.cache_function(
calling.arguments, result
)
self.tools_handler.on_tool_use(
calling=calling, output=result, should_cache=should_cache
)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
agentops.record(tool_event)
@@ -190,6 +211,8 @@ class ToolUsage:
def _check_tool_repeated_usage(
self, calling: Union[ToolCalling, InstructorToolCalling]
) -> None:
if not self.tools_handler:
return False
if last_tool_usage := self.tools_handler.last_used_tool:
return (calling.tool_name == last_tool_usage.tool_name) and (
calling.arguments == last_tool_usage.arguments
@@ -247,12 +270,12 @@ class ToolUsage:
model=model,
instructions=dedent(
"""\
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
Example:
{"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
Example:
{"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
),
max_attemps=1,
)

View File

@@ -1,27 +0,0 @@
{
"hierarchical_manager_agent": {
"role": "Crew Manager",
"goal": "Manage your team to complete the task in the best way possible.",
"backstory": "You're an experienced manager, skilled in getting the best out of your team.\nYou are also known for your ability to delegate work to the right people, and to ask the right questions to get the best out of your team.\nEven though you don't perform tasks by yourself, you have a lot of experience in the field, which allows you to properly evaluate the work of your team members."
},
"slices": {
"observation": "\nObservation",
"task": "Begin! This is VERY important to you, your job depends on it!\n\nCurrent Task: {input}",
"memory": "This is the summary of your work so far:\n{chat_history}",
"role_playing": "You are {role}.\n{backstory}\n\nYour personal goal is: {goal}",
"tools": "TOOLS:\n------\nYou have access ONLY to the following tools:\n\n{tools}\n\nTo use a tool, use exactly the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the tool you want to use, should be one of [{tool_names}], just the name.\nAction Input: any and all relevant information and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen you have a response for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]```",
"task_with_context": "{task}\nThis is the context you are working with:\n{context}",
"expected_output": "Your final answer must be: {expected_output}"
},
"errors": {
"force_final_answer": "Actually, I used too many tools, so I'll stop now and give you my absolute BEST final answer NOW, using the expected format: ```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]```",
"agent_tool_unexsiting_coworker": "\nError executing the tool. The co-worker mentioned in the Action Input was not found; it must be one of the following options:\n{coworkers}.\n",
"task_repeated_usage": "I just used the {tool} tool with input {tool_input}. So I already know the result and must stop using it in a row with the same input.\nI could give my final answer if I'm ready, using exactly the expected format below:\n\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]\n",
"tool_usage_error": "It seems we encountered an unexpected error while trying to use the tool.",
"tool_usage_exception": "It seems we encountered an unexpected error while trying to use the tool. This was the error: {error}"
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following co-workers:\n{coworkers}.\nThe input to this tool should be the role of the co-worker, the task you want them to do, and ALL the necessary context to execute the task; they know nothing about the task, so share absolutely everything you know, don't reference things, explain them.",
"ask_question": "Ask a specific question to one of the following co-workers:\n{coworkers}.\nThe input to this tool should be the role of the co-worker, the question you have for them, and ALL the necessary context to ask the question properly; they know nothing about the question, so share absolutely everything you know, don't reference things, explain them."
}
}

View File

@@ -6,16 +6,18 @@
},
"slices": {
"observation": "\nObservation",
"task": "\n\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought: ",
"memory": "This is the summary of your work so far:\n{chat_history}",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought: ",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple a python dictionary using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\n\nThought: ",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple a python dictionary using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n ",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output} \n you MUST return the actual complete content as the final answer, not a summary."
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output} \n you MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-avaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent final answer: {final_answer}\nPlease provide a feedback: "
},
"errors": {
"unexpected_format": "\nSorry, I didn't use the expected format, I MUST either use a tool (use one at time) OR give my best final answer.\n",

View File

@@ -0,0 +1,61 @@
from typing import List
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from crewai.utilities import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class Entity(BaseModel):
name: str = Field(description="The name of the entity.")
type: str = Field(description="The type of the entity.")
description: str = Field(description="Description of the entity.")
relationships: List[str] = Field(description="Relationships of the entity.")
class TaskEvaluation(BaseModel):
suggestions: List[str] = Field(
description="Suggestions to improve future similar tasks."
)
quality: float = Field(
description="A score from 0 to 10 evaluating on completion, quality, and overall performance, all taking into account the task description, expected output, and the result of the task."
)
entities: List[Entity] = Field(
description="Entities extracted from the task output."
)
class TaskEvaluator:
def __init__(self, original_agent):
self.llm = original_agent.llm
def evaluate(self, task, output) -> TaskEvaluation:
evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
f"Task Description:\n{task.description}\n\n"
f"Expected Output:\n{task.expected_output}\n\n"
f"Actual Output:\n{ouput}\n\n"
"Please provide:\n"
"- Bullet points suggestions to improve future similar tasks\n"
"- A score from 0 to 10 evaluating on completion, quality, and overall performance"
"- Entities extracted from the task output, if any, their type, description, and relationships"
)
instructions = "I'm gonna convert this raw text into valid JSON."
if not self._is_gpt(self.llm):
model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
converter = Converter(
llm=self.llm,
text=evaluation_query,
model=TaskEvaluation,
instructions=instructions,
)
return converter.to_pydantic()
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
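A usage sketch, assuming `agent`, `task`, and `output` already exist from a finished execution:

```python
evaluation = TaskEvaluator(original_agent=agent).evaluate(task, output)
print(evaluation.quality)           # 0-10 score
print(evaluation.suggestions)       # improvement ideas for similar tasks
for entity in evaluation.entities:  # extracted entities, if any
    print(entity.name, entity.type, entity.relationships)
```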

View File

@@ -7,6 +7,10 @@ from pydantic import BaseModel, Field, PrivateAttr, ValidationError, model_valid
class I18N(BaseModel):
_translations: Dict[str, Dict[str, str]] = PrivateAttr()
language_file: Optional[str] = Field(
default=None,
description="Path to the translation file to load",
)
language: Optional[str] = Field(
default="en",
description="Language used to load translations",
@@ -16,13 +20,17 @@ class I18N(BaseModel):
def load_translation(self) -> "I18N":
"""Load translations from a JSON file based on the specified language."""
try:
dir_path = os.path.dirname(os.path.realpath(__file__))
prompts_path = os.path.join(
dir_path, f"../translations/{self.language}.json"
)
if self.language_file:
with open(self.language_file, "r") as f:
self._translations = json.load(f)
else:
dir_path = os.path.dirname(os.path.realpath(__file__))
prompts_path = os.path.join(
dir_path, f"../translations/{self.language}.json"
)
with open(prompts_path, "r") as f:
self._translations = json.load(f)
with open(prompts_path, "r") as f:
self._translations = json.load(f)
except FileNotFoundError:
raise ValidationError(
f"Translation file for language '{self.language}' not found."

View File

@@ -0,0 +1,12 @@
from pathlib import Path
import appdirs
def db_storage_path():
app_name = "crewai"
app_author = "CrewAI"
data_dir = Path(appdirs.user_data_dir(app_name, app_author))
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
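For reference, the exact directory varies by OS via `appdirs` (typically `~/.local/share/crewai` on Linux, `~/Library/Application Support/crewai` on macOS):

```python
from crewai.utilities.paths import db_storage_path

print(db_storage_path())  # created on first call if missing
```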

View File

@@ -13,16 +13,6 @@ class Prompts(BaseModel):
tools: list[Any] = Field(default=[])
SCRATCHPAD_SLICE: ClassVar[str] = "\n{agent_scratchpad}"
def task_execution_with_memory(self) -> BasePromptTemplate:
"""Generate a prompt for task execution with memory components."""
slices = ["role_playing"]
if len(self.tools) > 0:
slices.append("tools")
else:
slices.append("no_tools")
slices.extend(["memory", "task"])
return self._build_prompt(slices)
def task_execution_without_tools(self) -> BasePromptTemplate:
"""Generate a prompt for task execution without tools components."""
return self._build_prompt(["role_playing", "task"])

View File

@@ -48,36 +48,6 @@ def test_custom_llm():
assert agent.llm.temperature == 0
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_without_memory():
no_memory_agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
memory=False,
llm=ChatOpenAI(temperature=0, model="gpt-4"),
)
memory_agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
memory=True,
llm=ChatOpenAI(temperature=0, model="gpt-4"),
)
task = Task(
description="How much is 1 + 1?",
agent=no_memory_agent,
expected_output="the result of the math operation.",
)
result = no_memory_agent.execute_task(task)
assert result == "The result of the math operation 1 + 1 is 2."
assert no_memory_agent.agent_executor.memory is None
assert memory_agent.agent_executor.memory is not None
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execution():
agent = Agent(
@@ -208,7 +178,7 @@ def test_cache_hitting():
with patch.object(CacheHandler, "read") as read:
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool.",
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
agent=agent,
expected_output="The result of the multiplication.",
)
@@ -219,6 +189,70 @@ def test_cache_hitting():
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_disabling_cache_for_agent():
@tool
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
cache_handler = CacheHandler()
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
tools=[multiplier],
allow_delegation=False,
cache_handler=cache_handler,
cache=False,
verbose=True,
)
task1 = Task(
description="What is 2 times 6?",
agent=agent,
expected_output="The result of the multiplication.",
)
task2 = Task(
description="What is 3 times 3?",
agent=agent,
expected_output="The result of the multiplication.",
)
output = agent.execute_task(task1)
output = agent.execute_task(task2)
assert cache_handler._cache != {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
}
task = Task(
description="What is 2 times 6 times 3? Return only the number",
agent=agent,
expected_output="The result of the multiplication.",
)
output = agent.execute_task(task)
assert output == "36"
assert cache_handler._cache != {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
"multiplier-{'first_number': 12, 'second_number': 3}": 36,
}
with patch.object(CacheHandler, "read") as read:
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool.",
agent=agent,
expected_output="The result of the multiplication.",
)
output = agent.execute_task(task)
assert output == "12"
read.assert_not_called()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execution_with_specific_tools():
@tool
@@ -285,14 +319,14 @@ def test_agent_repeated_tool_usage(capsys):
goal="test goal",
backstory="test backstory",
max_iter=4,
llm=ChatOpenAI(model="gpt-4-0125-preview"),
llm=ChatOpenAI(model="gpt-4"),
allow_delegation=False,
verbose=True,
)
task = Task(
description="The final answer is 42. But don't give it until I tell you so, instead keep using the `get_final_answer` tool.",
expected_output="The final answer",
expected_output="The final answer, don't give it until I tell you so",
)
# force cleaning cache
agent.tools_handler.cache = CacheHandler()
@@ -303,7 +337,46 @@ def test_agent_repeated_tool_usage(capsys):
captured = capsys.readouterr()
assert "The final answer is 42." in captured.out
assert (
"I tried reusing the same input, I must stop using this action input. I'll try something else instead."
in captured.out
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
@tool
def get_final_answer(anything: str) -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
max_iter=4,
llm=ChatOpenAI(model="gpt-4"),
allow_delegation=False,
verbose=True,
cache=False,
)
task = Task(
description="The final answer is 42. But don't give it until I tell you so, instead keep using the `get_final_answer` tool.",
expected_output="The final answer, don't give it until I tell you so",
)
agent.execute_task(
task=task,
tools=[get_final_answer],
)
captured = capsys.readouterr()
assert (
"I tried reusing the same input, I must stop using this action input. I'll try something else instead."
in captured.out
)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -680,3 +753,47 @@ def test_agent_definition_based_on_dict():
assert agent.backstory == "test backstory"
assert agent.verbose == True
assert agent.tools == []
# test for human input
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_human_input():
from unittest.mock import patch
config = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
}
agent = Agent(config=config)
task = Task(
agent=agent,
description="Say the word: Hi",
expected_output="The word: Hi",
human_input=True,
)
with patch.object(CrewAgentExecutor, "_ask_human_input") as mock_human_input:
mock_human_input.return_value = "Hello"
output = agent.execute_task(task)
mock_human_input.assert_called_once()
assert output == "Hello"
def test_interpolate_inputs():
agent = Agent(
role="{topic} specialist",
goal="Figure {goal} out",
backstory="I am the master of {role}",
)
agent.interpolate_inputs({"topic": "AI", "goal": "life", "role": "all things"})
assert agent.role == "AI specialist"
assert agent.goal == "Figure life out"
assert agent.backstory == "I am the master of all things"
agent.interpolate_inputs({"topic": "Sales", "goal": "stuff", "role": "nothing"})
assert agent.role == "Sales specialist"
assert agent.goal == "Figure stuff out"
assert agent.backstory == "I am the master of nothing"

View File

@@ -0,0 +1,381 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role. test backstory\nYour
personal goal is: test goalTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: my best complete final answer to the task.\nYour final answer must be
the great and the most complete as possible, it must be outcome described.\n\nI
MUST use these formats, my job depends on it!\n\nThought: \n\nCurrent Task:
Say the word: Hi\n\nThis is the expect criteria for your final answer: The word:
Hi \n you MUST return the actual complete content as the final answer, not a
summary.\n\nBegin! This is VERY important to you, use the tools available and
give your best Final Answer, your job depends on it!\n\nThought: \n"}], "model":
"gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '871'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.13.3
x-stainless-arch:
- other:amd64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Windows
x-stainless-package-version:
- 1.13.3
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.10.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Hi"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOpci8MKSrJuksy3er9xP4VPtt3","object":"chat.completion.chunk","created":1711975799,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 86d8b3c50a3900fe-GRU
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Mon, 01 Apr 2024 12:49:59 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=SKbA6Tkvgm70ubrePPCA0E3p7jCx6I0hvl21nUYJZfs-1711975799-1.0.1.1-vpC0CATlcE3u.X_XqVu7m5uBcvIfSZLza9_rT63hkxowZaPpgUjsvUXJODkXHzs99U.JWBrunylSpl3oOOvOPA;
path=/; expires=Mon, 01-Apr-24 13:19:59 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=uA0B4Boqw_bNZvnzP5HAJVToe90yw8F9rVggJtXKT_4-1711975799706-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '240'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299804'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 39ms
x-request-id:
- req_1612b410e539e01c9e167a201fe5932d
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role. test backstory\nYour
personal goal is: test goalTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: my best complete final answer to the task.\nYour final answer must be
the great and the most complete as possible, it must be outcome described.\n\nI
MUST use these formats, my job depends on it!\n\nThought: \n\nCurrent Task:
Say the word: Hi\n\nThis is the expect criteria for your final answer: The word:
Hi \n you MUST return the actual complete content as the final answer, not a
summary.\n\nBegin! This is VERY important to you, use the tools available and
give your best Final Answer, your job depends on it!\n\nThought: \nI now can
give a great answer\n\nFinal Answer: \nHi\nObservation: You got human feedback
on your work, re-avaluate it and give a new Final Answer when ready.\n Hello\n"}],
"model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '1038'
content-type:
- application/json
cookie:
- __cf_bm=SKbA6Tkvgm70ubrePPCA0E3p7jCx6I0hvl21nUYJZfs-1711975799-1.0.1.1-vpC0CATlcE3u.X_XqVu7m5uBcvIfSZLza9_rT63hkxowZaPpgUjsvUXJODkXHzs99U.JWBrunylSpl3oOOvOPA;
_cfuvid=uA0B4Boqw_bNZvnzP5HAJVToe90yw8F9rVggJtXKT_4-1711975799706-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.13.3
x-stainless-arch:
- other:amd64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Windows
x-stainless-package-version:
- 1.13.3
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.10.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
feedback"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
received"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
indicates"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
my"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
initial"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
was"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
incorrect"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Evalu"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"ating"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
it"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
it"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
seems"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
change"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
my"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
response"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":".\n\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-99BOq7DS8ZMoSHlojbgXCOiT2imcY","object":"chat.completion.chunk","created":1711975800,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 86d8b3cfaf4200fe-GRU
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Mon, 01 Apr 2024 12:50:00 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '269'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299763'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 47ms
x-request-id:
- req_58f19f788ea39618601b15c4a9ea5bdd
status:
code: 200
message: OK
version: 1

File diff suppressed because it is too large


@@ -0,0 +1,887 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role. test backstory\nYour
personal goal is: test goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nget_final_answer:
get_final_answer(anything: str) -> float - Get the final answer but don''t give
it yet, just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple a python dictionary using \" to
wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n\n\nCurrent Task: The final
answer is 42. But don''t give it until I tell you so, instead keep using the
`get_final_answer` tool.\n\nThis is the expect criteria for your final answer:
The final answer, don''t give it until I tell you so \n you MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought: \n"}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '1398'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.13.3
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.13.3
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
task"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
find"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
using"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
already"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
given"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
as"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
but"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
task"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
requires"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
me"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
keep"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
using"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
until"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
instructed"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":".\n\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
{\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"anything"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\"}"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7nHp7Rqq1aPh9cyfo16KJBehh1","object":"chat.completion.chunk","created":1710871515,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 866f63bafbf9a559-GRU
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 19 Mar 2024 18:05:16 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=ccGSKYzVR4Gjc0t3AwEWhsiSXeuSBVYOKwXpSW4B5Es-1710871516-1.0.1.1-fF4jEG6Af3PBL3N2jNeglY8CtfZq4GAHXCCpvnDcD6GhBk8KGbBK3uQ_8dNrR4vt3_YnwWx0x2Vy0ttedMrinw;
path=/; expires=Tue, 19-Mar-24 18:35:16 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=w7JjAWBqi9fg29.9ByVvw9AMB_L3j9jvHZkVynxBKLk-1710871516141-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '264'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299675'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 65ms
x-request-id:
- req_8fb6821861a228009f904c4fc1818fcf
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role. test backstory\nYour
personal goal is: test goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nget_final_answer:
get_final_answer(anything: str) -> float - Get the final answer but don''t give
it yet, just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple a python dictionary using \" to
wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n\n\nCurrent Task: The final
answer is 42. But don''t give it until I tell you so, instead keep using the
`get_final_answer` tool.\n\nThis is the expect criteria for your final answer:
The final answer, don''t give it until I tell you so \n you MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought: \nThe task is to find the final answer using
the `get_final_answer` tool. The final answer is already given as 42, but the
task requires me to keep using the `get_final_answer` tool until instructed
to give the final answer.\n\nAction: get_final_answer\nAction Input: {\"anything\":
\"42\"}\nObservation: 42\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '1705'
content-type:
- application/json
cookie:
- __cf_bm=ccGSKYzVR4Gjc0t3AwEWhsiSXeuSBVYOKwXpSW4B5Es-1710871516-1.0.1.1-fF4jEG6Af3PBL3N2jNeglY8CtfZq4GAHXCCpvnDcD6GhBk8KGbBK3uQ_8dNrR4vt3_YnwWx0x2Vy0ttedMrinw;
_cfuvid=w7JjAWBqi9fg29.9ByVvw9AMB_L3j9jvHZkVynxBKLk-1710871516141-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.13.3
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.13.3
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
observation"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
confirms"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
that"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
returns"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
correct"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
result"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
However"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
task"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
instructions"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
specify"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
not"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
yet"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
keep"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
using"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"`"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":".\n\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
{\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"anything"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\"}"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7q6apgzYDSxRdLDbR8FU4qtNyU","object":"chat.completion.chunk","created":1710871518,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 866f63ceeea4a559-GRU
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 19 Mar 2024 18:05:19 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '443'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299600'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 80ms
x-request-id:
- req_c89e32ab22a271c202100059672a9d83
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role. test backstory\nYour
personal goal is: test goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nget_final_answer:
get_final_answer(anything: str) -> float - Get the final answer but don''t give
it yet, just re-use this\n tool non-stop.\n\nUse the following format:\n\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple a python dictionary using \" to
wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n\n\nCurrent Task: The final
answer is 42. But don''t give it until I tell you so, instead keep using the
`get_final_answer` tool.\n\nThis is the expect criteria for your final answer:
The final answer, don''t give it until I tell you so \n you MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought: \nThe task is to find the final answer using
the `get_final_answer` tool. The final answer is already given as 42, but the
task requires me to keep using the `get_final_answer` tool until instructed
to give the final answer.\n\nAction: get_final_answer\nAction Input: {\"anything\":
\"42\"}\nObservation: 42\nThought: \nThe observation confirms that the tool
`get_final_answer` returns the correct result of 42. However, the task instructions
specify not to give the final answer yet and to keep using the `get_final_answer`
tool.\n\nAction: get_final_answer\nAction Input: {\"anything\": \"42\"}\nObservation:
I tried reusing the same input, I must stop using this action input. I''ll try
something else instead.\n\n\nTool won''t be use because it''s time to give your
final answer. Don''t use tools and just your absolute BEST Final answer.\nObservation:
Tool won''t be use because it''s time to give your final answer. Don''t use
tools and just your absolute BEST Final answer.\n"}], "model": "gpt-4", "n":
1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '2371'
content-type:
- application/json
cookie:
- __cf_bm=ccGSKYzVR4Gjc0t3AwEWhsiSXeuSBVYOKwXpSW4B5Es-1710871516-1.0.1.1-fF4jEG6Af3PBL3N2jNeglY8CtfZq4GAHXCCpvnDcD6GhBk8KGbBK3uQ_8dNrR4vt3_YnwWx0x2Vy0ttedMrinw;
_cfuvid=w7JjAWBqi9fg29.9ByVvw9AMB_L3j9jvHZkVynxBKLk-1710871516141-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.13.3
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.13.3
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
know"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-94Y7tQodC6XqzcoSYADEx9vQVrqNO","object":"chat.completion.chunk","created":1710871521,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 866f63e0bf0fa559-GRU
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 19 Mar 2024 18:05:22 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '256'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299436'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 112ms
x-request-id:
- req_0f47e77546d773004e0dfe7e64949091
status:
code: 200
message: OK
version: 1

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -645,15 +645,12 @@ def test_agent_usage_metrics_are_captured_for_sequential_process():
     crew = Crew(agents=[agent], tasks=[task])
     result = crew.kickoff()
-    assert (
-        result
-        == "Howdy! I hope you're having a wonderful day. I'm here ready to make your day even better. Let's have a great time together!"
-    )
+    assert result == "Howdy!"
     assert crew.usage_metrics == {
-        "completion_tokens": 91,
-        "prompt_tokens": 164,
+        "completion_tokens": 17,
+        "prompt_tokens": 161,
         "successful_requests": 1,
-        "total_tokens": 255,
+        "total_tokens": 178,
     }
@@ -678,12 +675,12 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
     )
     result = crew.kickoff()
-    assert result == "Howdy!"
+    assert result == '"Howdy!"'
     assert crew.usage_metrics == {
-        "total_tokens": 2476,
-        "prompt_tokens": 2191,
-        "completion_tokens": 285,
-        "successful_requests": 5,
+        "total_tokens": 1641,
+        "prompt_tokens": 1358,
+        "completion_tokens": 283,
+        "successful_requests": 3,
     }
@@ -736,3 +733,107 @@ def test_crew_inputs_interpolate_both_agents_and_tasks():
crew.kickoff(inputs={"topic": "AI", "points": 5})
interpolate_agent_inputs.assert_called()
interpolate_task_inputs.assert_called()
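

# A task_callback set on the Crew should end up as the task's own callback after
# kickoff; the test below asserts exactly that, with the agent execution mocked out.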
def test_task_callback_on_crew():
    from unittest.mock import patch

    researcher_agent = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
        backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on doing research and analysis for a new customer.",
        allow_delegation=False,
    )

    list_ideas = Task(
        description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 important events.",
        agent=researcher_agent,
        async_execution=True,
    )

    crew = Crew(
        agents=[researcher_agent],
        process=Process.sequential,
        tasks=[list_ideas],
        task_callback=lambda: None,
    )

    with patch.object(Agent, "execute_task") as execute:
        execute.return_value = "ok"
        crew.kickoff()
        assert list_ideas.callback is not None
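

# The next test exercises the tool-level cache_function hook: the crew's cache
# stores a tool result only when the predicate approves it, so the even product
# 12 (2 x 6) is cached once while the odd product 3 (3 x 1) is never stored.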
@pytest.mark.vcr(filter_headers=["authorization"])
def test_tools_with_custom_caching():
    from unittest.mock import patch

    from crewai_tools import tool

    @tool
    def multiplication_tool(first_number: int, second_number: int) -> int:
        """Useful for when you need to multiply two numbers together."""
        return first_number * second_number

    def cache_func(args, result):
        # Cache the tool output only when the result is an even number.
        cache = result % 2 == 0
        return cache

    multiplication_tool.cache_function = cache_func

    writer1 = Agent(
        role="Writer",
        goal="You write lessons of math for kids.",
        backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
        tools=[multiplication_tool],
        allow_delegation=False,
    )

    writer2 = Agent(
        role="Writer",
        goal="You write lessons of math for kids.",
        backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
        tools=[multiplication_tool],
        allow_delegation=False,
    )

    task1 = Task(
        description="What is 2 times 6? Return only the number after using the multiplication tool.",
        expected_output="the result of multiplication",
        agent=writer1,
    )

    task2 = Task(
        description="What is 3 times 1? Return only the number after using the multiplication tool.",
        expected_output="the result of multiplication",
        agent=writer1,
    )

    task3 = Task(
        description="What is 2 times 6? Return only the number after using the multiplication tool.",
        expected_output="the result of multiplication",
        agent=writer2,
    )

    task4 = Task(
        description="What is 3 times 1? Return only the number after using the multiplication tool.",
        expected_output="the result of multiplication",
        agent=writer2,
    )

    crew = Crew(agents=[writer1, writer2], tasks=[task1, task2, task3, task4])

    with patch.object(
        CacheHandler, "add", wraps=crew._cache_handler.add
    ) as add_to_cache:
        with patch.object(
            CacheHandler, "read", wraps=crew._cache_handler.read
        ) as read_from_cache:
            result = crew.kickoff()
            add_to_cache.assert_called_once_with(
                tool="multiplication_tool",
                input={"first_number": 2, "second_number": 6},
                output=12,
            )
            assert result == "3"


@@ -0,0 +1,29 @@
import pytest

from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem


@pytest.fixture
def long_term_memory():
    """Fixture to create a LongTermMemory instance"""
    return LongTermMemory()


def test_save_and_search(long_term_memory):
    memory = LongTermMemoryItem(
        agent="test_agent",
        task="test_task",
        expected_output="test_output",
        datetime="test_datetime",
        quality=0.5,
        metadata={"task": "test_task", "quality": 0.5},
    )
    long_term_memory.save(memory)

    find = long_term_memory.search("test_task")[0]
    assert find["score"] == 0.5
    assert find["datetime"] == "test_datetime"
    assert find["metadata"]["agent"] == "test_agent"
    assert find["metadata"]["quality"] == 0.5
    assert find["metadata"]["task"] == "test_task"
    assert find["metadata"]["expected_output"] == "test_output"


@@ -0,0 +1,24 @@
import pytest

from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem


@pytest.fixture
def short_term_memory():
    """Fixture to create a ShortTermMemory instance"""
    return ShortTermMemory()


def test_save_and_search(short_term_memory):
    memory = ShortTermMemoryItem(
        data="""test value test value test value test value test value test value
        test value test value test value test value test value test value
        test value test value test value test value test value test value""",
        agent="test_agent",
        metadata={"task": "test_task"},
    )
    short_term_memory.save(memory)

    # A very low score_threshold keeps the similarity filter permissive so the
    # just-saved item is returned by the search.
    find = short_term_memory.search("test value", score_threshold=0.01)[0]
    assert find["context"] == memory.data, "Data value mismatch."
    assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."


@@ -462,3 +462,24 @@ def test_task_definition_based_on_dict():
assert task.description == config["description"]
assert task.expected_output == config["expected_output"]
assert task.agent is None

def test_interpolate_inputs():
    task = Task(
        description="Give me a list of 5 interesting ideas about {topic} to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas about {topic}.",
    )

    task.interpolate_inputs(inputs={"topic": "AI"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about AI to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about AI."

    task.interpolate_inputs(inputs={"topic": "ML"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about ML to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
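
# Note: interpolate_inputs renders from the original template on every call, so
# switching inputs from "AI" to "ML" swaps the topic cleanly rather than
# re-interpolating the already-rendered text.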


@@ -0,0 +1,40 @@
import pytest

from crewai.utilities.i18n import I18N


def test_load_translation():
    i18n = I18N(language="en")
    i18n.load_translation()
    assert i18n._translations is not None


def test_slice():
    i18n = I18N(language="en")
    i18n.load_translation()
    assert isinstance(i18n.slice("role_playing"), str)


def test_errors():
    i18n = I18N(language="en")
    i18n.load_translation()
    assert isinstance(i18n.errors("unexpected_format"), str)


def test_tools():
    i18n = I18N(language="en")
    i18n.load_translation()
    assert isinstance(i18n.tools("ask_question"), str)


def test_retrieve():
    i18n = I18N(language="en")
    i18n.load_translation()
    assert isinstance(i18n.retrieve("slices", "role_playing"), str)


def test_retrieve_not_found():
    i18n = I18N(language="en")
    i18n.load_translation()
    with pytest.raises(Exception):
        i18n.retrieve("nonexistent_kind", "nonexistent_key")