Compare commits


11 Commits

Author  SHA1  Message  Date
Brandon Hancock  61a4d7b8da  Merge branch 'main' into bugfix/restrict-python-version-compatibility  2024-12-09 14:06:12 -05:00
Brandon Hancock  e5b222c049  drop pipeline  2024-12-09 14:02:10 -05:00
Brandon Hancock  fb396cbaaa  Drop skip  2024-12-09 13:58:03 -05:00
Brandon Hancock  3ce44764aa  Merge branch 'bugfix/restrict-python-version-compatibility' of https://github.com/joaomdmoura/crewAI into bugfix/restrict-python-version-compatibility  2024-12-09 13:54:47 -05:00
Brandon Hancock  25690347d2  resolve final tests  2024-12-09 13:54:27 -05:00
Brandon Hancock (bhancock_ai)  97b85e1830  Merge branch 'main' into bugfix/restrict-python-version-compatibility  2024-12-09 13:50:19 -05:00
Brandon Hancock  cf39d73e64  adding thiago changes  2024-12-09 13:49:50 -05:00
Brandon Hancock  26b0375349  trying to fix failing test  2024-12-09 10:55:23 -05:00
Brandon Hancock  e322953c8b  Drop test cassette that was causing error  2024-12-09 10:28:15 -05:00
Brandon Hancock  7d5da94382  revert  2024-12-09 10:21:58 -05:00
Brandon Hancock  589447c5c4  drop 3.13  2024-12-09 10:14:44 -05:00
14 changed files with 27 additions and 141 deletions

View File

@@ -32,6 +32,7 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
 | **Output Log File** _(optional)_ | `output_log_file` | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
 | **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
+| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
 | **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
 | **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
 | **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
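
The row added above documents `manager_callbacks` only at a high level. Here is a minimal, hedged sketch of how it would be wired up, assuming standard `Agent`/`Task` construction; the required handler interface is not shown in this diff, so the empty list is a placeholder:

```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Find facts", backstory="Seasoned analyst")
writer = Agent(role="Writer", goal="Summarize findings", backstory="Clear communicator")
task = Task(description="Summarize one fact", expected_output="A short summary", agent=researcher)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.hierarchical,  # manager callbacks only apply to hierarchical crews
    manager_llm="gpt-4o",          # assumption: hierarchical crews need a manager LLM or agent
    manager_callbacks=[],          # hypothetical: list of callback handlers run by the manager agent
)
```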

View File

@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
 ## Available Models and Their Capabilities

-Here's a detailed breakdown of supported models and their capabilities, you can compare performance at [lmarena.ai](https://lmarena.ai/):
+Here's a detailed breakdown of supported models and their capabilities:

 <Tabs>
   <Tab title="OpenAI">
@@ -43,17 +43,6 @@ Here's a detailed breakdown of supported models and their capabilities, you can
       1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
     </Note>
   </Tab>
-  <Tab title="Gemini">
-    | Model | Context Window | Best For |
-    |-------|---------------|-----------|
-    | Gemini 1.5 Flash | 1M tokens | Balanced multimodal model, good for most tasks |
-    | Gemini 1.5 Flash 8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
-    | Gemini 1.5 Pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
-    <Tip>
-      Google's Gemini models are all multimodal, supporting audio, images, video and text, supporting context caching, json schema, function calling, etc.
-    </Tip>
-  </Tab>
   <Tab title="Groq">
     | Model | Context Window | Best For |
     |-------|---------------|-----------|
@@ -139,10 +128,10 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 # llm: anthropic/claude-2.1
 # llm: anthropic/claude-2.0

-# Google Models - Strong reasoning, large cachable context window, multimodal
+# Google Models - Good for general tasks
+# llm: gemini/gemini-pro
 # llm: gemini/gemini-1.5-pro-latest
-# llm: gemini/gemini-1.5-flash-latest
-# llm: gemini/gemini-1.5-flash-8b-latest
+# llm: gemini/gemini-1.0-pro-latest

 # AWS Bedrock Models - Enterprise-grade
 # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
@@ -361,18 +350,13 @@ Learn how to get the most out of your LLM configuration:
   <Accordion title="Google">
     ```python Code
-    # Option 1. Gemini accessed with an API key.
-    # https://ai.google.dev/gemini-api/docs/api-key
     GEMINI_API_KEY=<your-api-key>
-
-    # Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
-    # https://cloud.google.com/vertex-ai/generative-ai/docs/overview
     ```

     Example usage:
     ```python Code
     llm = LLM(
-        model="gemini/gemini-1.5-pro-latest",
+        model="gemini/gemini-pro",
         temperature=0.7
     )
     ```

View File

@@ -15,6 +15,7 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http>=1.22.0", "opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3", "instructor>=1.3.3",
"regex>=2024.9.11", "regex>=2024.9.11",
"crewai-tools>=0.17.0",
"click>=8.1.7", "click>=8.1.7",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",
"appdirs>=1.4.4", "appdirs>=1.4.4",
@@ -29,7 +30,6 @@ dependencies = [
"chromadb>=0.5.18", "chromadb>=0.5.18",
"pdfplumber>=0.11.4", "pdfplumber>=0.11.4",
"openpyxl>=3.1.5", "openpyxl>=3.1.5",
"blinker>=1.9.0",
] ]
[project.urls] [project.urls]

View File

@@ -144,7 +144,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                     formatted_answer
                 )

             if self.step_callback:
                 self.step_callback(tool_result)

             formatted_answer.text += f"\nObservation: {tool_result.result}"
             formatted_answer.result = tool_result.result
@@ -413,6 +413,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
""" """
while self.ask_for_human_input: while self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output) human_feedback = self._ask_human_input(formatted_answer.output)
print("Human feedback: ", human_feedback)
if self.crew and self.crew._train: if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer, human_feedback) self._handle_crew_training_output(formatted_answer, human_feedback)
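
For context, the `ask_for_human_input` loop patched above is typically driven by a task-level flag. A minimal sketch, assuming the usual `human_input` option on `Task` (an assumption; the flag itself is not shown in this diff):

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Reviewer", goal="Review drafts", backstory="Detail-oriented editor")
task = Task(
    description="Draft a one-line answer",
    expected_output="One sentence",
    agent=agent,
    human_input=True,  # enables the human-input loop shown in this hunk
)
crew = Crew(agents=[agent], tasks=[task])
# crew.kickoff() would now also echo the collected feedback via the added print call.
```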

View File

@@ -117,7 +117,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
         published_handle = publish_response.json()["handle"]
         console.print(
-            f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
+            f"Succesfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
             style="bold green",
         )
@@ -138,7 +138,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
         self._add_package(get_response.json())
-        console.print(f"Successfully installed {handle}", style="bold green")
+        console.print(f"Succesfully installed {handle}", style="bold green")

     def login(self):
         login_response = self.plus_api_client.login_to_tool_repository()

View File

@@ -14,15 +14,8 @@ from typing import (
     cast,
 )

-from blinker import Signal
 from pydantic import BaseModel, ValidationError

-from crewai.flow.flow_events import (
-    FlowFinishedEvent,
-    FlowStartedEvent,
-    MethodExecutionFinishedEvent,
-    MethodExecutionStartedEvent,
-)
 from crewai.flow.flow_visualizer import plot_flow
 from crewai.flow.utils import get_possible_return_constants
 from crewai.telemetry import Telemetry
@@ -166,7 +159,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
     _routers: Dict[str, str] = {}
     _router_paths: Dict[str, List[str]] = {}
     initial_state: Union[Type[T], T, None] = None
-    event_emitter = Signal("event_emitter")

     def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]:
         class _FlowGeneric(cls):  # type: ignore
@@ -261,14 +253,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
         Returns:
             The final output from the flow execution.
         """
-        self.event_emitter.send(
-            self,
-            event=FlowStartedEvent(
-                type="flow_started",
-                flow_name=self.__class__.__name__,
-            ),
-        )
         if inputs is not None:
             self._initialize_state(inputs)
         return asyncio.run(self.kickoff_async())
@@ -283,6 +267,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
         Returns:
             The final output from the flow execution.
         """
+        if inputs is not None:
+            self._initialize_state(inputs)

         if not self._start_methods:
             raise ValueError("No start method defined")
@@ -299,19 +285,11 @@ class Flow(Generic[T], metaclass=FlowMeta):
         # Run all start methods concurrently
         await asyncio.gather(*tasks)

-        # Determine the final output (from the last executed method)
-        final_output = self._method_outputs[-1] if self._method_outputs else None
-
-        self.event_emitter.send(
-            self,
-            event=FlowFinishedEvent(
-                type="flow_finished",
-                flow_name=self.__class__.__name__,
-                result=final_output,
-            ),
-        )
-        return final_output
+        # Return the final output (from the last executed method)
+        if self._method_outputs:
+            return self._method_outputs[-1]
+        else:
+            return None  # Or raise an exception if no methods were executed

     async def _execute_start_method(self, start_method_name: str) -> None:
         result = await self._execute_method(
@@ -374,16 +352,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
     async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
         try:
             method = self._methods[listener_name]

-            self.event_emitter.send(
-                self,
-                event=MethodExecutionStartedEvent(
-                    type="method_execution_started",
-                    method_name=listener_name,
-                    flow_name=self.__class__.__name__,
-                ),
-            )
-
             sig = inspect.signature(method)
             params = list(sig.parameters.values())
@@ -399,15 +367,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
                 # If listener does not expect parameters, call without arguments
                 listener_result = await self._execute_method(listener_name, method)

-            self.event_emitter.send(
-                self,
-                event=MethodExecutionFinishedEvent(
-                    type="method_execution_finished",
-                    method_name=listener_name,
-                    flow_name=self.__class__.__name__,
-                ),
-            )
-
             # Execute listeners of this listener
             await self._execute_listeners(listener_name, listener_result)
         except Exception as e:
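
Alongside the event removal, the `kickoff_async` hunk above means inputs are now initialized inside the async path as well, so state can be seeded when calling it directly. A minimal sketch, assuming the usual `Flow`/`@start` decorators from this module and dict-style unstructured state:

```python
import asyncio

from crewai.flow.flow import Flow, start

class GreetFlow(Flow):
    @start()
    def say_hello(self):
        # State was seeded by the `inputs` dict now handled in kickoff_async.
        return f"Hello, {self.state['name']}"

result = asyncio.run(GreetFlow().kickoff_async(inputs={"name": "Ada"}))
print(result)  # "Hello, Ada" (the last executed method's output)
```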

View File

@@ -1,33 +0,0 @@
-from dataclasses import dataclass, field
-from datetime import datetime
-from typing import Any, Optional
-
-
-@dataclass
-class Event:
-    type: str
-    flow_name: str
-    timestamp: datetime = field(init=False)
-
-    def __post_init__(self):
-        self.timestamp = datetime.now()
-
-
-@dataclass
-class FlowStartedEvent(Event):
-    pass
-
-
-@dataclass
-class MethodExecutionStartedEvent(Event):
-    method_name: str
-
-
-@dataclass
-class MethodExecutionFinishedEvent(Event):
-    method_name: str
-
-
-@dataclass
-class FlowFinishedEvent(Event):
-    result: Optional[Any] = None
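
For readers tracking what this deletion drops: the events above were plain dataclasses broadcast through a module-level blinker `Signal`. An illustrative, self-contained re-creation of that plumbing (not crewAI API after this change):

```python
from dataclasses import dataclass, field
from datetime import datetime

from blinker import Signal

@dataclass
class Event:
    type: str
    flow_name: str
    timestamp: datetime = field(init=False)

    def __post_init__(self):
        self.timestamp = datetime.now()

event_emitter = Signal("event_emitter")  # mirrors the removed class attribute on Flow

def on_event(sender, event=None):
    # blinker passes the sender plus any keyword arguments given to send()
    print(f"{event.timestamp:%H:%M:%S} {event.type} ({event.flow_name})")

event_emitter.connect(on_event)
event_emitter.send(None, event=Event(type="flow_started", flow_name="DemoFlow"))
```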

View File

@@ -38,7 +38,7 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
         if not path.exists():
             self._logger.log(
                 "error",
-                f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.",
+                f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
                 color="red",
             )
             raise FileNotFoundError(f"File not found: {path}")

View File

@@ -43,10 +43,6 @@ LLM_CONTEXT_WINDOW_SIZES = {
"gpt-4-turbo": 128000, "gpt-4-turbo": 128000,
"o1-preview": 128000, "o1-preview": 128000,
"o1-mini": 128000, "o1-mini": 128000,
# gemini
"gemini-1.5-pro": 2097152,
"gemini-1.5-flash": 1048576,
"gemini-1.5-flash-8b": 1048576,
# deepseek # deepseek
"deepseek-chat": 128000, "deepseek-chat": 128000,
# groq # groq
@@ -65,9 +61,6 @@ LLM_CONTEXT_WINDOW_SIZES = {
"mixtral-8x7b-32768": 32768, "mixtral-8x7b-32768": 32768,
} }
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75
@contextmanager @contextmanager
def suppress_warnings(): def suppress_warnings():
@@ -131,7 +124,6 @@ class LLM:
         self.api_version = api_version
         self.api_key = api_key
         self.callbacks = callbacks
-        self.context_window_size = 0
         self.kwargs = kwargs

         litellm.drop_params = True
@@ -199,16 +191,7 @@ class LLM:
     def get_context_window_size(self) -> int:
         # Only using 75% of the context window size to avoid cutting the message in the middle
-        if self.context_window_size != 0:
-            return self.context_window_size
-
-        self.context_window_size = int(
-            DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
-        )
-        for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
-            if self.model.startswith(key):
-                self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
-        return self.context_window_size
+        return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)

     def set_callbacks(self, callbacks: List[Any]):
         callback_types = [type(callback) for callback in callbacks]
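
The replacement line changes lookup semantics, not just length: the removed code prefix-matched model names (with caching and module-level constants), while the new one-liner requires an exact key and inlines the 8192 default and 0.75 ratio. A standalone illustration of the difference, using a made-up one-entry table:

```python
LLM_CONTEXT_WINDOW_SIZES = {"o1-mini": 128000}  # toy table for illustration

def old_window(model: str) -> int:
    # Prefix match, as in the removed loop.
    size = int(8192 * 0.75)  # DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
    for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
        if model.startswith(key):
            size = int(value * 0.75)
    return size

def new_window(model: str) -> int:
    # Exact-key lookup, as in the added line.
    return int(LLM_CONTEXT_WINDOW_SIZES.get(model, 8192) * 0.75)

print(old_window("o1-mini-2024-09-12"))  # 96000: prefix match hits "o1-mini"
print(new_window("o1-mini-2024-09-12"))  # 6144: exact lookup misses, default applies
```

Dated model aliases like `o1-mini-2024-09-12` are therefore the cases most likely to fall back to the 8192-token default after this change.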

View File

@@ -44,14 +44,14 @@ class BaseAgentTool(BaseTool):
                 if available_agent.role.casefold().replace("\n", "") == agent_name
             ]
         except Exception as _:
-            return self.i18n.errors("agent_tool_unexisting_coworker").format(
+            return self.i18n.errors("agent_tool_unexsiting_coworker").format(
                 coworkers="\n".join(
                     [f"- {agent.role.casefold()}" for agent in self.agents]
                 )
             )

         if not agent:
-            return self.i18n.errors("agent_tool_unexisting_coworker").format(
+            return self.i18n.errors("agent_tool_unexsiting_coworker").format(
                 coworkers="\n".join(
                     [f"- {agent.role.casefold()}" for agent in self.agents]
                 )

View File

@@ -28,7 +28,7 @@
"errors": { "errors": {
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}", "force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.", "force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n", "agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n", "task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}", "tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.", "tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",

View File

@@ -85,7 +85,7 @@ def test_install_success(mock_get, mock_subprocess_run):
         env=unittest.mock.ANY
     )
-    assert "Successfully installed sample-tool" in output
+    assert "Succesfully installed sample-tool" in output

 @patch("crewai.cli.plus_api.PlusAPI.get_tool")

View File

@@ -26,7 +26,7 @@
     },
     "errors": {
         "force_final_answer": "Lorem ipsum dolor sit amet",
-        "agent_tool_unexisting_coworker": "Lorem ipsum dolor sit amet",
+        "agent_tool_unexsiting_coworker": "Lorem ipsum dolor sit amet",
         "task_repeated_usage": "Lorem ipsum dolor sit amet",
         "tool_usage_error": "Lorem ipsum dolor sit amet",
         "tool_arguments_error": "Lorem ipsum dolor sit amet",

uv.lock generated
View File

@@ -272,15 +272,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b1/fe/e8c672695b37eecc5cbf43e1d0638d88d66ba3a44c4d321c796f4e59167f/beautifulsoup4-4.12.3-py3-none-any.whl", hash = "sha256:b80878c9f40111313e55da8ba20bdba06d8fa3969fc68304167741bbf9e082ed", size = 147925 }, { url = "https://files.pythonhosted.org/packages/b1/fe/e8c672695b37eecc5cbf43e1d0638d88d66ba3a44c4d321c796f4e59167f/beautifulsoup4-4.12.3-py3-none-any.whl", hash = "sha256:b80878c9f40111313e55da8ba20bdba06d8fa3969fc68304167741bbf9e082ed", size = 147925 },
] ]
[[package]]
name = "blinker"
version = "1.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458 },
]
[[package]] [[package]]
name = "build" name = "build"
version = "1.2.2.post1" version = "1.2.2.post1"
@@ -577,9 +568,9 @@ source = { editable = "." }
 dependencies = [
     { name = "appdirs" },
     { name = "auth0-python" },
-    { name = "blinker" },
     { name = "chromadb" },
     { name = "click" },
+    { name = "crewai-tools" },
     { name = "instructor" },
     { name = "json-repair" },
     { name = "jsonref" },
@@ -647,9 +638,9 @@ requires-dist = [
{ name = "agentops", marker = "extra == 'agentops'", specifier = ">=0.3.0" }, { name = "agentops", marker = "extra == 'agentops'", specifier = ">=0.3.0" },
{ name = "appdirs", specifier = ">=1.4.4" }, { name = "appdirs", specifier = ">=1.4.4" },
{ name = "auth0-python", specifier = ">=4.7.1" }, { name = "auth0-python", specifier = ">=4.7.1" },
{ name = "blinker", specifier = ">=1.9.0" },
{ name = "chromadb", specifier = ">=0.5.18" }, { name = "chromadb", specifier = ">=0.5.18" },
{ name = "click", specifier = ">=8.1.7" }, { name = "click", specifier = ">=8.1.7" },
{ name = "crewai-tools", specifier = ">=0.17.0" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" }, { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" },
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" }, { name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" }, { name = "instructor", specifier = ">=1.3.3" },