logs and fix merge request

This commit is contained in:
Brandon Hancock
2024-10-11 10:36:39 -04:00
parent 8c83379cb9
commit 774bc9ea75
3 changed files with 26 additions and 20 deletions


@@ -16,24 +16,25 @@ Collaboration in CrewAI is fundamental, enabling agents to combine their skills,
 The `Crew` class has been enriched with several attributes to support advanced functionalities:
 | Feature | Description |
 |:--------|:------------|
 | **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
 | **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
 | **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
 | **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
 | **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
 | **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
 | **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
 | **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
 | **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
 | **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
 | **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
+| **Memory Provider** (`memory_provider`) | Specifies the memory provider to be used by the crew for storing memories. |
 | **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
 | **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
 | **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
 | **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
 | **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |

 ## Delegation (Dividing to Conquer)
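To make the attribute surface in the table concrete, here is a plain-Python stand-in sketch of the documented `Crew` options, including the `memory_provider` row this commit adds. This is not the real `crewai.Crew` class (which is a Pydantic model with its own validation); the defaults shown are illustrative assumptions, and `"mem0"` is used here only as a hypothetical provider name.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

# Stand-in mirroring the documented Crew attributes. The real class lives in
# the crewai package; field names follow the table, defaults are assumptions.
@dataclass
class CrewConfig:
    manager_llm: Optional[Any] = None        # required only for hierarchical process
    manager_agent: Optional[Any] = None      # custom manager replacing the default
    process: str = "sequential"              # or "hierarchical"
    verbose: bool = False
    max_rpm: Optional[int] = None            # requests-per-minute cap
    language: str = "en"
    prompt_file: Optional[str] = None
    full_output: bool = False
    step_callback: Optional[Callable] = None
    task_callback: Optional[Callable] = None
    share_crew: bool = False
    memory: bool = False
    memory_provider: Optional[str] = None    # attribute documented in this commit
    embedder: Optional[dict] = None
    cache: bool = True
    output_log_file: Optional[str] = None
    planning: bool = False

# Example: hierarchical crew with memory backed by a named provider.
cfg = CrewConfig(process="hierarchical", memory=True, memory_provider="mem0")
```

The point of the sketch is only to show which knobs travel together: `memory_provider` is meaningful when `memory` is enabled, just as `manager_llm` is meaningful when `process` is hierarchical.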


@@ -201,6 +201,8 @@ class Agent(BaseAgent):
         task_prompt = task.prompt()
+        print("context for task", context)
         if context:
             task_prompt = self.i18n.slice("task_with_context").format(
                 task=task_prompt, context=context
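The `self.i18n.slice("task_with_context")` call above pulls a prompt template from the translations file and fills it with the task text and the retrieved context. A minimal stand-alone sketch of that formatting step (the template string here is illustrative, not necessarily the exact one shipped in crewai's en.json):

```python
# Illustrative template; crewai loads the real one from its translations file.
TASK_WITH_CONTEXT = "{task}\n\nThis is the context you're working with:\n{context}"

def build_task_prompt(task_prompt: str, context: "str | None") -> str:
    """Mirror of the branch above: append context only when it is present."""
    if context:
        return TASK_WITH_CONTEXT.format(task=task_prompt, context=context)
    return task_prompt

print(build_task_prompt("Summarize the report.", "Q3 revenue grew 12%."))
```

The added `print("context for task", context)` line fires before this branch, so it logs the raw context even when it is empty and the template is skipped.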


@@ -82,8 +82,11 @@ class ContextualMemory:
""" """
Fetches relevant user memory information from User Memory related to the task's description and expected_output, Fetches relevant user memory information from User Memory related to the task's description and expected_output,
""" """
print("query", query)
um_results = self.um.search(query) um_results = self.um.search(query)
print("um_results", um_results)
formatted_results = "\n".join( formatted_results = "\n".join(
[f"- {result['memory']}" for result in um_results] [f"- {result['memory']}" for result in um_results]
) )
print(f"User memories/preferences:\n{formatted_results}")
return f"User memories/preferences:\n{formatted_results}" if um_results else "" return f"User memories/preferences:\n{formatted_results}" if um_results else ""
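The three `print` calls added in this hunk trace the query, the raw search hits, and the final formatted block. The formatting itself can be exercised in isolation; the shape of `um_results` is assumed from the comprehension above (a list of dicts with a `"memory"` key):

```python
def format_user_memories(um_results: list) -> str:
    """Same join/format logic as the diff above, minus the memory backend."""
    if not um_results:
        return ""
    formatted_results = "\n".join(f"- {result['memory']}" for result in um_results)
    return f"User memories/preferences:\n{formatted_results}"

hits = [{"memory": "Prefers concise answers"}, {"memory": "Works in UTC+2"}]
print(format_user_memories(hits))
```

Note that the original method still builds `formatted_results` (and now prints it) even when `um_results` is empty; short-circuiting on the empty case first, as in the sketch, would avoid logging an empty header line.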