Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-05-03 08:12:39 +00:00

Commit: "logs and fix merge request"
```diff
@@ -17,7 +17,7 @@ Collaboration in CrewAI is fundamental, enabling agents to combine their skills,
 The `Crew` class has been enriched with several attributes to support advanced functionalities:
 
 | Feature | Description |
-|:-------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| :-------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
 | **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
 | **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
```
```diff
@@ -29,6 +29,7 @@ The `Crew` class has been enriched with several attributes to support advanced f
 | **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
 | **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
 | **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
+| **Memory Provider** (`memory_provider`) | Specifies the memory provider to be used by the crew for storing memories. |
 | **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
 | **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
 | **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
```
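Taken together, the attributes in the table above are passed straight to the `Crew` constructor. A minimal configuration sketch, assuming the `crewai` package is installed and an LLM provider is configured; the agent and task definitions are placeholders for illustration, not part of this diff:

```python
# Hypothetical wiring of the Crew attributes documented above.
# Agent/task contents are made up; only the keyword names mirror the table.
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Find and summarize facts",
    backstory="A meticulous analyst.",
)
report = Task(
    description="Summarize findings",
    expected_output="A short report",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[report],
    process=Process.hierarchical,        # Process Flow
    manager_llm="gpt-4o",                # required for hierarchical processes
    memory=True,                         # Memory Usage
    embedder={"provider": "openai"},     # Embedder Configuration
    cache=True,                          # Cache Management
    share_crew=False,                    # Crew Sharing
    output_log_file="crew.log",          # Output Logging
)
```

This is a configuration sketch only; running `crew.kickoff()` would additionally require valid API credentials.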
```diff
@@ -201,6 +201,8 @@ class Agent(BaseAgent):
 
         task_prompt = task.prompt()
 
+        print("context for task", context)
+
         if context:
             task_prompt = self.i18n.slice("task_with_context").format(
                 task=task_prompt, context=context
```
```diff
@@ -82,8 +82,11 @@ class ContextualMemory:
         """
         Fetches relevant user memory information from User Memory related to the task's description and expected_output,
         """
+        print("query", query)
         um_results = self.um.search(query)
+        print("um_results", um_results)
         formatted_results = "\n".join(
             [f"- {result['memory']}" for result in um_results]
         )
+        print(f"User memories/preferences:\n{formatted_results}")
         return f"User memories/preferences:\n{formatted_results}" if um_results else ""
```
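The three prints added here trace the user-memory lookup end to end: the query, the raw search hits, and the formatted block that is returned. The formatting step can be sketched standalone; the result shape (a list of dicts with a `"memory"` key) is assumed from the list comprehension in the diff:

```python
# Standalone sketch of ContextualMemory's result formatting: each search hit
# becomes a "- ..." bullet, and an empty result list yields an empty string.
def format_user_memories(um_results: list[dict]) -> str:
    """Format user-memory search results the way the diff above does."""
    formatted_results = "\n".join(f"- {result['memory']}" for result in um_results)
    return f"User memories/preferences:\n{formatted_results}" if um_results else ""
```

Note the empty-list branch: returning `""` rather than a bare header keeps the memory section out of the prompt when nothing relevant was found.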