Compare commits

49 Commits

Author SHA1 Message Date
Lorenze Jay  c0ad4576e2  Merge branch 'main' of github.com:crewAIInc/crewAI into knowledge  2024-11-20 15:36:40 -08:00
Lorenze Jay  6359b64d22  added docstrings and type hints for cli  2024-11-20 15:36:12 -08:00
Lorenze Jay  9329119f76  clearer docs  2024-11-20 14:05:15 -08:00
Lorenze Jay  38c0d61b11  more fixes  2024-11-20 14:02:12 -08:00
Lorenze Jay  8564f5551f  rm print  2024-11-20 13:49:58 -08:00
Lorenze Jay  8a5404275f  linted  2024-11-20 13:48:11 -08:00
Lorenze Jay  52189a46bc  more docs  2024-11-20 13:43:08 -08:00
Lorenze Jay  44ab749fda  improvements from review  2024-11-20 13:32:00 -08:00
Lorenze Jay  3c4504bd4f  better docs  2024-11-20 13:31:13 -08:00
Lorenze Jay  23276cbd76  adding docs  2024-11-19 18:31:09 -08:00
Lorenze Jay  fe18da5e11  fix  2024-11-19 18:22:05 -08:00
Lorenze Jay  76da972ce9  put a flag  2024-11-19 17:42:44 -08:00
Lorenze Jay  4663997b4c  verbose run  2024-11-19 17:31:53 -08:00
Lorenze Jay  b185b9e289  linted  2024-11-19 17:29:06 -08:00
Lorenze Jay  787f2eaa7c  mock knowledge query to not spin up db  2024-11-19 17:27:17 -08:00
Lorenze Jay  e7d816fb2a  Merge branch 'main' of github.com:crewAIInc/crewAI into knowledge  2024-11-19 15:09:33 -08:00
Lorenze Jay  8373c9b521  linted  2024-11-19 14:50:26 -08:00
Lorenze Jay  ec2fe6ff91  just mocks  2024-11-19 14:48:00 -08:00
Lorenze Jay  58bf2d57f7  added extra cassette  2024-11-19 14:16:22 -08:00
Lorenze Jay  705ee16c1c  type check fixes  2024-11-19 12:06:29 -08:00
Lorenze Jay  0c5b6f2a93  mypysrc fixes  2024-11-19 12:02:06 -08:00
Lorenze Jay  914067df37  fixed text_file_knowledge  2024-11-19 11:39:18 -08:00
Lorenze Jay  de742c827d  improvements  2024-11-19 11:27:01 -08:00
Lorenze Jay  efa8a378a1  None embedder to use default on pipeline cloning  2024-11-19 10:53:09 -08:00
Lorenze Jay  e882725b8a  updated default embedder  2024-11-19 10:43:06 -08:00
Lorenze Jay  cbfdbe3b68  generating cassettes for knowledge test  2024-11-19 10:10:14 -08:00
Lorenze Jay  c8bf242633  fix duplicate  2024-11-19 09:59:23 -08:00
Lorenze Jay  70910dd7b4  fix test  2024-11-19 09:41:33 -08:00
Lorenze Jay  b104404418  cleanup rm unused embedder  2024-11-18 16:03:48 -08:00
Lorenze Jay  d579c5ae12  linted  2024-11-18 13:58:23 -08:00
Lorenze Jay  4831dcb85b  Merge branch 'main' of github.com:crewAIInc/crewAI into knowledge  2024-11-18 13:55:32 -08:00
Lorenze Jay  cbfcde73ec  consolodation and improvements  2024-11-18 13:52:33 -08:00
Lorenze Jay  b2c06d5b7a  properly reset memory+knowledge  2024-11-18 13:45:43 -08:00
Lorenze Jay  352d05370e  properly reset memory  2024-11-18 13:37:16 -08:00
Lorenze Jay  b90793874c  return this  2024-11-15 15:51:07 -08:00
Lorenze Jay  cdf5233523  Merge branch 'main' of github.com:crewAIInc/crewAI into knowledge  2024-11-15 15:42:32 -08:00
Lorenze Jay  cb03ee60b8  improvements all around Knowledge class  2024-11-15 15:28:07 -08:00
Lorenze Jay  10f445e18a  ensure embeddings are persisted  2024-11-14 18:31:07 -08:00
Lorenze Jay  98a708ca15  Merge branch 'main' of github.com:crewAIInc/crewAI into knowledge  2024-11-14 12:22:07 -08:00
Brandon Hancock  7b59c5b049  adding in lorenze feedback  2024-11-07 12:10:09 -05:00
Brandon Hancock  86ede8344c  update yaml to include optional deps  2024-11-07 11:41:49 -05:00
Brandon Hancock  59165cbad8  fix linting  2024-11-07 11:37:06 -05:00
Brandon Hancock  4af263ca1e  Merge branch 'main' into knowledge  2024-11-07 11:33:08 -05:00
Brandon Hancock  617ee989cd  added additional sources  2024-11-06 16:41:17 -05:00
Brandon Hancock  6131dbac4f  Improve types and better support for file paths  2024-11-06 15:57:03 -05:00
Brandon Hancock  1a35114c08  Adding core knowledge sources  2024-11-06 12:33:55 -05:00
Brandon Hancock  a8a2f80616  WIP  2024-11-05 12:04:58 -05:00
Brandon Hancock  dc314c1151  Merge branch 'main' into knowledge  2024-11-04 15:02:47 -05:00
João Moura  75322b2de1  initial knowledge  2024-11-04 15:53:19 -03:00
16 changed files with 52 additions and 171 deletions

View File

@@ -100,7 +100,7 @@ You can now start developing your crew by editing the files in the `src/my_proje
#### Example of a simple crew with a sequential process:
Instantiate your crew:
Instatiate your crew:
```shell
crewai create crew latest-ai-development
@@ -399,7 +399,7 @@ Data collected includes:
- Roles of agents in a crew
- Understand high level use cases so we can build better tools, integrations and examples about it
- Tools names available
- Understand out of the publicly available tools, which ones are being used the most so we can improve them
- Understand out of the publically available tools, which ones are being used the most so we can improve them
Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
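For reference, a minimal sketch of opting in, assuming only what the paragraph above states (that `share_crew` is an attribute on `Crew`); the agent and task are hypothetical placeholders added solely to make the snippet self-contained:

```python
from crewai import Agent, Crew, Task

# Placeholder agent and task, included only so the sketch is self-contained.
agent = Agent(
    role="Researcher",
    goal="Summarize a topic.",
    backstory="A placeholder agent for this example.",
)
task = Task(
    description="Write one sentence about {topic}.",
    expected_output="A single sentence.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    share_crew=True,  # opt in to sharing the complete telemetry data
)
```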

View File

@@ -1,6 +1,6 @@
---
title: Knowledge
description: Understand what knowledge is in CrewAI and how to effectively use it.
description: What is knowledge in CrewAI and how to use it.
icon: book
---
@@ -8,14 +8,7 @@ icon: book
## Introduction
Knowledge in CrewAI serves as a foundational component for enriching AI agents with contextual and relevant information. It enables agents to access and utilize structured data sources during their execution processes, making them more intelligent and responsive.
The Knowledge class in CrewAI provides a powerful way to manage and query knowledge sources for your AI agents. This guide will show you how to implement knowledge management in your CrewAI projects.
## What is Knowledge?
The `Knowledge` class in CrewAI manages various sources that store information, which can be queried and retrieved by AI agents. This modular approach allows you to integrate diverse data formats such as text, PDFs, spreadsheets, and more into your AI workflows.
Additionally, we have specific tools for generate knowledge sources for strings, text files, PDF's, and Spreadsheets. You can expand on any source type by extending the `KnowledgeSource` class.
## Basic Implementation
@@ -32,14 +25,17 @@ string_source = StringKnowledgeSource(
content=content, metadata={"preference": "personal"}
)
# Create an agent with the knowledge store
llm = LLM(model="gpt-4o-mini", temperature=0)
# Create an agent with the knowledge store
agent = Agent(
role="About User",
goal="You know everything about the user.",
backstory="""You are a master at understanding people and their preferences.""",
verbose=True
verbose=True,
allow_delegation=False,
llm=llm,
)
task = Task(
description="Answer the following questions about the user: {question}",
expected_output="An answer to the question.",
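To put the hunk above in context, here is a minimal end-to-end sketch assembled from the snippets on this page. The `string_knowledge_source` import path, the example `content` string, and the `knowledge_sources` parameter on `Crew` are assumptions; this diff only shows the `StringKnowledgeSource`, `LLM`, `Agent`, and `Task` call sites, not how the source is attached to the crew.

```python
from crewai import LLM, Agent, Crew, Process, Task
# Import path assumed; the diff only shows the StringKnowledgeSource constructor call.
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Illustrative text the agent should be able to query at runtime.
content = "The user's name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
    content=content, metadata={"preference": "personal"}
)

# Deterministic model configuration, as in the docs hunk above.
llm = LLM(model="gpt-4o-mini", temperature=0)

# Agent intended to answer from the knowledge store.
agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="You are a master at understanding people and their preferences.",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)

task = Task(
    description="Answer the following questions about the user: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    knowledge_sources=[string_source],  # assumption: knowledge attached at the crew level
)

result = crew.kickoff(inputs={"question": "What city does John live in?"})
print(result)
```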

View File

@@ -310,8 +310,8 @@ These are examples of how to configure LLMs for your agent.
from crewai import LLM
llm = LLM(
model="llama-3.1-sonar-large-128k-online",
base_url="https://api.perplexity.ai/",
model="perplexity/mistral-7b-instruct",
base_url="https://api.perplexity.ai/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
@@ -400,4 +400,4 @@ This is particularly useful when working with OpenAI-compatible APIs or when you
- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust temperature or top_p.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter or optimize your input.
- **Timeout Errors**: Increase the `timeout` parameter or optimize your input.

View File

@@ -1,59 +0,0 @@
---
title: Before and After Kickoff Hooks
description: Learn how to use before and after kickoff hooks in CrewAI
---
CrewAI provides hooks that allow you to execute code before and after a crew's kickoff. These hooks are useful for preprocessing inputs or post-processing results.
## Before Kickoff Hook
The before kickoff hook is executed before the crew starts its tasks. It receives the input dictionary and can modify it before passing it to the crew. You can use this hook to set up your environment, load necessary data, or preprocess your inputs. This is useful in scenarios where the input data might need enrichment or validation before being processed by the crew.
Here's an example of defining a before kickoff function in your `crew.py`:
```python
from crewai import CrewBase, before_kickoff
@CrewBase
class MyCrew:
@before_kickoff
def prepare_data(self, inputs):
# Preprocess or modify inputs
inputs['processed'] = True
return inputs
#...
```
In this example, the prepare_data function modifies the inputs by adding a new key-value pair indicating that the inputs have been processed.
## After Kickoff Hook
The after kickoff hook is executed after the crew has completed its tasks. It receives the result object, which contains the outputs of the crew's execution. This hook is ideal for post-processing results, such as logging, data transformation, or further analysis.
Here's how you can define an after kickoff function in your `crew.py`:
```python
from crewai import CrewBase, after_kickoff
@CrewBase
class MyCrew:
@after_kickoff
def log_results(self, result):
# Log or modify the results
print("Crew execution completed with result:", result)
return result
# ...
```
In the `log_results` function, the results of the crew execution are simply printed out. You can extend this to perform more complex operations such as sending notifications or integrating with other services.
## Utilizing Both Hooks
Both hooks can be used together to provide a comprehensive setup and teardown process for your crew's execution. They are particularly useful in maintaining clean code architecture by separating concerns and enhancing the modularity of your CrewAI implementations.
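A minimal sketch combining both hooks in one class, mirroring the two snippets above; the `crewai.project` import path follows the quickstart hunk later on this page, and the remaining `@agent`/`@task`/`@crew` members are elided just as in the originals:

```python
from crewai.project import CrewBase, after_kickoff, before_kickoff


@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Enrich or validate inputs before the crew starts its tasks.
        inputs["processed"] = True
        return inputs

    @after_kickoff
    def log_results(self, result):
        # Log or transform the result after the crew has finished.
        print("Crew execution completed with result:", result)
        return result

    # ... @agent, @task, and @crew definitions as usual
```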
## Conclusion
Before and after kickoff hooks in CrewAI offer powerful ways to interact with the lifecycle of a crew's execution. By understanding and utilizing these hooks, you can greatly enhance the robustness and flexibility of your AI agents.

View File

@@ -8,7 +8,7 @@ icon: rocket
Let's create a simple crew that will help us `research` and `report` on the `latest AI developments` for a given topic or subject.
Before we proceed, make sure you have `crewai` and `crewai-tools` installed.
Before we proceed, make sure you have `crewai` and `crewai-tools` installed.
If you haven't installed them yet, you can do so by following the [installation guide](/installation).
Follow the steps below to get crewing! 🚣‍♂️
@@ -23,7 +23,7 @@ Follow the steps below to get crewing! 🚣‍♂️
```
</CodeGroup>
</Step>
<Step title="Modify your `agents.yaml` file">
<Step title="Modify your `agents.yaml` file">
<Tip>
You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{topic}` will be replaced by the value of the variable in the `main.py` file.
@@ -39,7 +39,7 @@ Follow the steps below to get crewing! 🚣‍♂️
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
@@ -51,7 +51,7 @@ Follow the steps below to get crewing! 🚣‍♂️
it easy for others to understand and act on the information you provide.
```
</Step>
<Step title="Modify your `tasks.yaml` file">
<Step title="Modify your `tasks.yaml` file">
```yaml tasks.yaml
# src/latest_ai_development/config/tasks.yaml
research_task:
@@ -73,8 +73,8 @@ Follow the steps below to get crewing! 🚣‍♂️
agent: reporting_analyst
output_file: report.md
```
</Step>
<Step title="Modify your `crew.py` file">
</Step>
<Step title="Modify your `crew.py` file">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
@@ -121,34 +121,10 @@ Follow the steps below to get crewing! 🚣‍♂️
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=True,
)
)
```
</Step>
<Step title="[Optional] Add before and after crew functions">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
@before_kickoff
def before_kickoff_function(self, inputs):
print(f"Before kickoff function with inputs: {inputs}")
return inputs # You can return the inputs or modify them as needed
@after_kickoff
def after_kickoff_function(self, result):
print(f"After kickoff function with result: {result}")
return result # You can return the result or modify it as needed
# ... remaining code
```
</Step>
<Step title="Feel free to pass custom inputs to your crew">
<Step title="Feel free to pass custom inputs to your crew">
For example, you can pass the `topic` input to your crew to customize the research and reporting.
```python main.py
#!/usr/bin/env python
@@ -261,14 +237,14 @@ Follow the steps below to get crewing! 🚣‍♂️
### Note on Consistency in Naming
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
For example, you can reference the agent for specific tasks from `tasks.yaml` file.
For example, you can reference the agent for specific tasks from `tasks.yaml` file.
This naming consistency allows CrewAI to automatically link your configurations with your code; otherwise, your task won't recognize the reference properly.
#### Example References
<Tip>
Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
</Tip>
</Tip>
```yaml agents.yaml
email_summarizer:
@@ -305,8 +281,6 @@ Use the annotations to properly reference the agent and task in the `crew.py` fi
* `@task`
* `@crew`
* `@tool`
* `@before_kickoff`
* `@after_kickoff`
* `@callback`
* `@output_json`
* `@output_pydantic`
@@ -330,7 +304,7 @@ def email_summarizer_task(self) -> Task:
<Tip>
In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
You can learn more about the core concepts [here](/concepts).
</Tip>
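A minimal sketch of switching to the hierarchical process described in the tip above; the `manager_llm` parameter does not appear in this diff and is an assumption, and the agents and task are placeholders added only to make the snippet self-contained:

```python
from crewai import Agent, Crew, Process, Task

# Placeholder agents and task, only to make the sketch self-contained.
researcher = Agent(
    role="Researcher",
    goal="Collect facts about {topic}.",
    backstory="A placeholder agent.",
)
writer = Agent(
    role="Writer",
    goal="Summarize the findings about {topic}.",
    backstory="A placeholder agent.",
)
report = Task(
    description="Produce a short summary of {topic}.",
    expected_output="One paragraph.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[report],
    process=Process.hierarchical,  # a manager coordinates planning, delegation, and validation
    manager_llm="gpt-4o",          # assumption: the manager's model is configured this way
)
```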

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.83.0"
version = "0.80.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -29,8 +29,6 @@ dependencies = [
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"chromadb>=0.5.18",
"pdfplumber>=0.11.4",
"openpyxl>=3.1.5",
]
[project.urls]

View File

@@ -16,7 +16,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.83.0"
__version__ = "0.80.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -1,5 +1,5 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
from crewai.project import CrewBase, agent, crew, task
# Uncomment the following line to use an example of a custom tool
# from {{folder_name}}.tools.custom_tool import MyCustomTool
@@ -14,18 +14,6 @@ class {{crew_name}}():
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff # Optional hook to be executed before the crew starts
def pull_data_example(self, inputs):
# Example of pulling data from an external API, dynamically changing the inputs
inputs['extra_data'] = "This is extra data"
return inputs
@after_kickoff # Optional hook to be executed after the crew has finished
def log_results(self, output):
# Example of logging results, dynamically changing the output
print(f"Results: {output}")
return output
@agent
def researcher(self) -> Agent:
return Agent(

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.83.0,<1.0.0"
"crewai[tools]>=0.80.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.83.0,<1.0.0",
"crewai[tools]>=0.80.0,<1.0.0",
]
[project.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.83.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.80.0,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.83.0,<1.0.0"
"crewai[tools]>=0.80.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.83.0"
"crewai[tools]>=0.80.0"
]

View File

@@ -1,6 +1,6 @@
import io
import logging
import sys
import threading
import warnings
from contextlib import contextmanager
from typing import Any, Dict, List, Optional, Union
@@ -13,25 +13,16 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
)
class FilteredStream:
def __init__(self, original_stream):
self._original_stream = original_stream
self._lock = threading.Lock()
def write(self, s) -> int:
with self._lock:
if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s
or "LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`"
in s
):
return 0
return self._original_stream.write(s)
def flush(self):
with self._lock:
return self._original_stream.flush()
class FilteredStream(io.StringIO):
def write(self, s):
if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s
or "LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`"
in s
):
return
super().write(s)
LLM_CONTEXT_WINDOW_SIZES = {
@@ -69,8 +60,8 @@ def suppress_warnings():
# Redirect stdout and stderr
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = FilteredStream(old_stdout)
sys.stderr = FilteredStream(old_stderr)
sys.stdout = FilteredStream()
sys.stderr = FilteredStream()
try:
yield

View File

@@ -20,10 +20,10 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.i18n import I18N
@@ -208,9 +208,7 @@ class Task(BaseModel):
"""Execute the task asynchronously."""
future: Future[TaskOutput] = Future()
threading.Thread(
daemon=True,
target=self._execute_task_async,
args=(agent, context, tools, future),
target=self._execute_task_async, args=(agent, context, tools, future)
).start()
return future
@@ -279,9 +277,7 @@ class Task(BaseModel):
content = (
json_output
if json_output
else pydantic_output.model_dump_json()
if pydantic_output
else result
else pydantic_output.model_dump_json() if pydantic_output else result
)
self._save_file(content)

uv.lock generated
View File

@@ -608,7 +608,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.83.0"
version = "0.80.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
@@ -622,11 +622,9 @@ dependencies = [
{ name = "langchain" },
{ name = "litellm" },
{ name = "openai" },
{ name = "openpyxl" },
{ name = "opentelemetry-api" },
{ name = "opentelemetry-exporter-otlp-proto-http" },
{ name = "opentelemetry-sdk" },
{ name = "pdfplumber" },
{ name = "pydantic" },
{ name = "python-dotenv" },
{ name = "pyvis" },
@@ -643,9 +641,6 @@ agentops = [
fastembed = [
{ name = "fastembed" },
]
mem0 = [
{ name = "mem0ai" },
]
openpyxl = [
{ name = "openpyxl" },
]
@@ -655,6 +650,9 @@ pandas = [
pdfplumber = [
{ name = "pdfplumber" },
]
mem0 = [
{ name = "mem0ai" },
]
tools = [
{ name = "crewai-tools" },
]
@@ -696,13 +694,11 @@ requires-dist = [
{ name = "litellm", specifier = ">=1.44.22" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.29" },
{ name = "openai", specifier = ">=1.13.3" },
{ name = "openpyxl", specifier = ">=3.1.5" },
{ name = "openpyxl", marker = "extra == 'openpyxl'", specifier = ">=3.1.5" },
{ name = "opentelemetry-api", specifier = ">=1.22.0" },
{ name = "opentelemetry-exporter-otlp-proto-http", specifier = ">=1.22.0" },
{ name = "opentelemetry-sdk", specifier = ">=1.22.0" },
{ name = "pandas", marker = "extra == 'pandas'", specifier = ">=2.2.3" },
{ name = "pdfplumber", specifier = ">=0.11.4" },
{ name = "pdfplumber", marker = "extra == 'pdfplumber'", specifier = ">=0.11.4" },
{ name = "pydantic", specifier = ">=2.4.2" },
{ name = "python-dotenv", specifier = ">=1.0.0" },
@@ -956,6 +952,7 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/8b/5fe2cc11fee489817272089c4203e679c63b570a5aaeb18d852ae3cbba6a/et_xmlfile-2.0.0-py3-none-any.whl", hash = "sha256:7a91720bc756843502c3b7504c77b8fe44217c85c537d85037f0f536151b2caa", size = 18059 },
]
[[package]]
name = "exceptiongroup"
version = "1.2.2"