* initial knowledge

* WIP

* Adding core knowledge sources

* Improve types and better support for file paths

* added additional sources

* fix linting

* update yaml to include optional deps

* adding in Lorenze feedback

* ensure embeddings are persisted

* improvements all around Knowledge class

* return this

* properly reset memory

* properly reset memory+knowledge

* consolidation and improvements

* linted

* cleanup rm unused embedder

* fix test

* fix duplicate

* generating cassettes for knowledge test

* updated default embedder

* None embedder to use default on pipeline cloning

* improvements

* fixed text_file_knowledge

* mypy src fixes

* type check fixes

* added extra cassette

* just mocks

* linted

* mock knowledge query to not spin up db

* linted

* verbose run

* put a flag

* fix

* adding docs

* better docs

* improvements from review

* more docs

* linted

* rm print

* more fixes

* clearer docs

* added docstrings and type hints for cli

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Author: Brandon Hancock (bhancock_ai)
Date: 2024-11-20 18:40:08 -05:00
Committed by: GitHub
Parent: fde1ee45f9
Commit: 14a36d3f5e
37 changed files with 2302 additions and 266 deletions


@@ -0,0 +1,48 @@
from abc import ABC, abstractmethod
from typing import List, Dict, Any

import numpy as np
from pydantic import BaseModel, ConfigDict, Field

from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage


class BaseKnowledgeSource(BaseModel, ABC):
    """Abstract base class for knowledge sources."""

    chunk_size: int = 4000
    chunk_overlap: int = 200
    chunks: List[str] = Field(default_factory=list)
    chunk_embeddings: List[np.ndarray] = Field(default_factory=list)

    model_config = ConfigDict(arbitrary_types_allowed=True)
    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
    metadata: Dict[str, Any] = Field(default_factory=dict)

    @abstractmethod
    def load_content(self) -> Dict[Any, str]:
        """Load and preprocess content from the source."""
        pass

    @abstractmethod
    def add(self) -> None:
        """Process content, chunk it, compute embeddings, and save them."""
        pass

    def get_embeddings(self) -> List[np.ndarray]:
        """Return the list of embeddings for the chunks."""
        return self.chunk_embeddings

    def _chunk_text(self, text: str) -> List[str]:
        """Utility method to split text into chunks."""
        return [
            text[i : i + self.chunk_size]
            for i in range(0, len(text), self.chunk_size - self.chunk_overlap)
        ]

    def save_documents(self, metadata: Dict[str, Any]):
        """
        Save the documents to the storage.
        This method should be called after the chunks and embeddings are generated.
        """
        self.storage.save(self.chunks, metadata)
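
For context, the file above only defines the abstract contract. The sketch below is not part of this commit: the StringKnowledgeSource name is hypothetical and the embedding step is omitted (here the storage layer is assumed to handle embedding on save). It only illustrates how a concrete source would wire load_content, _chunk_text, and save_documents together. Note that _chunk_text steps by chunk_size - chunk_overlap, so consecutive chunks share chunk_overlap characters of context.

from typing import Any, Dict

# Hypothetical concrete source, shown only to illustrate the base-class contract.
class StringKnowledgeSource(BaseKnowledgeSource):
    """Wraps an in-memory string as a knowledge source (illustrative only)."""

    content: str = ""

    def load_content(self) -> Dict[Any, str]:
        # No I/O needed; key the text by a simple label.
        return {"string": self.content}

    def add(self) -> None:
        # Split each piece of loaded text into overlapping chunks...
        for text in self.load_content().values():
            self.chunks.extend(self._chunk_text(text))
        # ...then persist the chunks (with metadata) via the configured storage.
        self.save_documents(metadata=self.metadata)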