mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-10 00:28:31 +00:00
Feature: Documentation Site (#188)
This commit is contained in:
75
docs/how-to/Creating-a-Crew-and-kick-it-off.md
Normal file
@@ -0,0 +1,75 @@
# Get a crew working

Assembling a crew in CrewAI is like casting characters for a play. Each agent you create is a cast member with a unique part to play. When your crew is assembled, you'll give the signal, and they'll spring into action, each performing their role in the grand scheme of your project.

# Step 1: Assemble Your Agents

Start by creating your agents, each with its own role and backstory. These backstories add depth to the agents, influencing how they approach their tasks and interact with one another.

```python
from crewai import Agent

# Create a researcher agent
researcher = Agent(
  role='Senior Researcher',
  goal='Discover groundbreaking technologies',
  verbose=True,
  backstory='A curious mind fascinated by cutting-edge innovation and the potential to change the world, you know everything about tech.'
)

# Create a writer agent
writer = Agent(
  role='Writer',
  goal='Craft compelling stories about tech discoveries',
  verbose=True,
  backstory='A creative soul who translates complex tech jargon into engaging narratives for the masses, you write using simple words in a friendly and inviting tone that does not sound like AI.'
)
```

# Step 2: Define the Tasks

Outline the tasks that your agents need to tackle. These tasks are their missions, the specific objectives they need to achieve.

```python
from crewai import Task

# Task for the researcher
research_task = Task(
  description='Identify the next big trend in AI',
  agent=researcher  # Assigning the task to the researcher
)

# Task for the writer
write_task = Task(
  description='Write an article on AI advancements leveraging the research made.',
  agent=writer  # Assigning the task to the writer
)
```

# Step 3: Form the Crew

Bring your agents together into a crew. This is where you define the process they'll follow to complete their tasks.

```python
from crewai import Crew, Process

# Instantiate your crew
tech_crew = Crew(
  agents=[researcher, writer],
  tasks=[research_task, write_task],
  process=Process.sequential  # Tasks will be executed one after the other
)
```
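Conceptually, `Process.sequential` just means the tasks run one after the other, with each task able to build on what came before. A toy sketch of that idea (illustrative only, not CrewAI's actual implementation; the lambda "tasks" here are made up):

```python
# Toy sketch of a sequential process: run each task in order, feeding
# each result forward as context for the next. (Illustrative only,
# not CrewAI internals.)
def run_sequential(tasks):
    context = None
    for task in tasks:
        context = task(context)
    return context

final = run_sequential([
    lambda ctx: "research notes on AI trends",
    lambda ctx: f"article based on: {ctx}",
])
print(final)  # → article based on: research notes on AI trends
```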
# Step 4: Kick It Off

With the crew formed and the stage set, it's time to start the show. Kick off the process and watch as your agents collaborate to achieve their goals.

```python
# Begin task execution
result = tech_crew.kickoff()
print(result)
```

# Conclusion

Creating a crew and setting it into motion is a straightforward process in CrewAI. With each agent playing its part and a clear set of tasks, your AI ensemble is ready to take on any challenge. Remember, the richness of their backstories and the clarity of their goals will greatly enhance their performance and the outcomes of their collaboration.
66
docs/how-to/Customizing-Agents.md
Normal file
@@ -0,0 +1,66 @@
# Customizable Attributes

Customizing your AI agents is a cornerstone of creating an effective CrewAI team. Each agent can be tailored to fit the unique needs of your project, allowing for a dynamic and versatile AI workforce.

When you initialize an Agent, you can set various attributes that define its behavior and role within the crew:

- **Role**: The job title or function of the agent within your crew. This can be anything from 'Analyst' to 'Customer Service Rep'.
- **Goal**: What the agent is aiming to achieve. Goals should be aligned with the agent's role and the overall objectives of the crew.
- **Backstory**: A narrative that provides depth to the agent's character. This could include previous experience, motivations, or anything that adds context to their role.
- **Tools**: The abilities or methods the agent uses to complete tasks. This could be as simple as a 'search' function or as complex as a custom-built analysis tool.

# Understanding Tools in CrewAI

Tools in CrewAI are functions that empower agents to interact with the world around them. These can range from generic utilities like a search function to more complex ones like integrating with an external API. The integration with LangChain allows you to utilize a suite of ready-to-use tools such as [Google Serper](https://python.langchain.com/docs/integrations/tools/google_serper), which enables agents to perform web searches and gather data.
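At its core, a tool is just a named, described callable that an agent can invoke. A minimal conceptual sketch of that idea (illustrative only; in practice CrewAI uses LangChain's `Tool` class, and the tiny in-memory "index" below is made up):

```python
from dataclasses import dataclass
from typing import Callable

# Conceptual stand-in for a tool: a name and a description (which the
# LLM uses to decide when to call it) plus the function doing the work.
@dataclass
class SimpleTool:
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, query: str) -> str:
        return self.func(query)

search = SimpleTool(
    name="search",
    description="Look up a term in a tiny in-memory index.",
    func=lambda q: {"crewai": "a framework for role-playing AI agents"}.get(q.lower(), "no result"),
)
print(search.run("CrewAI"))  # → a framework for role-playing AI agents
```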
# Customizing Agents and Tools

You can customize an agent by passing parameters when creating an instance. Each parameter tweaks how the agent behaves and interacts within the crew.

Customizing an agent's tools is particularly important. Tools define what an agent can do and how it interacts with tasks. For instance, if a task requires data analysis, assigning the agent data-related tools would be optimal.

When initializing your agents, you can equip them with a set of tools that enable them to perform their roles more effectively:

```python
import os

from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Set the API keys used by the LLM and the Serper search tool
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

search = GoogleSerperAPIWrapper()

# Create the tool to be used by the agent
serper_tool = Tool(
  name="Intermediate Answer",
  func=search.run,
  description="Useful for when you need to answer questions with search"
)

# Create an agent and assign the search tool
agent = Agent(
  role='Research Analyst',
  goal='Provide up-to-date market analysis',
  backstory='An expert analyst with a keen eye for market trends.',
  tools=[serper_tool]
)
```

## Delegation and Autonomy

One of the most powerful aspects of CrewAI agents is their ability to delegate tasks to one another. By default, each agent can delegate work or ask questions of anyone in the crew, but you can disable that by setting `allow_delegation` to `False`. This is particularly useful for straightforward agents that should execute their tasks in isolation.

```python
agent = Agent(
  role='Content Writer',
  goal='Write the most amazing content related to market trends and business.',
  backstory='An expert writer with many years of experience in market trends, stocks and all business related things.',
  allow_delegation=False
)
```

## Conclusion

Customization is what makes CrewAI powerful. By adjusting the attributes of each agent, you can ensure that your AI team is well-equipped to handle the challenges you set for them. Remember, the more thought you put into your agents' roles, goals, backstories, and tools, the more nuanced and effective their interactions and task execution will be.
76
docs/how-to/Human-Input-on-Execution.md
Normal file
@@ -0,0 +1,76 @@
# Human Input on Execution

Human input is important in many agent execution use cases. Humans are, after all, general intelligences, so they can be prompted to step in and provide extra details when necessary.
Using human input with crewAI is pretty straightforward, and you can do it through a LangChain tool.
Check the [LangChain Integration](https://python.langchain.com/docs/integrations/tools/human_tools) for more details:

Example:

```python
from crewai import Agent, Task, Crew, Process
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents import load_tools

search_tool = DuckDuckGoSearchRun()

# Loading the human tools
human_tools = load_tools(["human"])

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You are a Senior Research Analyst at a leading tech think tank.
  Your expertise lies in identifying emerging trends and technologies in AI and
  data science. You have a knack for dissecting complex data and presenting
  actionable insights.""",
  verbose=True,
  allow_delegation=False,
  # Passing human tools to the agent
  tools=[search_tool] + human_tools
)
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Tech Content Strategist, known for your insightful
  and engaging articles on technology and innovation. With a deep understanding of
  the tech industry, you transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True
)

# Create tasks for your agents
# Be explicit in the task description about asking for human feedback.
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Compile your findings in a detailed report.
  Make sure to check with the human if the draft is good before returning your Final Answer.
  Your final answer MUST be a full analysis report.""",
  agent=researcher
)

task2 = Task(
  description="""Using the insights from the researcher's report, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Aim for a narrative that captures the essence of these breakthroughs and their
  implications for the future.
  Your final answer MUST be the full blog post of at least 3 paragraphs.""",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=2
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```
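Under the hood, the `human` tool loaded above is conceptually just a function that relays the agent's question to the person at the keyboard and returns their answer. A toy sketch of that idea (illustrative only; `load_tools(["human"])` gives you the real, configurable LangChain version):

```python
# Toy sketch of a human-input tool: print the agent's question and
# return whatever the human types. (Illustrative only; the real
# LangChain "human" tool supports custom prompt and input functions.)
def human_tool(question: str, ask=input) -> str:
    print(question)
    return ask()

# The `ask` parameter lets you swap stdin for anything else, e.g. a stub:
answer = human_tool("Is the draft good?", ask=lambda: "yes, ship it")
print(answer)  # → yes, ship it
```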
192
docs/how-to/LLM-Connections.md
Normal file
@@ -0,0 +1,192 @@
# Connect CrewAI to LLMs

There are different types of connections.
Ollama is the recommended way to connect to local LLMs.
Azure uses a slightly different API and therefore has its own connection object.

crewAI is compatible with any of the LangChain LLM components. See this page for more information: https://python.langchain.com/docs/integrations/llms/

## Ollama

crewAI supports integration with local models through [Ollama](https://ollama.ai/) for enhanced flexibility and customization. This allows you to utilize your own models, which can be particularly useful for specialized tasks or data privacy concerns. We will cover other options for running local models in later sections; however, Ollama is the recommended tool for hosting local models when possible.

### Setting Up Ollama

- **Install Ollama**: Ensure that Ollama is properly installed in your environment. Follow the installation guide provided by Ollama for detailed instructions.
- **Configure Ollama**: Set up Ollama to work with your local model. You will probably need to [tweak the model using a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md). I'd recommend adding `Observation` as a stop word and playing with `top_p` and `temperature`.

### Integrating Ollama with CrewAI

- Instantiate the Ollama model: create an instance of the Ollama model, specifying the model name (and optionally the base URL) during instantiation. For example:

```python
from crewai import Agent
from langchain.llms import Ollama

ollama_openhermes = Ollama(model="openhermes")

# Pass the Ollama model to your agents: when creating agents within the
# CrewAI framework, pass the model as the `llm` argument. For instance:
# (SearchTools and BrowserTools are placeholders for your own tool classes.)
local_expert = Agent(
  role='Local Expert at this city',
  goal='Provide the BEST insights about the selected city',
  backstory="""A knowledgeable local guide with extensive information
  about the city, its attractions and customs""",
  tools=[
    SearchTools.search_internet,
    BrowserTools.scrape_and_summarize_website,
  ],
  llm=ollama_openhermes,  # Ollama model passed here
  verbose=True
)
```
## Open AI Compatible API Endpoints

When integrating various language models with CrewAI, the flexibility to switch between different API endpoints is a crucial feature. By putting configuration details such as `OPENAI_API_BASE_URL`, `OPENAI_API_KEY`, and `MODEL_NAME` in environment variables, you can easily transition between different APIs or models. For instance, if you want to switch from the standard OpenAI GPT model to a custom or alternative version, simply update the values of these environment variables.

The `OPENAI_API_BASE_URL` variable defines the base URL of the API to connect to, while `OPENAI_API_KEY` is used for authentication. Lastly, the `MODEL_NAME` variable specifies the particular language model to be used, such as "gpt-3.5-turbo" or any other available model.

This method offers an easy way to adapt the system to different models or platforms, be it for testing, scaling, or accessing different features. By centralizing the configuration in environment variables, the process becomes streamlined, reducing the need for extensive code modifications when switching between APIs or models.

```python
import os

from crewai import Agent
from dotenv import load_dotenv
from langchain.chat_models.openai import ChatOpenAI

load_dotenv()

default_llm = ChatOpenAI(
  openai_api_base=os.environ.get("OPENAI_API_BASE_URL", "https://api.openai.com/v1"),
  openai_api_key=os.environ.get("OPENAI_API_KEY", "NA"),
  model_name=os.environ.get("MODEL_NAME", "gpt-3.5-turbo")
)

# Create an agent and assign the LLM
example_agent = Agent(
  role='Example Agent',
  goal='Show how to assign a custom configured LLM',
  backstory='You hang out in the docs section of GitHub repos.',
  llm=default_llm
)
```
The following sections show example configuration settings for various OpenAI API compatible applications and services, with links to the relevant documentation for each.

### Open AI

OpenAI is the default LLM that will be used if you do not specify a value for the `llm` argument when creating an agent, and default values are used for `OPENAI_API_BASE_URL` and `MODEL_NAME`. So the only value you need to set when using the OpenAI endpoint is the API key from your account.

```sh
# Required
OPENAI_API_KEY="sk-..."

# Optional
OPENAI_API_BASE_URL=https://api.openai.com/v1
MODEL_NAME="gpt-3.5-turbo"
```
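These settings typically live in a `.env` file that `load_dotenv()` reads at startup. Conceptually, the parsing amounts to the sketch below (illustrative only; the python-dotenv package also handles comments within lines, quoting rules, `export` prefixes, and multiline values):

```python
import os

# Toy sketch of what load_dotenv() does: parse KEY=VALUE lines into a
# mapping (os.environ by default). Illustrative only; use python-dotenv
# in real code.
def load_env_text(text, env=None):
    env = env if env is not None else os.environ
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

settings = load_env_text('OPENAI_API_KEY="sk-..."\nMODEL_NAME="gpt-3.5-turbo"', env={})
print(settings["MODEL_NAME"])  # → gpt-3.5-turbo
```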
### FastChat

FastChat is an open platform for training, serving, and evaluating large language model based chatbots.

[GitHub](https://github.com/lm-sys/FastChat)

[API Documentation](https://github.com/lm-sys/FastChat?tab=readme-ov-file#api)

Configuration settings:
```sh
# Required
OPENAI_API_BASE_URL="http://localhost:8001/v1"
OPENAI_API_KEY=NA
MODEL_NAME='oh-2.5m7b-q51'
```

### LM Studio

Discover, download, and run local LLMs.

[lmstudio.ai](https://lmstudio.ai/)

Configuration settings:
```sh
# Required
OPENAI_API_BASE_URL="http://localhost:8000/v1"
OPENAI_API_KEY=NA
MODEL_NAME=NA
```

### Mistral API

Mistral AI's API endpoints.

[Mistral AI](https://mistral.ai/)

[Documentation](https://docs.mistral.ai/)

```sh
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE_URL=https://api.mistral.ai/v1
MODEL_NAME="mistral-small" # Check the documentation for available models
```

### text-gen-web-ui

A Gradio web UI for Large Language Models.

[GitHub](https://github.com/oobabooga/text-generation-webui)

[API Documentation](https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API)

Configuration settings:

```sh
# Required
OPENAI_API_BASE_URL=http://localhost:5000/v1
OPENAI_API_KEY=NA
MODEL_NAME=NA
```

## Other Inference API Endpoints

Other platforms, such as Anthropic, Azure, and HuggingFace, offer inference APIs as well. Unfortunately, their APIs are not compatible with the OpenAI API specification, so they require a slightly different configuration than the examples in the previous section.

### Azure Open AI

Azure-hosted OpenAI API endpoints have their own LLM component that needs to be imported from `langchain_openai`.

For more information, check out the LangChain documentation for [Azure OpenAI](https://python.langchain.com/docs/integrations/llms/azure_openai).

```python
import os

from crewai import Agent
from dotenv import load_dotenv
from langchain_openai import AzureChatOpenAI

load_dotenv()

default_llm = AzureChatOpenAI(
  openai_api_version=os.environ.get("AZURE_OPENAI_VERSION", "2023-07-01-preview"),
  azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt35"),
  azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT", "https://<your-endpoint>.openai.azure.com/"),
  api_key=os.environ.get("AZURE_OPENAI_KEY")
)

# Create an agent and assign the LLM
example_agent = Agent(
  role='Example Agent',
  goal='Show how to assign a custom configured LLM',
  backstory='You hang out in the docs section of GitHub repos.',
  llm=default_llm
)
```

Configuration settings:
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""
```