Merge branch 'main' of github.com:crewAIInc/crewAI into lorenze/agent-query-for-knowledge
@@ -504,7 +504,7 @@ This example demonstrates how to:

CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.

-Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
+Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.

## How CrewAI Compares
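To illustrate the local-model path mentioned above, here is a minimal sketch of pointing an agent at an Ollama-served model through the `LLM` class; the model name and base URL are assumptions for a local setup, not requirements.

```python
from crewai import Agent, LLM

# A local model served by Ollama (model name and URL are assumptions for your setup).
local_llm = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434",
)

researcher = Agent(
    role="Researcher",
    goal="Find and summarize relevant information",
    backstory="A meticulous analyst who cites sources.",
    llm=local_llm,  # use the local model instead of the default OpenAI connection
)
```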
@@ -27,7 +27,7 @@ A crew in crewAI represents a collaborative group of agents working together to

| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
-| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defautls to `None`. |
+| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
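For reference, a hedged sketch of how a few of the attributes above might be passed to a `Crew`; the agent, task, and callback names are placeholders assumed to be defined elsewhere.

```python
from crewai import Crew, Process

def log_step(step):
    # Called after each agent step; replace with real logging.
    print("step:", step)

crew = Crew(
    agents=[researcher, analyst],          # assumed agents defined elsewhere
    tasks=[research_task, analysis_task],  # assumed tasks defined elsewhere
    process=Process.sequential,
    step_callback=log_step,
    task_callback=lambda output: print("task done:", output),
    output_log_file="crew_log.json",       # JSON logs because the name ends in .json
    planning=True,                         # plan tasks before each crew iteration
)
```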
@@ -246,7 +246,7 @@ print(f"Token Usage: {crew_output.token_usage}")

You can see real time log of the crew execution, by setting `output_log_file` as a `True(Boolean)` or a `file_name(str)`. Supports logging of events as both `file_name.txt` and `file_name.json`.
In case of `True(Boolean)` will save as `logs.txt`.

-In case of `output_log_file` is set as `False(Booelan)` or `None`, the logs will not be populated.
+In case of `output_log_file` is set as `False(Boolean)` or `None`, the logs will not be populated.

```python Code
# Save crew logs
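# A hedged sketch with assumed agent/task names: a crew configured to write
# its execution log. Passing True writes logs.txt; a filename ending in .json
# writes JSON-formatted events instead.
crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    output_log_file="crew_execution.json",
)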
@@ -35,7 +35,8 @@ Let's get started building your first crew!

Before starting, make sure you have:

1. Installed CrewAI following the [installation guide](/installation)
-2. Set up your OpenAI API key in your environment variables
+2. Set up your LLM API key in your environment, following the [LLM setup
+   guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python

## Step 1: Create a New CrewAI Project
@@ -92,7 +93,8 @@ For our research crew, we'll create two agents:

1. A **researcher** who excels at finding and organizing information
2. An **analyst** who can interpret research findings and create insightful reports

-Let's modify the `agents.yaml` file to define these specialized agents:
+Let's modify the `agents.yaml` file to define these specialized agents. Be sure
+to set `llm` to the provider you are using.

```yaml
# src/research_crew/config/agents.yaml
@@ -107,7 +109,7 @@ researcher:
    finding relevant information from various sources. You excel at
    organizing information in a clear and structured manner, making
    complex topics accessible to others.
-  llm: openai/gpt-4o-mini
+  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...

analyst:
  role: >
@@ -120,7 +122,7 @@ analyst:
    and technical writing. You have a talent for identifying patterns
    and extracting meaningful insights from research data, then
    communicating those insights effectively through well-crafted reports.
-  llm: openai/gpt-4o-mini
+  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
```

Notice how each agent has a distinct role, goal, and backstory. These elements aren't just descriptive - they actively shape how the agent approaches its tasks. By crafting these carefully, you can create agents with specialized skills and perspectives that complement each other.
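For orientation, a hedged sketch of how these YAML entries are typically wired into the crew class; the file path and class name follow the generated project scaffold and are assumptions here.

```python
# src/research_crew/crew.py (sketch; names assumed from the generated scaffold)
from crewai import Agent
from crewai.project import CrewBase, agent

@CrewBase
class ResearchCrew:
    """Research crew whose agents are loaded from config/agents.yaml."""

    @agent
    def researcher(self) -> Agent:
        # "researcher" matches the key in agents.yaml, including its llm setting
        return Agent(config=self.agents_config["researcher"], verbose=True)

    @agent
    def analyst(self) -> Agent:
        return Agent(config=self.agents_config["analyst"], verbose=True)
```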
@@ -282,12 +284,12 @@ This script prepares the environment, specifies our research topic, and kicks of

Create a `.env` file in your project root with your API keys:

-```
-OPENAI_API_KEY=your_openai_api_key
+```sh
SERPER_API_KEY=your_serper_api_key
+# Add your provider's API key here too.
```

-You can get a Serper API key from [Serper.dev](https://serper.dev/).
+See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).

## Step 8: Install Dependencies
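If you want to check those keys before kicking off the crew, here is a small hedged sketch; it assumes python-dotenv is available, and the key names simply mirror the `.env` entries above plus whichever provider key you chose.

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read .env from the project root

for key in ("SERPER_API_KEY",):  # add your provider's key, e.g. "OPENAI_API_KEY"
    if not os.getenv(key):
        raise RuntimeError(f"Missing {key}; add it to your .env file")
```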
@@ -45,7 +45,8 @@ Let's dive in and build your first flow!

Before starting, make sure you have:

1. Installed CrewAI following the [installation guide](/installation)
-2. Set up your OpenAI API key in your environment variables
+2. Set up your LLM API key in your environment, following the [LLM setup
+   guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python

## Step 1: Create a New CrewAI Flow Project
@@ -107,6 +108,8 @@ Now, let's modify the generated files for the content writer crew. We'll set up

1. First, update the agents configuration file to define our content creation team:

+Remember to set `llm` to the provider you are using.

```yaml
# src/guide_creator_flow/crews/content_crew/config/agents.yaml
content_writer:
@@ -119,7 +122,7 @@ content_writer:
    You are a talented educational writer with expertise in creating clear, engaging
    content. You have a gift for explaining complex concepts in accessible language
    and organizing information in a way that helps readers build their understanding.
-  llm: openai/gpt-4o-mini
+  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...

content_reviewer:
  role: >
@@ -132,7 +135,7 @@ content_reviewer:
    content. You have an eye for detail, clarity, and coherence. You excel at
    improving content while maintaining the original author's voice and ensuring
    consistent quality across multiple sections.
-  llm: openai/gpt-4o-mini
+  llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
```

These agent definitions establish the specialized roles and perspectives that will shape how our AI agents approach content creation. Notice how each agent has a distinct purpose and expertise.
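To show where these two agents end up being used, a hedged sketch of a flow step invoking the content crew; the class name, module path, and input keys follow the tutorial's scaffold and are assumptions here.

```python
from crewai.flow.flow import Flow, listen, start
from guide_creator_flow.crews.content_crew.content_crew import ContentCrew  # assumed module path

class GuideCreatorFlow(Flow):
    @start()
    def plan_section(self):
        # Illustrative inputs; the real flow derives these from the guide outline.
        return {"section_title": "Getting Started",
                "section_description": "A beginner-friendly overview of the topic"}

    @listen(plan_section)
    def write_section(self, section):
        # ContentCrew wraps the writer and reviewer agents defined in agents.yaml
        result = ContentCrew().crew().kickoff(inputs=section)
        return result.raw  # the finished section text
```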
@@ -441,10 +444,15 @@ This is the power of flows - combining different types of processing (user inter

## Step 6: Set Up Your Environment Variables

-Create a `.env` file in your project root with your API keys:
+Create a `.env` file in your project root with your API keys. See the [LLM setup
+guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.

-```
+```sh .env
OPENAI_API_KEY=your_openai_api_key
+# or
+GEMINI_API_KEY=your_gemini_api_key
+# or
+ANTHROPIC_API_KEY=your_anthropic_api_key
```

## Step 7: Install Dependencies
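As a hedged illustration of why only one of those keys is needed, a small sketch that picks a model id based on whichever key is present; the model ids are examples, not requirements.

```python
import os

# Choose a model id based on whichever provider key is set (example ids only).
if os.getenv("OPENAI_API_KEY"):
    model_id = "openai/gpt-4o-mini"
elif os.getenv("GEMINI_API_KEY"):
    model_id = "gemini/gemini-2.0-flash"
elif os.getenv("ANTHROPIC_API_KEY"):
    model_id = "anthropic/claude-3-5-sonnet-20241022"
else:
    raise RuntimeError("Set one provider API key in your .env file")
```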
@@ -547,7 +555,10 @@ Let's break down the key components of flows to help you understand how to build

Flows allow you to make direct calls to language models when you need simple, structured responses:

```python
-llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
+llm = LLM(
+    model="model-id-here",  # gpt-4o, gemini-2.0-flash, anthropic/claude...
+    response_format=GuideOutline
+)
response = llm.call(messages=messages)
```
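For context, a hedged sketch of the surrounding pieces: a simple Pydantic stand-in for `GuideOutline` and a `messages` list in the chat format `LLM.call` accepts. The field names and prompts are illustrative, not the tutorial's actual definitions.

```python
from pydantic import BaseModel
from crewai import LLM

class GuideOutline(BaseModel):
    # Illustrative fields only; the tutorial's real model may differ.
    title: str
    sections: list[str]

messages = [
    {"role": "system", "content": "You design concise guide outlines."},
    {"role": "user", "content": "Outline a beginner guide to CrewAI flows."},
]

llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
response = llm.call(messages=messages)  # the reply is shaped by response_format
print(response)
```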
@@ -180,8 +180,9 @@ Follow the steps below to get Crewing! 🚣♂️
</Step>
<Step title="Set your environment variables">
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
-- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
+- The configuration for your choice of model, such as an API key. See the
+  [LLM setup guide](/concepts/llms#setting-up-your-llm) to learn how to configure models from any provider.
</Step>
<Step title="Lock and install the dependencies">
- Lock the dependencies and install them by using the CLI command:
@@ -317,7 +318,7 @@ email_summarizer:
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
-  llm: openai/gpt-4o
+  llm: provider/model-id # Add your choice of model here
```

<Tip>
@@ -8,10 +8,10 @@ icon: language

## Description

-This tool is used to convert natural language to SQL queries. When passsed to the agent it will generate queries and then use them to interact with the database.
+This tool is used to convert natural language to SQL queries. When passed to the agent it will generate queries and then use them to interact with the database.

This enables multiple workflows like having an Agent to access the database fetch information based on the goal and then use the information to generate a response, report or any other output.
-Along with that proivdes the ability for the Agent to update the database based on its goal.
+Along with that provides the ability for the Agent to update the database based on its goal.

**Attention**: Make sure that the Agent has access to a Read-Replica or that is okay for the Agent to run insert/update queries on the database.
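As a point of reference, a hedged sketch of handing this tool to an agent; the connection string is a placeholder and the setup follows the usual crewai_tools pattern.

```python
from crewai import Agent
from crewai_tools import NL2SQLTool

# Placeholder connection string; point it at a read replica if the agent
# should not be allowed to run insert/update queries.
nl2sql = NL2SQLTool(db_uri="postgresql://user:password@localhost:5432/mydb")

data_agent = Agent(
    role="Data Analyst",
    goal="Answer questions by querying the database",
    backstory="An analyst who translates questions into SQL.",
    tools=[nl2sql],
)
```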
@@ -81,4 +81,4 @@ The Tool provides endless possibilities on the logic of the Agent and how it can

```md
DB -> Agent -> ... -> Agent -> DB
-```
+```