---
title: LiteAgent
description: A lightweight, single-purpose agent for simple autonomous tasks within the CrewAI framework.
icon: feather
---

## Overview
A `LiteAgent` is a streamlined version of CrewAI's Agent, designed for simpler, standalone tasks that don't require the full complexity of a crew-based workflow. It's perfect for quick automations, single-purpose tasks, or when you need a lightweight solution.

<Tip>
Think of a LiteAgent as a specialized worker that excels at individual tasks.
While regular Agents are team players in a crew, LiteAgents are solo
performers optimized for specific operations.
</Tip>
## LiteAgent Attributes

| Attribute                        | Parameter         | Type                   | Description                                                      |
| :------------------------------- | :---------------- | :--------------------- | :--------------------------------------------------------------- |
| **Role**                         | `role`            | `str`                  | Defines the agent's function and expertise.                      |
| **Goal**                         | `goal`            | `str`                  | The specific objective that guides the agent's actions.          |
| **Backstory**                    | `backstory`       | `str`                  | Provides context and personality to the agent.                   |
| **LLM** _(optional)_             | `llm`             | `Union[str, LLM, Any]` | Language model powering the agent. Defaults to "gpt-4".          |
| **Tools** _(optional)_           | `tools`           | `List[BaseTool]`       | Capabilities available to the agent. Defaults to an empty list.  |
| **Verbose** _(optional)_         | `verbose`         | `bool`                 | Enable detailed execution logs. Defaults to False.               |
| **Response Format** _(optional)_ | `response_format` | `Type[BaseModel]`      | Pydantic model for structured output.                            |
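
Only `role`, `goal`, and `backstory` are required; everything else falls back to the defaults listed above. A minimal sketch, reusing the import path from the examples below:

```python
from crewai.lite_agent import LiteAgent

# Only the three required attributes are set; llm, tools, verbose,
# and response_format all keep the defaults from the table above.
summarizer = LiteAgent(
    role="Summarizer",
    goal="Condense text into a few bullet points",
    backstory="You distill long documents into crisp summaries.",
)

result = summarizer.kickoff("Summarize: CrewAI agents collaborate on tasks.")
print(result.raw)
```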
## Creating a LiteAgent
Here's a simple example of creating and using a standalone LiteAgent:

```python
from typing import List, cast

from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field

from crewai.lite_agent import LiteAgent


# Define a structured output format
class MovieReview(BaseModel):
    title: str = Field(description="The title of the movie")
    rating: float = Field(description="Rating out of 10")
    pros: List[str] = Field(description="List of positive aspects")
    cons: List[str] = Field(description="List of negative aspects")


# Create a LiteAgent
critic = LiteAgent(
    role="Movie Critic",
    goal="Provide insightful movie reviews",
    backstory="You are an experienced film critic known for balanced, thoughtful reviews.",
    tools=[SerperDevTool()],
    verbose=True,
    response_format=MovieReview,
)

# Use the agent
query = """
Review the movie 'Inception'. Include:
1. Your rating out of 10
2. Key positive aspects
3. Areas that could be improved
"""

result = critic.kickoff(query)

# Access the structured output
review = cast(MovieReview, result.pydantic)
print(f"\nMovie Review: {review.title}")
print(f"Rating: {review.rating}/10")
print("\nPros:")
for pro in review.pros:
    print(f"- {pro}")
print("\nCons:")
for con in review.cons:
    print(f"- {con}")
```
This example demonstrates the core features of a LiteAgent:

- Structured output using Pydantic models
- Tool integration with SerperDevTool
- Simple execution with `kickoff()`
- Easy access to both raw and structured results, as shown below
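
Continuing the movie example, both forms live on the same result object (a small illustration using the fields the sections below rely on):

```python
print(result.raw)       # The unstructured LLM response text
print(result.pydantic)  # The parsed MovieReview instance
```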
## Using LiteAgent in a Flow
For more complex scenarios, you can integrate LiteAgents into a Flow. Here's an example of a market research flow:

```python
import asyncio
from typing import List, Optional

from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field

from crewai.flow.flow import Flow, listen, start
from crewai.lite_agent import LiteAgent


# Define a structured output format
class MarketAnalysis(BaseModel):
    key_trends: List[str] = Field(description="List of identified market trends")
    market_size: str = Field(description="Estimated market size")
    competitors: List[str] = Field(description="Major competitors in the space")


# Define flow state
class MarketResearchState(BaseModel):
    product: str = ""
    analysis: Optional[MarketAnalysis] = None


# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
    @start()
    def initialize_research(self):
        # kickoff(inputs=...) has already populated the state
        print(f"Starting market research for {self.state.product}")

    @listen(initialize_research)
    async def analyze_market(self):
        # Create a LiteAgent for market research
        analyst = LiteAgent(
            role="Market Research Analyst",
            goal=f"Analyze the market for {self.state.product}",
            backstory="You are an experienced market analyst with expertise in "
            "identifying market trends and opportunities.",
            tools=[SerperDevTool()],
            verbose=True,
            response_format=MarketAnalysis,
        )

        # Define the research query
        query = f"""
        Research the market for {self.state.product}. Include:
        1. Key market trends
        2. Market size
        3. Major competitors

        Format your response according to the specified structure.
        """

        # Execute the analysis
        result = await analyst.kickoff_async(query)
        self.state.analysis = result.pydantic
        return result.pydantic

    @listen(analyze_market)
    def present_results(self):
        analysis = self.state.analysis
        print("\nMarket Analysis Results")
        print("=======================")

        print("\nKey Market Trends:")
        for trend in analysis.key_trends:
            print(f"- {trend}")

        print(f"\nMarket Size: {analysis.market_size}")

        print("\nMajor Competitors:")
        for competitor in analysis.competitors:
            print(f"- {competitor}")


# Usage example
async def run_flow():
    flow = MarketResearchFlow()
    result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
    return result


# Run the flow
if __name__ == "__main__":
    asyncio.run(run_flow())
```

Note that `kickoff_async(inputs={...})` seeds the flow's state, which is why `initialize_research` can read `self.state.product` directly instead of taking the product as a parameter.
## Key Features

### 1. Simplified Setup

Unlike regular Agents, LiteAgents are designed for quick setup and standalone operation. They don't require crew configuration or task management.
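
For comparison, here's a rough sketch of the same review job done standalone versus as a single-agent crew (illustrative only; the crew version follows the standard Agent/Task/Crew pattern rather than anything LiteAgent-specific):

```python
from crewai import Agent, Crew, Task
from crewai.lite_agent import LiteAgent

# Standalone: construct the agent and call it, no crew or task objects
lite = LiteAgent(
    role="Movie Critic",
    goal="Provide insightful movie reviews",
    backstory="You are an experienced film critic.",
)
lite_result = lite.kickoff("Review the movie 'Inception'.")

# Crew-based: the same job needs an Agent, a Task, and a Crew
agent = Agent(
    role="Movie Critic",
    goal="Provide insightful movie reviews",
    backstory="You are an experienced film critic.",
)
task = Task(
    description="Review the movie 'Inception'.",
    expected_output="A balanced movie review",
    agent=agent,
)
crew_result = Crew(agents=[agent], tasks=[task]).kickoff()
```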

### 2. Structured Output

LiteAgents support Pydantic models for response formatting, making it easy to get structured, type-safe data from your agent's operations. See the Response Formatting section below for complete examples.

### 3. Tool Integration

Just like regular Agents, LiteAgents can use tools to enhance their capabilities:

```python
from crewai_tools import CalculatorTool, SerperDevTool

from crewai.lite_agent import LiteAgent

agent = LiteAgent(
    role="Research Assistant",
    goal="Find and analyze information",
    backstory="You are a meticulous researcher who verifies every claim.",
    tools=[SerperDevTool(), CalculatorTool()],
    verbose=True,
)
```

### 4. Async Support

LiteAgents support asynchronous execution through the `kickoff_async` method, making them suitable for non-blocking operations in your application.
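
For instance, a minimal sketch (reusing the `critic` agent from the first example) that runs two reviews concurrently:

```python
import asyncio


async def review_movies():
    # kickoff_async returns awaitables, so several queries can run concurrently
    return await asyncio.gather(
        critic.kickoff_async("Review the movie 'Inception'."),
        critic.kickoff_async("Review the movie 'Interstellar'."),
    )


inception, interstellar = asyncio.run(review_movies())
```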

## Response Formatting

LiteAgents support structured output through Pydantic models using the `response_format` parameter. This feature ensures type safety and consistent output structure, making it easier to work with agent responses in your application.

### Basic Usage

```python
from pydantic import BaseModel, Field

from crewai.lite_agent import LiteAgent


class SearchResult(BaseModel):
    title: str = Field(description="The title of the found content")
    summary: str = Field(description="A brief summary of the content")
    relevance_score: float = Field(description="Relevance score from 0 to 1")


agent = LiteAgent(
    role="Search Specialist",
    goal="Find and summarize relevant information",
    backstory="You are an expert at finding and distilling information.",
    response_format=SearchResult,
)

result = agent.kickoff("Find information about quantum computing")
print(f"Title: {result.pydantic.title}")
print(f"Summary: {result.pydantic.summary}")
print(f"Relevance: {result.pydantic.relevance_score}")
```

### Handling Responses

When using `response_format`, the agent's response will be available in two forms:

1. **Raw Response**: Access the unstructured string response

```python
result = agent.kickoff("Analyze the market")
print(result.raw)  # Original LLM response
```

2. **Structured Response**: Access the parsed Pydantic model

```python
print(result.pydantic)  # Parsed response as a Pydantic model
print(result.pydantic.model_dump())  # Convert to a dictionary
```
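
If the raw output ever fails to parse into the model, relying on the structured form alone can break. A defensive sketch, reusing `SearchResult` from the Basic Usage example (the exact failure behavior is an assumption, not something documented above):

```python
parsed = result.pydantic
if isinstance(parsed, SearchResult):
    print(parsed.title)
else:
    # Fall back to the raw text when no structured result is available
    print(result.raw)
```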