Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Devin AI | ee335c7862 | Improve test documentation and coverage for LLM import (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-03-13 01:13:14 +00:00 |
| Devin AI | 6be47ee64e | Fix issue #2353: Add tests for importing LLM from crewai (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-03-13 01:07:01 +00:00 |
3 changed files with 67 additions and 53 deletions

View File

@@ -1,51 +0,0 @@
# Fix for Issue #2356: Missing Parentheses in Flow Documentation
## Issue
In the "first-flow.mdx" documentation, there's an error in the code example where a task method reference is missing parentheses:
```python
@task
def review_section_task(self) -> Task:
return Task(
config=self.tasks_config['review_section_task'],
context=[self.write_section_task] # Missing parentheses
)
```
This causes an AttributeError when running `crewai flow kickoff` because the Flow system requires explicit method calls with parentheses.
## Error Message
When users follow the documentation and use the code as shown, they encounter this error:
```
AttributeError: 'function' object has no attribute 'get'
```
## Root Cause
The core issue is that the Flow system in CrewAI requires explicit method calls with parentheses when processing context tasks. This is implemented in the `_map_task_variables` method in `crew_base.py`:
```python
if context_list := task_info.get("context"):
self.tasks_config[task_name]["context"] = [
tasks[context_task_name]() for context_task_name in context_list
]
```
When users follow the documentation and use `context=[self.write_section_task]` without parentheses, they get an AttributeError because a function object doesn't have a `get` attribute.
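To make the failure mode easy to see outside the framework, here is a minimal, self-contained sketch; the `write_section_task` stand-in below is hypothetical and simply returns a dict where CrewAI would return a `Task`, but calling `.get` on the function object fails in the same way:
```python
# Illustrative sketch only: a plain dict stands in for a configured Task,
# and the .get() call mimics downstream code that expects a task-like object.
def write_section_task():
    return {"description": "Write the section"}

with_parentheses = [write_section_task()]   # list contains the task-like dict
without_parentheses = [write_section_task]  # list contains the function itself

print(with_parentheses[0].get("description"))  # -> "Write the section"

try:
    without_parentheses[0].get("description")
except AttributeError as exc:
    print(exc)  # 'function' object has no attribute 'get'
```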
## Fix
The correct code should be:
```python
@task
def review_section_task(self) -> Task:
return Task(
config=self.tasks_config['review_section_task'],
context=[self.write_section_task()] # Added parentheses
)
```
## Verification
I've created a minimal reproducible example that demonstrates both the error and the fix. The error occurs because in `crew_base.py`, the `_map_task_variables` method explicitly requires method calls with parentheses when processing context tasks.
## Documentation Update Needed
The documentation at docs.crewai.com/guides/flows/first-flow needs to be updated to show the correct syntax with parentheses.

View File

@@ -232,7 +232,7 @@ class ContentCrew():
     def review_section_task(self) -> Task:
         return Task(
             config=self.tasks_config['review_section_task'],
-            context=[self.write_section_task()]
+            context=[self.write_section_task]
         )
 
     @crew
@@ -601,4 +601,4 @@ Now that you've built your first flow, you can:
<Check>
Congratulations! You've successfully built your first CrewAI Flow that combines regular code, direct LLM calls, and crew-based processing to create a comprehensive guide. These foundational skills enable you to create increasingly sophisticated AI applications that can tackle complex, multi-stage problems through a combination of procedural control and collaborative intelligence.
</Check>
</Check>

tests/test_import_llm.py (new file, 65 additions)
View File

@@ -0,0 +1,65 @@
"""Tests for LLM import functionality."""
import pytest
@pytest.fixture
def bedrock_model_path():
"""Fixture providing the standard Bedrock model path for testing."""
return "bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
def test_import_llm_from_crewai():
"""
Test LLM import functionality from crewai package.
Verifies:
- Direct import of LLM class from crewai package
- Import statement succeeds without raising exceptions
"""
try:
from crewai import LLM
assert LLM is not None
except ImportError as e:
pytest.fail(f"Failed to import LLM from crewai: {e}")
def test_bedrock_llm_creation(bedrock_model_path):
"""
Test that a Bedrock LLM can be created with the correct model.
Verifies:
- LLM can be instantiated with a Bedrock model
- The model property is correctly set to the Bedrock model path
- No exceptions are raised during instantiation
"""
try:
from crewai import LLM
# Just test the object creation, not the actual API call
bedrock_llm = LLM(model=bedrock_model_path)
assert bedrock_llm is not None
assert bedrock_llm.model == bedrock_model_path
except Exception as e:
pytest.fail(f"Failed to create Bedrock LLM: {e}")
def test_llm_with_invalid_model():
"""
Test LLM creation with an invalid model name format.
Verifies:
- LLM can be instantiated with any model string
- No validation errors occur during instantiation
- The model property is correctly set to the provided string
"""
try:
from crewai import LLM
invalid_model = "invalid-model-name"
llm = LLM(model=invalid_model)
assert llm is not None
assert llm.model == invalid_model
except Exception as e:
pytest.fail(f"Failed to create LLM with invalid model: {e}")