Joao Moura cdc4b43620 feat(llm): add tool loop support to LLM.call() with structured LLMResult
When LLM.call() is invoked with both tools and available_functions,
it now runs a tool loop — calling the model, executing requested tools,
and feeding results back — until the model responds with text or
max_iterations is reached.
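
The loop described above can be sketched as follows. This is a minimal illustration, not the actual crewAI implementation: the `call_model` callback, the message-dict shape, and the `tool_call` response key are all assumptions made for the example.

```python
from typing import Any, Callable

def tool_loop(
    call_model: Callable[[list[dict]], dict],
    available_functions: dict[str, Callable[..., Any]],
    messages: list[dict],
    max_iterations: int = 10,
) -> str:
    """Call the model, execute any tool it requests, feed the result
    back, and stop once the model answers with plain text (or the
    iteration budget is exhausted)."""
    for _ in range(max_iterations):
        response = call_model(messages)
        if "tool_call" not in response:
            # Plain-text answer: the loop is done.
            return response["content"]
        name = response["tool_call"]["name"]
        args = response["tool_call"]["arguments"]
        # Execute the requested tool and append its result for the next turn.
        result = available_functions[name](**args)
        messages.append({"role": "tool", "name": name, "content": str(result)})
    raise RuntimeError("max_iterations reached without a final text answer")
```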

Changes:
- New llm_result.py with LLMResult and ToolCallRecord models
- LLM.call() returns LLMResult (structured) when tools are provided,
  str when not (fully backwards compatible)
- Tool loop with max_iterations parameter (default 10)
- Cost estimation based on model name and token counts
- Comprehensive test suite (17 tests, all mocked)
- Exports LLMResult and ToolCallRecord from crewai.__init__
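
One plausible shape for the structured result types named above, sketched with dataclasses for illustration; the field names here are assumptions, not the exact crewAI models.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCallRecord:
    """One executed tool call inside the loop (assumed fields)."""
    name: str
    arguments: dict
    result: str

@dataclass
class LLMResult:
    """Final text plus a trace of tool calls and usage (assumed fields)."""
    content: str
    tool_calls: list[ToolCallRecord] = field(default_factory=list)
    prompt_tokens: int = 0
    completion_tokens: int = 0
    estimated_cost: float = 0.0
```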
2026-04-25 15:22:18 -07:00