Fix streaming token usage tracking in OpenAI provider
Commit 558fc6eda4 by Devin AI
This commit fixes issue #4056, where token usage was always reported as 0 when a
crew was kicked off asynchronously with streaming enabled.

Root cause: The streaming completion methods (_handle_streaming_completion and
_ahandle_streaming_completion) in OpenAICompletion never called
_track_token_usage_internal(), unlike the non-streaming methods.
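
For context, the OpenAI Python SDK only reports usage for a streaming response when
stream_options={'include_usage': True} is passed, and then only on the final chunk;
a non-streaming response carries it on response.usage directly. A minimal sketch of
that difference (the model name and prompt are placeholders, not taken from the
codebase):

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": "Hello"}]

    # Non-streaming: usage is attached to the response object itself.
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.usage)  # CompletionUsage(prompt_tokens=..., completion_tokens=..., ...)

    # Streaming: usage only arrives on the last chunk, and only if requested.
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in stream:
        if chunk.usage is not None:  # every chunk except the final one has usage=None
            print(chunk.usage)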

Changes:
- Add stream_options={'include_usage': True} to streaming params so OpenAI API
  returns usage information in the final chunk
- Extract and track token usage from the final chunk in sync streaming
- Extract and track token usage from the final chunk in async streaming
- Extract and track token usage from final_completion in response_model paths
- Add _extract_chunk_token_usage method for ChatCompletionChunk objects (a
  simplified stand-in is sketched after this list)
- Add tests to verify streaming token usage tracking works correctly
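
A rough sketch of the shape of the fix in the sync streaming path, using simplified
stand-ins for the internal helpers (the real _extract_chunk_token_usage and
_track_token_usage_internal signatures may differ):

    from openai import OpenAI
    from openai.types.chat import ChatCompletionChunk


    def extract_chunk_token_usage(chunk: ChatCompletionChunk) -> dict | None:
        # Stand-in for _extract_chunk_token_usage: only the final chunk carries usage.
        if chunk.usage is None:
            return None
        return {
            "prompt_tokens": chunk.usage.prompt_tokens,
            "completion_tokens": chunk.usage.completion_tokens,
            "total_tokens": chunk.usage.total_tokens,
        }


    def handle_streaming_completion(client: OpenAI, messages: list[dict]) -> tuple[str, dict | None]:
        # Stand-in for _handle_streaming_completion: accumulate text, then record usage.
        stream = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model for the sketch
            messages=messages,
            stream=True,
            stream_options={"include_usage": True},  # makes the API emit usage on the last chunk
        )
        text_parts: list[str] = []
        usage: dict | None = None
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                text_parts.append(chunk.choices[0].delta.content)
            extracted = extract_chunk_token_usage(chunk)
            if extracted is not None:
                usage = extracted  # this is the step the old streaming code skipped
        # In the real provider this result would be passed to _track_token_usage_internal().
        return "".join(text_parts), usage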

Co-Authored-By: João <joao@crewai.com>
2025-12-10 08:28:31 +00:00