fix(openai): parse finish_reason from chat completion stream #1526
Merged
Important

Parse `finish_reason` from chat completion streams in `_extract_streamed_openai_response()` and include it in metadata if present.

- Parses `finish_reason` from chat completion streams in `_extract_streamed_openai_response()` in `openai.py`.
- Includes `finish_reason` in the returned metadata if not `None`.
- Updates `_extract_streamed_openai_response()` to capture `finish_reason` from `choices` in response chunks.
- Updates `_extract_streamed_openai_response()` to include `finish_reason` in metadata.

This description was automatically generated for a4f65ba and will update as commits are pushed.
Disclaimer: Experimental PR review
Greptile Summary
This PR extracts `finish_reason` from streamed OpenAI chat completion responses and passes it as metadata to the Langfuse generation update. Previously, the 4th tuple element returned by `_extract_streamed_openai_response` was hardcoded to `None`, meaning `finish_reason` was lost during streaming. Now it is captured per-choice and wrapped in a metadata dict `{"finish_reason": finish_reason}`.

- Added `finish_reason` variable tracking in `_extract_streamed_openai_response`
- Captures `finish_reason` from each streamed choice (chat type only)
- `_extract_streamed_response_api_response` returns metadata

Confidence Score: 4/5
Important Files Changed
Adds `finish_reason` extraction from streamed chat completion chunks and passes it as metadata. The logic correctly preserves the value from the last chunk with choices, which is the standard behavior for OpenAI streaming.

Sequence Diagram
```mermaid
sequenceDiagram
    participant Client
    participant OpenAI as OpenAI API
    participant Extract as _extract_streamed_openai_response
    participant Update as _create_langfuse_update
    participant Langfuse as Langfuse Generation
    Client->>OpenAI: Chat completion (stream=True)
    loop For each streamed chunk
        OpenAI-->>Extract: chunk (delta, finish_reason=null)
        Extract->>Extract: Accumulate content
    end
    OpenAI-->>Extract: Final chunk (finish_reason="stop")
    Extract->>Extract: Capture finish_reason
    OpenAI-->>Extract: Usage chunk (choices=[])
    Extract->>Extract: finish_reason preserved (no choices)
    Extract-->>Update: (model, completion, usage, {finish_reason})
    Update->>Langfuse: generation.update(metadata={finish_reason})
```

Last reviewed commit: a4f65ba
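The final hand-off in the sequence, where the extracted tuple feeds the generation update, can be sketched as below. This is an assumption-laden stand-in for `_create_langfuse_update` (the function name and payload keys are illustrative): the point is simply that `metadata` is only attached when the extractor produced one.

```python
def build_generation_update(model, completion, usage, metadata):
    """Hypothetical sketch: merge extracted stream fields into an update payload.

    `metadata` is the dict produced by the extractor, e.g.
    {"finish_reason": "stop"}, or None if no finish_reason was seen.
    """
    payload = {"model": model, "output": completion}
    if usage is not None:
        payload["usage"] = usage
    if metadata is not None:
        # Attach the metadata dict only when present, so earlier behavior
        # (no metadata key at all) is kept for streams without finish_reason.
        payload["metadata"] = metadata
    return payload
```

This mirrors the diagram's `generation.update(metadata={finish_reason})` step without depending on the actual Langfuse client API.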