Implement async iterator for OpenAI stream #14920
Conversation
Force-pushed from 8cdad62 to 86207e4, then from 86207e4 to 8520a43.
- adds openai async iterator test cases
- adapts end-of-stream handling
- logs aborts with debug severity
Hi @colin-grant-work! Thank you for your great work ❤️

Although the problematic conditions did not (yet) occur in the Theia codebase, an adopter invoking the API themselves might have run into them. In general, the code is also much more cleanly encapsulated this way. So, thanks for the initiative!

I added a number of unit tests and noticed a weird issue with the 'end' handling. We always handed over `finalChatCompletion`, but that is actually a promise. The code did not handle it correctly and therefore emitted an `undefined` before completing.

I don't know why we ever used `finalChatCompletion` in the first place: the result of that promise is the full completion, but we had already handled the chunks anyway. So I don't think we need to handle it at all.
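A minimal sketch of the end-handling issue, using hypothetical names and shapes rather than the actual Theia or OpenAI client code:

```ts
// Hypothetical shapes for illustration only, not the actual APIs.
interface ChunkRunner {
    on(event: 'chunk', cb: (chunk: string) => void): void;
    on(event: 'end', cb: () => void): void;
    finalChatCompletion: Promise<string>;
}

function pipe(runner: ChunkRunner, push: (value: string) => void, done: () => void): void {
    runner.on('chunk', chunk => push(chunk));
    runner.on('end', () => {
        // Buggy variant: `finalChatCompletion` is a promise, so pushing it
        // without awaiting delivers a Promise object downstream, where the
        // expected value reads as `undefined`:
        //
        //   push(runner.finalChatCompletion as unknown as string);
        //
        // Every chunk was already pushed above, so the final completion
        // carries no new information and can simply be dropped:
        done();
    });
}
```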
Please take a look and let me know whether you agree and are fine with the code change. Thanks!
@sdirix, thanks for adding the tests for such a variety of cases. That certainly helps increase confidence that things will work correctly. I seem to have left in one log statement that I didn't mean to, so I'll remove that, but otherwise it looks good to me.
Force-pushed from be72949 to 8c4c0bf.
Merged b0f91ae into eclipse-theia:master.
What it does
Fixes #14902 with an alternative to #14914, based on the implementation of the chunk iterator in the OpenAI client itself. I couldn't find a clean way to piggyback off of that implementation while adding messages, as we want to, but it did seem worth following. The main difference between the existing implementation and this one is that `emitted` calls become `once` calls. Internally, `emitted` delegates to `once`, so I think we should still get the same results, but please correct me if I'm wrong.
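For context, here is a minimal, hypothetical sketch of the chunk-iterator pattern described above: an event-based stream exposed as an async iterable, buffering chunks that arrive before they are requested and parking pending reads until the next chunk or the end of the stream. The `toAsyncIterable` name and the `'chunk'`/`'end'` events are assumptions for illustration, not the actual OpenAI client or Theia code.

```ts
import { EventEmitter } from 'node:events';

// Expose an event-based chunk stream as an AsyncIterable (sketch).
function toAsyncIterable<T>(emitter: EventEmitter): AsyncIterable<T> {
    const buffered: T[] = [];                                   // chunks nobody asked for yet
    const waiting: Array<(r: IteratorResult<T>) => void> = [];  // reads waiting for a chunk
    let ended = false;

    emitter.on('chunk', (chunk: T) => {
        const read = waiting.shift();
        if (read) {
            read({ value: chunk, done: false });
        } else {
            buffered.push(chunk);
        }
    });
    emitter.on('end', () => {
        ended = true;
        // Complete any reads still parked when the stream ends.
        for (const read of waiting.splice(0)) {
            read({ value: undefined, done: true });
        }
    });

    return {
        [Symbol.asyncIterator](): AsyncIterator<T> {
            return {
                next(): Promise<IteratorResult<T>> {
                    if (buffered.length > 0) {
                        return Promise.resolve({ value: buffered.shift()!, done: false });
                    }
                    if (ended) {
                        return Promise.resolve({ value: undefined, done: true });
                    }
                    return new Promise(resolve => waiting.push(resolve));
                }
            };
        }
    };
}

// Usage:
//   for await (const chunk of toAsyncIterable<string>(stream)) { ... }
```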
How to test
Follow-ups
Breaking changes
Attribution
Review checklist
Reminder for reviewers