Reference implementation of defer and stream spec #3659
Conversation
@robrichard @IvanGoncharov I'm finding the types a bit hard to follow here. AsyncExecutionResult can be ExecutionResult or SubsequentExecutionResult, yet (as far as I can tell) all of the fields of AsyncExecutionResult also appear on ExecutionResult itself, and I don't understand why.

This would make a lot more sense to me if there were a "non incremental delivery" return value that only has data/errors/extensions at the top level, and an "incremental delivery" return value that only has data/errors/extensions nested inside "incremental". Am I missing something? I'll take a look at the source too to see if that helps me understand it better.

EDITED FOLLOWUP: OK, it looks like the reason that all the fields of AsyncExecutionResult are also on ExecutionResult is related to the use of flattenAsyncIterable in mapSourceToResponse, a subscription-specific function that I don't really understand. This seems like it ought to be avoidable?
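Roughly, the two shapes I have in mind look something like this (an illustrative sketch only; these names are not the actual exports in this PR):

```ts
// Illustrative sketch only: these interface names are not this PR's exports.
import type { GraphQLError } from 'graphql';

// "Non incremental delivery" result: data/errors/extensions at the top level.
interface SingleResult {
  data?: Record<string, unknown> | null;
  errors?: ReadonlyArray<GraphQLError>;
  extensions?: Record<string, unknown>;
}

// One deferred/streamed patch; per the spec proposal, each patch also carries
// a path (and optional label) identifying where it applies.
interface IncrementalPatch {
  data?: unknown;
  errors?: ReadonlyArray<GraphQLError>;
  extensions?: Record<string, unknown>;
  path: ReadonlyArray<string | number>;
  label?: string;
}

// "Incremental delivery" result: data/errors/extensions appear only nested
// inside `incremental`, plus a flag for whether more payloads will follow.
interface IncrementalResult {
  incremental: ReadonlyArray<IncrementalPatch>;
  hasNext: boolean;
}
```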
I tried to simplify the types as described above; #3703 is what I came up with. A lot of the complexity comes from the …
Having tried to implement consuming this in Apollo Server, I'm increasingly convinced that (assuming we don't want the first message to have an …
We've hit a few concurrency bugs already (#2975, #3710) with the "raw" async generator objects currently used within incremental delivery. In #3694, we have a PR to introduce @brainkim's Repeaters into graphql-js. In particular, #3694 just uses Repeaters to implement …
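As a rough illustration of the pattern (a sketch only, not the actual code in #3694), wrapping a raw async generator in a Repeater looks roughly like this:

```ts
// Sketch of the idea: wrap a "raw" async generator in a Repeater so the
// consumer-facing iterator has well-defined behavior when .return() is called
// early or when next() calls overlap.
import { Repeater } from '@repeaterjs/repeater';

function wrapInRepeater<T>(source: AsyncGenerator<T>): Repeater<T> {
  return new Repeater<T>(async (push, stop) => {
    // If the consumer finishes early (calls .return()), `stop` settles and we
    // make sure the underlying generator is cleaned up as well.
    stop.then(() => {
      source.return(undefined).catch(() => {
        // Ignore errors from a generator that has already finished.
      });
    });
    try {
      for await (const value of source) {
        // push() resolves once the value is consumed, giving us backpressure.
        await push(value);
      }
    } finally {
      stop();
    }
  });
}
```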
Hmmm, looks like I needed a longer think on all this. @glasser -- some helpful input from you (based on your comment within the …). Some links I've seen around about this: …
Note that @freiksenet points out that allowing concurrent … So it seems to me that concurrent returns aren't quite a must, but I really would like to get others' opinions on this!
One option is to just avoid async iterators entirely. @benjamn privately suggested to me the idea that each chunk just contains a …

Fortunately, this is purely a question about how to handle this in graphql-js, not something that should block the spec update overall from merging. I think sticking with the same API as subscriptions makes sense, and graphql-js could consider later creating a second new API (both could be supported) for following these streams if we want.
(BTW, my basic opinion about concurrent next/returns is that if we claim to implement AsyncIterator then we should do so correctly, but if we eventually decide a simpler API is good enough for our use case then that's fine too. For the specific case of defer/stream results, the ordering between chunks is quite important: the chunks are designed to be applied in order. So the ability to wait on multiple …)
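For illustration, a consumer that cares about that ordering can simply iterate sequentially; `applyPatchInPlace` here is a hypothetical helper, not part of graphql-js:

```ts
// Hypothetical consumer sketch: a plain for-await loop already receives chunks
// one at a time and in order, which is what defer/stream patches require, so a
// typical consumer never needs overlapping next() calls.
async function consumeInOrder(
  results: AsyncIterable<unknown>,
  applyPatchInPlace: (payload: unknown) => void,
): Promise<void> {
  for await (const payload of results) {
    // Later patches may target paths created by earlier ones, so each payload
    // is applied before the next one is requested.
    applyPatchInPlace(payload);
  }
}
```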
@robrichard is the new … ?

@michaelstaib yes

Ah, overlooked this one. Thanks for pointing it out.

@robrichard - I know this was a while ago, but is it correct that … ?

Update: https://github.com/graphql/graphql-js/blob/main/website/docs/tutorials/defer-stream.md
Continued from #2839
This is the reference implementation of the defer/stream spec proposal. The corresponding PR for the spec changes is here: graphql/graphql-spec#742
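For a rough sense of what consuming this looks like (illustrative only; exact type names and return shapes may differ from this branch):

```ts
// Illustrative sketch: `schema` and `rootValue` are assumed to come from the
// application; the result-handling branch below is what an incremental-delivery
// aware client would need.
import { graphql, GraphQLSchema } from 'graphql';

declare const schema: GraphQLSchema;
declare const rootValue: unknown;

const source = /* GraphQL */ `
  query {
    hero {
      id
      ...HeroDetails @defer(label: "details")
    }
  }
  fragment HeroDetails on Hero {
    name
    friends @stream(initialCount: 1) {
      name
    }
  }
`;

async function run(): Promise<void> {
  const result = await graphql({ schema, source, rootValue });
  if (Symbol.asyncIterator in result) {
    // Incremental delivery: an initial payload followed by ordered patches.
    for await (const payload of result as AsyncIterable<unknown>) {
      console.log(payload);
    }
  } else {
    // Ordinary single result (no @defer/@stream reached).
    console.log(result);
  }
}

run().catch(console.error);
```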
Please limit comments on this PR to code review of this implementation. Discussion on the defer/stream spec is located in a dedicated repo: https://github.com/robrichard/defer-stream-wg/discussions