Incremental Delivery Suggestions #154
split executor from graphql-js
slightly more cross-platform
will fix with deprecation of `getOperationRootType`
The codebase should refer to functions that execute the query, mutation, and/or subscription root fields as such, rather than as functions that execute the operations themselves. Executing a query or mutation returns a map of data and errors, while executing the root fields returns the data of the root fields.
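A tiny illustration of that distinction in terms of return types; these signatures are hypothetical, not the project's actual exports:

```ts
import type { ExecutionResult } from 'graphql';

// Executing the *operation* yields the full result map of data and errors:
declare function executeOperation(args: unknown): Promise<ExecutionResult>;

// Executing the *root fields* yields only the root-field data; any errors are
// collected separately (e.g. on the execution context):
declare function executeQueryOrMutationRootFields(
  args: unknown,
): Promise<Record<string, unknown> | null>;
```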
The `subscribe` method calls `createSourceEventStream` and `execute`, each of which normalizes the request properties separately. By normalizing the request within the `subscribe` function, we can pass the normalized arguments to the implementations of `createSourceEventStream` and `execute`, i.e. the new functions `createSourceEventStreamImpl` and `executeQueryOrMutation`.
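A rough sketch of the shape of that refactor; the `NormalizedArgs` interface and `normalizeArgs` helper are assumptions standing in for the executor's real execution context and argument handling:

```ts
import type { DocumentNode, ExecutionResult, GraphQLSchema } from 'graphql';

// Hypothetical shape of the normalized arguments; the real execution context
// carries more fields.
interface NormalizedArgs {
  schema: GraphQLSchema;
  document: DocumentNode;
  rootValue?: unknown;
  variableValues?: { [variable: string]: unknown };
  operationName?: string;
}

// Stand-ins for the refactored internals named in the comment above.
declare function normalizeArgs(args: unknown): NormalizedArgs;
declare function createSourceEventStreamImpl(
  args: NormalizedArgs,
): Promise<AsyncIterable<unknown>>;
declare function executeQueryOrMutation(
  args: NormalizedArgs,
): Promise<ExecutionResult>;

// `subscribe` normalizes once and passes the same normalized arguments to both
// internal implementations, instead of each of them re-normalizing.
async function subscribe(rawArgs: unknown): Promise<AsyncGenerator<ExecutionResult>> {
  const args = normalizeArgs(rawArgs);
  const eventStream = await createSourceEventStreamImpl(args);
  return (async function* () {
    for await (const event of eventStream) {
      // Each source event is executed like a query, with the event as rootValue.
      yield await executeQueryOrMutation({ ...args, rootValue: event });
    }
  })();
}
```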
The function is used only by `buildExecutionContext` and can be called there rather than exported
...into the same file!
* Move non-class Executor functions to end of file
* Move memoized collectSubFields adjacent to class functions
* Add buildPerPayloadExecutionContext function
* Add main exports to executor ...in preparation for wrapping as class
* Refactor Executor as class
* Export internal Executor class for experimentation

  This class, similar to the Parser class, is exported only to assist people in implementing their own executors without duplicating too much code and should be used only as a last resort, for cases such as experimental syntax or features that could not be contributed upstream. It is still part of the internal API (so far), so any changes to it are never considered breaking changes; the `graphql-executor` package must therefore be pinned to a specific patch version, e.g. `0.0.4`.

* Fix README
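A minimal sketch of how the exported class might be consumed, assuming it is re-exported from the package root; the subclass name and comments are illustrative only:

```ts
// Because Executor is internal API, pin graphql-executor to an exact patch
// version (e.g. "graphql-executor": "0.0.4" in package.json, no ^ or ~ range),
// since internal members may change in any release.
import { Executor } from 'graphql-executor';

// Extend the internal class to experiment without duplicating the whole executor.
class ExperimentalExecutor extends Executor {
  // ...override or add behavior here; every member is subject to change.
}
```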
to speed up CI, benchmarks irrelevant to execution can be retired
to reference graphql-executor instead of graphql-js
extracted from #154; relies on bundlers that can be used for batched streaming as well
When using `@stream`, the `data` property will be an array only when batching. When/if graphql-js changes to always send `data` as an array for streamed results, graphql-executor will do so as well. Until then, `data` will be an array only when necessary, i.e. when batch streaming.
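An illustrative comparison of the two shapes, using the `atIndex` property proposed later in this thread; the values are placeholders and other payload fields (path, label, hasNext, errors) are omitted:

```ts
// Without batching, `data` is the single streamed item:
const singleItemPayload = {
  data: { name: 'Luke', id: '1000' },
};

// With batch streaming (e.g. maxChunkSize: 2), `data` becomes an array and the
// proposed `atIndex` property marks where the chunk starts in the streamed list:
const batchedPayload = {
  data: [
    { name: 'Luke', id: '1000' },
    { name: 'Leia', id: '1003' },
  ],
  atIndex: 2,
};
```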
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* fix batched parallel stream error with defer where the initial responseNode was not added to the list

  Using identical helpers for all bundlers makes this code better tested all around.

* add changeset
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
the Publisher and Bundler abstractions are such core parts of the execution algorithm that they probably belong within the execution folder (following the pattern of the map/flatten asyncIterable helpers)
to distinguish a single null item versus a null item in a list of non-nullable items where the null bubbles up to the list itself
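A small illustration of the distinction, based on standard GraphQL null-propagation rules; the field and values are made up:

```ts
// friends: [Friend]  -- nullable items: a failed item can simply be null in place.
const nullableItemResult = {
  friends: [{ name: 'Leia' }, null, { name: 'Han' }],
};

// friends: [Friend!] -- non-nullable items: a null item cannot be represented,
// so the null bubbles up and the whole list itself becomes null.
const nonNullableItemResult = {
  friends: null,
};
```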
...rather than just AsyncIterables to facilitate chunking
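A sketch of what such a list-field resolver could look like: an async generator that yields chunks (iterables) of items rather than individual items. The type and helper names are invented for illustration:

```ts
interface Friend {
  name: string;
  id: string;
}

declare function loadRemainingFriends(): Promise<Friend[]>;

// Instead of yielding one Friend at a time (AsyncIterable<Friend>), the
// resolver yields ready-to-send chunks (AsyncIterable<Iterable<Friend>>),
// which lets the executor batch-stream each chunk as a single payload.
async function* friendsResolver(): AsyncGenerator<Friend[]> {
  // First chunk is available immediately and can be flushed as one payload.
  yield [
    { name: 'Luke', id: '1000' },
    { name: 'Leia', id: '1003' },
  ];
  // Second chunk arrives later, e.g. after a slow backend call, as one batch.
  yield await loadRemainingFriends();
}
```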
…ayloads rather than just a single payload
Defer with fragments
Operation:
query HeroNameQuery {
  hero {
    id
    ...TopFragment @defer(label: "DeferTop")
  }
}
fragment TopFragment on Hero {
  name
  ...NestedFragment @defer(label: "DeferNested")
}
fragment NestedFragment on Hero {
  friends {
    name
  }
}
Response:
Defer with slow field in initial payload
Operation:
query HeroNameQuery {
  hero {
    id
    ...NameFragment @defer
  }
}
fragment NameFragment on Hero {
  slowField
  friends {
    ...NestedFragment @defer
  }
}
fragment NestedFragment on Friend {
  name
}
Response: Results in two sets of payloads.
Stream
Operation:
{
  scalarList @stream
}
OR
{
  scalarList @stream(initialCount: 1)
}
Response:
Stream with chunks of greater than 1
Operation:
{
  scalarList @stream(maxChunkSize: 2)
}
Response: The first stream chunk hits its max.
Stream in correct order even when the first item is slow
Operation:
query {
  asyncSlowList @stream(initialCount: 0) {
    name
    id
  }
}
Response: Note that a slow item 1 will move all the items to the second set of payloads.
Stream in parallel
Operation:
query {
  asyncSlowList @stream(initialCount: 0, maxChunkSize: 1, inParallel: true) {
    name
    id
  }
}
Response: When streaming in parallel, the first set of payloads includes everything except the slow item 1.
Stream with asyncIterableList
Operation:
query {
  asyncIterableList @stream {
    name
    id
  }
}
OR
query {
  asyncIterableList @stream(maxChunkSize: 1) {
    name
    id
  }
}
OR
query {
  asyncIterableList @stream(maxChunkSize: 2, maxInterval: 1) {
    name
    id
  }
}
Response: With AsyncIterables, each item will have to be in its own set of payloads, unless chunked. (The default maxChunkSize is 1.) But, even if chunked, if the interval elapses, the chunk is sent before it maxes out. So, a maxChunkSize greater than 1 could still end up sending smaller payloads if the maxInterval is exceeded.
Stream with asyncIterableList with chunk size > 1
Operation:
query {
  asyncIterableList @stream(maxChunkSize: 2) {
    name
    id
  }
}
Response:
Stream where a non-nullable item returns null
Operation:
query {
  asyncIterableNonNullError @stream(initialCount: 1) {
    name
  }
}
Response:
Returning to simple fork.
Some of these were discussed during the Feb 2022 WG, some are new, possibly to be discussed in March :)
Execution:
* resolver functions for list fields should return AsyncIterables of Iterables, rather than just AsyncIterables, to allow chunking
* change the AsyncGenerator returned by `execute` to return an iterable of all available payloads instead of just the next payload (see the consumption sketch after this list)

@stream:
* change the stream payload to return an array for `data`, with a new `atIndex` property to specify the initial index for the array
* stream payloads currently act as error boundaries (as opposed to borking the entire stream/future payloads)
* add a `maxChunkSize` Int argument for streaming in chunks
* add a `maxInterval` Int argument to send chunks before `maxChunkSize` is reached if `maxInterval` ms have elapsed
* add an `inParallel` Boolean argument to allow sending items whenever ready, using `atIndices` instead of `atIndex` to specify the indices of what is being returned
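A rough sketch of how a server loop might consume the proposed `execute` return value, where each iteration yields every payload that is currently available rather than a single one; the payload type and helper names are assumptions:

```ts
import type { ExecutionResult } from 'graphql';

// Hypothetical shape of one incremental payload; real payloads carry more
// fields (path, label, errors, atIndex/atIndices, ...).
interface IncrementalPayload extends ExecutionResult {
  hasNext?: boolean;
}

declare function sendToClient(payloads: ReadonlyArray<IncrementalPayload>): Promise<void>;

// Each `for await` step receives *all* currently available payloads, so they
// can be flushed to the client in one write instead of one write per payload.
async function drain(result: AsyncGenerator<ReadonlyArray<IncrementalPayload>>) {
  for await (const payloads of result) {
    await sendToClient(payloads);
  }
}
```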
Related:
graphql/defer-stream-wg#32
graphql/defer-stream-wg#23
graphql/defer-stream-wg#17