# Execution
A GraphQL service generates a response from a request via execution.
:: A _request_ for execution consists of a few pieces of information:
- The schema to use, typically solely provided by the GraphQL service.
- A {Document} which must contain GraphQL {OperationDefinition} and may contain
{FragmentDefinition}.
- Optionally: The name of the Operation in the Document to execute.
- Optionally: Values for any Variables defined by the Operation.
- An initial value corresponding to the root type being executed. Conceptually,
an initial value represents the "universe" of data available via a GraphQL
Service. It is common for a GraphQL Service to always use the same initial
value for every request.
Given this information, the result of {ExecuteRequest()} produces the response,
to be formatted according to the Response section below.
Note: GraphQL requests do not require any specific serialization format or
transport mechanism. Message serialization and transport mechanisms should be
chosen by the implementing service.
## Executing Requests
To execute a request, the executor must have a parsed {Document} and a selected
operation name to run if the document defines multiple operations, otherwise the
document is expected to only contain a single operation. The result of the
request is determined by the result of executing this operation according to the
"Executing Operations” section below.
ExecuteRequest(schema, document, operationName, variableValues, initialValue):
Note: the execution assumes the implementing language supports coroutines.
Alternatively, the socket can provide a write buffer pointer to allow
{ExecuteRequest()} to directly write payloads into the buffer.
- Let {operation} be the result of {GetOperation(document, operationName)}.
- Let {coercedVariableValues} be the result of {CoerceVariableValues(schema,
operation, variableValues)}.
- If {operation} is a query operation:
- Return {ExecuteQuery(operation, schema, coercedVariableValues,
initialValue)}.
- Otherwise if {operation} is a mutation operation:
- Return {ExecuteMutation(operation, schema, coercedVariableValues,
initialValue)}.
- Otherwise if {operation} is a subscription operation:
- Return {Subscribe(operation, schema, coercedVariableValues, initialValue)}.
GetOperation(document, operationName):
- If {operationName} is {null}:
- If {document} contains exactly one operation:
- Return the Operation contained in the {document}.
- Otherwise raise a _request error_ requiring {operationName}.
- Otherwise:
- Let {operation} be the Operation named {operationName} in {document}.
- If {operation} was not found, raise a _request error_.
- Return {operation}.
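Note: the following non-normative TypeScript sketch illustrates the dispatch performed by {ExecuteRequest()} and {GetOperation()}. The type shapes and helper declarations are illustrative assumptions, not part of this specification.
```typescript
// Simplified stand-in types; a real implementation would use its own AST and schema objects.
type OperationType = "query" | "mutation" | "subscription";
interface OperationDefinition {
  name?: string;
  operation: OperationType;
}
interface ExecutableDocument {
  operations: OperationDefinition[];
}

// GetOperation: pick the named operation, or the only one when no name is given.
function getOperation(
  document: ExecutableDocument,
  operationName: string | null
): OperationDefinition {
  if (operationName == null) {
    if (document.operations.length === 1) return document.operations[0];
    // request error
    throw new Error("operationName is required when a document defines multiple operations.");
  }
  const operation = document.operations.find((op) => op.name === operationName);
  if (!operation) throw new Error(`Unknown operation "${operationName}".`); // request error
  return operation;
}

// Helpers assumed to follow the algorithms defined later in this section.
declare function coerceVariableValues(schema: unknown, operation: OperationDefinition, variableValues: Record<string, unknown>): Record<string, unknown>;
declare function executeQuery(operation: OperationDefinition, schema: unknown, variableValues: Record<string, unknown>, initialValue: unknown): unknown;
declare function executeMutation(operation: OperationDefinition, schema: unknown, variableValues: Record<string, unknown>, initialValue: unknown): unknown;
declare function subscribe(operation: OperationDefinition, schema: unknown, variableValues: Record<string, unknown>, initialValue: unknown): unknown;

// ExecuteRequest: coerce variables, then dispatch on the operation type.
function executeRequest(
  schema: unknown,
  document: ExecutableDocument,
  operationName: string | null,
  variableValues: Record<string, unknown>,
  initialValue: unknown
): unknown {
  const operation = getOperation(document, operationName);
  const coercedVariableValues = coerceVariableValues(schema, operation, variableValues);
  switch (operation.operation) {
    case "query":
      return executeQuery(operation, schema, coercedVariableValues, initialValue);
    case "mutation":
      return executeMutation(operation, schema, coercedVariableValues, initialValue);
    case "subscription":
      return subscribe(operation, schema, coercedVariableValues, initialValue);
  }
}
```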
### Validating Requests
As explained in the Validation section, only requests which pass all validation
rules should be executed. If validation errors are known, they should be
reported in the list of "errors" in the response and the request must fail
without execution.
Typically validation is performed in the context of a request immediately before
execution, however a GraphQL service may execute a request without immediately
validating it if that exact same request is known to have been validated before.
A GraphQL service should only execute requests which _at some point_ were known
to be free of any validation errors, and have since not changed.
For example: the request may be validated during development, provided it does
not later change, or a service may validate a request once and memoize the
result to avoid validating the same request again in the future.
### Coercing Variable Values
If the operation has defined any variables, then the values for those variables
need to be coerced using the input coercion rules of the variable's declared
type.
If a _request error_ is encountered during input coercion of variable values,
then the operation fails without execution.
CoerceVariableValues(schema, operation, variableValues):
- Let {coercedValues} be an empty unordered Map.
- Let {variablesDefinition} be the variables defined by {operation}.
- For each {variableDefinition} in {variablesDefinition}:
- Let {variableName} be the name of {variableDefinition}.
- Let {variableType} be the expected type of {variableDefinition}.
- Assert: {IsInputType(variableType)} must be {true}.
- Let {defaultValue} be the default value for {variableDefinition}.
- Let {hasValue} be {true} if {variableValues} provides a value for the name
{variableName}.
- Let {value} be the value provided in {variableValues} for the name
{variableName}.
- If {hasValue} is not {true} and {defaultValue} exists (including {null}):
- Add an entry to {coercedValues} named {variableName} with the value
{defaultValue}.
- Otherwise if {variableType} is a Non-Nullable type, and either {hasValue} is
not {true} or {value} is {null}, raise a _request error_.
- Otherwise if {hasValue} is true:
- If {value} is {null}:
- Add an entry to {coercedValues} named {variableName} with the value
{null}.
- Otherwise:
- If {value} cannot be coerced according to the input coercion rules of
{variableType}, raise a _request error_.
- Let {coercedValue} be the result of coercing {value} according to the
input coercion rules of {variableType}.
- Add an entry to {coercedValues} named {variableName} with the value
{coercedValue}.
- Return {coercedValues}.
Note: This algorithm is very similar to {CoerceArgumentValues()}.
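Note: a non-normative TypeScript sketch of this decision table follows. The variable-definition shape and the `coerceInputValue` helper are illustrative assumptions, not part of this specification.
```typescript
interface VariableDefinition {
  name: string;
  type: { nonNull: boolean };   // simplified representation of the declared type
  hasDefault: boolean;
  defaultValue?: unknown;       // an explicit default may itself be null
}

// Assumed type-specific input coercion; throws on values that cannot be coerced.
declare function coerceInputValue(value: unknown, type: { nonNull: boolean }): unknown;

function coerceVariableValues(
  definitions: VariableDefinition[],
  variableValues: Record<string, unknown>
): Record<string, unknown> {
  const coercedValues: Record<string, unknown> = {};
  for (const def of definitions) {
    const hasValue = Object.prototype.hasOwnProperty.call(variableValues, def.name);
    const value = variableValues[def.name];
    if (!hasValue && def.hasDefault) {
      coercedValues[def.name] = def.defaultValue;   // defaults apply, including explicit null
    } else if (def.type.nonNull && (!hasValue || value === null)) {
      // request error
      throw new Error(`Variable "$${def.name}" of non-null type must be provided.`);
    } else if (hasValue) {
      coercedValues[def.name] = value === null ? null : coerceInputValue(value, def.type);
    }
    // No value and no default: the variable is simply absent from the coerced map.
  }
  return coercedValues;
}
```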
## Executing Operations
The type system, as described in the "Type System" section of the spec, must
provide a query root operation type. If mutations or subscriptions are
supported, it must also provide a mutation or subscription root operation type,
respectively.
### Query
If the operation is a query, the result of the operation is the result of
executing the operation’s top level selection set with the query root operation
type.
An initial value may be provided when executing a query operation.
ExecuteQuery(query, schema, variableValues, initialValue):
- Let {subsequentPayloads} be an empty list.
- Let {queryType} be the root Query type in {schema}.
- Assert: {queryType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {query}.
- Let {data} be the result of running {ExecuteSelectionSet(selectionSet,
queryType, initialValue, variableValues, subsequentPayloads)} _normally_
(allowing parallelization).
- Let {errors} be the list of all _field error_ raised while executing the
selection set.
- If {subsequentPayloads} is empty:
- Return an unordered map containing {data} and {errors}.
- If {subsequentPayloads} is not empty:
- Let {initialResponse} be an unordered map containing {data}, {errors}, and
an entry named {hasNext} with the value {true}.
- Let {iterator} be the result of running
{YieldSubsequentPayloads(initialResponse, subsequentPayloads)}.
- For each {payload} yielded by {iterator}:
- If a termination signal is received:
- Send a termination signal to {iterator}.
- Return.
- Otherwise:
- Yield {payload}.
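Note: in code form, the branch on {subsequentPayloads} amounts to producing either a single response or a stream of payloads. The following non-normative TypeScript sketch models both cases as an async generator for simplicity; the helper declarations are assumptions mirroring the algorithms in this section.
```typescript
interface ExecutionResult {
  data: unknown;
  errors: unknown[];
  hasNext?: boolean;
}

// Assumed helpers following the algorithms defined elsewhere in this section.
declare function executeSelectionSet(
  selectionSet: unknown,
  objectType: unknown,
  objectValue: unknown,
  variableValues: Record<string, unknown>,
  subsequentPayloads: unknown[]
): { data: unknown; errors: unknown[] };
declare function yieldSubsequentPayloads(
  initialResponse: ExecutionResult,
  subsequentPayloads: unknown[]
): AsyncGenerator<ExecutionResult>;

// With no @defer/@stream payloads this yields exactly one response; otherwise it
// yields the initial response followed by subsequent payloads.
async function* executeQuery(
  selectionSet: unknown,
  queryType: unknown,
  initialValue: unknown,
  variableValues: Record<string, unknown>
): AsyncGenerator<ExecutionResult> {
  const subsequentPayloads: unknown[] = [];
  const { data, errors } = executeSelectionSet(
    selectionSet,
    queryType,
    initialValue,
    variableValues,
    subsequentPayloads
  );
  if (subsequentPayloads.length === 0) {
    yield { data, errors };
    return;
  }
  yield* yieldSubsequentPayloads({ data, errors, hasNext: true }, subsequentPayloads);
}
```
The same structure applies to {ExecuteMutation()} below, except that the selection set is executed serially against the mutation root type.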
### Mutation
If the operation is a mutation, the result of the operation is the result of
executing the operation’s top level selection set on the mutation root object
type. This selection set should be executed serially.
It is expected that the top level fields in a mutation operation perform
side-effects on the underlying data system. Serial execution of the provided
mutations ensures against race conditions during these side-effects.
ExecuteMutation(mutation, schema, variableValues, initialValue):
- Let {subsequentPayloads} be an empty list.
- Let {mutationType} be the root Mutation type in {schema}.
- Assert: {mutationType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {mutation}.
- Let {data} be the result of running {ExecuteSelectionSet(selectionSet,
mutationType, initialValue, variableValues, subsequentPayloads)} _serially_.
- Let {errors} be the list of all _field error_ raised while executing the
selection set.
- If {subsequentPayloads} is empty:
- Return an unordered map containing {data} and {errors}.
- If {subsequentPayloads} is not empty:
- Let {initialResponse} be an unordered map containing {data}, {errors}, and
an entry named {hasNext} with the value {true}.
- Let {iterator} be the result of running
{YieldSubsequentPayloads(initialResponse, subsequentPayloads)}.
- For each {payload} yielded by {iterator}:
- If a termination signal is received:
- Send a termination signal to {iterator}.
- Return.
- Otherwise:
- Yield {payload}.
### Subscription
If the operation is a subscription, the result is an event stream called the
"Response Stream" where each event in the event stream is the result of
executing the operation for each new event on an underlying "Source Stream".
Executing a subscription operation creates a persistent function on the service
that maps an underlying Source Stream to a returned Response Stream.
Subscribe(subscription, schema, variableValues, initialValue):
- Let {sourceStream} be the result of running
{CreateSourceEventStream(subscription, schema, variableValues, initialValue)}.
- Let {responseStream} be the result of running
{MapSourceToResponseEvent(sourceStream, subscription, schema, variableValues)}.
- Return {responseStream}.
Note: In a large-scale subscription system, the {Subscribe()} and
{ExecuteSubscriptionEvent()} algorithms may be run on separate services to
maintain predictable scaling properties. See the section below on Supporting
Subscriptions at Scale.
As an example, consider a chat application. To subscribe to new messages posted
to the chat room, the client sends a request like so:
```graphql example
subscription NewMessages {
newMessage(roomId: 123) {
sender
text
}
}
```
While the client is subscribed, whenever new messages are posted to the chat
room with ID "123", the selection for "sender" and "text" will be evaluated and
published to the client, for example:
```json example
{
"data": {
"newMessage": {
"sender": "Hagrid",
"text": "You're a wizard!"
}
}
}
```
The "new message posted to chat room" could use a "Pub-Sub" system where the
chat room ID is the "topic" and each "publish" contains the sender and text.
**Event Streams**
An event stream represents a sequence of discrete events over time which can be
observed. As an example, a "Pub-Sub" system may produce an event stream when
"subscribing to a topic", with an event occurring on that event stream for each
"publish" to that topic. Event streams may produce an infinite sequence of
events or may complete at any point. Event streams may complete in response to
an error or simply because no more events will occur. An observer may at any
point decide to stop observing an event stream by cancelling it, after which it
must receive no more events from that event stream.
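Note: an event stream can be represented in many ways. One non-normative TypeScript sketch models it as an async iterable with an explicit cancel handle; the `pubsub` client used here is purely hypothetical.
```typescript
// One possible shape: an async iterable of events plus a way to stop observing.
interface EventStream<T> extends AsyncIterable<T> {
  cancel(): void;
}

// Wrap a hypothetical pub-sub client as an event stream for a single topic.
function topicEventStream<T>(
  pubsub: { subscribe(topic: string, onEvent: (event: T) => void): () => void },
  topic: string
): EventStream<T> {
  const queue: T[] = [];
  let notify: (() => void) | null = null;
  let done = false;
  const unsubscribe = pubsub.subscribe(topic, (event) => {
    queue.push(event);
    notify?.();           // wake the consumer if it is waiting
  });
  return {
    cancel() {
      done = true;        // after cancellation, no further events are delivered
      unsubscribe();
      notify?.();
    },
    async *[Symbol.asyncIterator]() {
      while (!done) {
        if (queue.length > 0) {
          yield queue.shift()!;
        } else {
          await new Promise<void>((resolve) => (notify = resolve));
          notify = null;
        }
      }
    },
  };
}
```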
**Supporting Subscriptions at Scale**
Supporting subscriptions is a significant change for any GraphQL service. Query
and mutation operations are stateless, allowing scaling via cloning of GraphQL
service instances. Subscriptions, by contrast, are stateful and require
maintaining the GraphQL document, variables, and other context over the lifetime
of the subscription.
Consider the behavior of your system when state is lost due to the failure of a
single machine in a service. Durability and availability may be improved by
having separate dedicated services for managing subscription state and client
connectivity.
**Delivery Agnostic**
GraphQL subscriptions do not require any specific serialization format or
transport mechanism. GraphQL specifies algorithms for the creation of the
response stream, the content of each payload on that stream, and the closing of
that stream. There are intentionally no specifications for message
acknowledgement, buffering, resend requests, or any other quality of service
(QoS) details. Message serialization, transport mechanisms, and quality of
service details should be chosen by the implementing service.
#### Source Stream
A Source Stream represents the sequence of events, each of which will trigger a
GraphQL execution corresponding to that event. Like field value resolution, the
logic to create a Source Stream is application-specific.
CreateSourceEventStream(subscription, schema, variableValues, initialValue):
- Let {subscriptionType} be the root Subscription type in {schema}.
- Assert: {subscriptionType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {subscription}.
- Let {groupedFieldSet} be the result of {CollectFields(subscriptionType,
selectionSet, variableValues)}.
- If {groupedFieldSet} does not have exactly one entry, raise a _request error_.
- Let {fields} be the value of the first entry in {groupedFieldSet}.
- Let {fieldName} be the name of the first entry in {fields}. Note: This value
is unaffected if an alias is used.
- Let {field} be the first entry in {fields}.
- Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType,
field, variableValues)}
- Let {fieldStream} be the result of running
{ResolveFieldEventStream(subscriptionType, initialValue, fieldName,
argumentValues)}.
- Return {fieldStream}.
ResolveFieldEventStream(subscriptionType, rootValue, fieldName, argumentValues):
- Let {resolver} be the internal function provided by {subscriptionType} for
determining the resolved event stream of a subscription field named
{fieldName}.
- Return the result of calling {resolver}, providing {rootValue} and
{argumentValues}.
Note: This {ResolveFieldEventStream()} algorithm is intentionally similar to
{ResolveFieldValue()} to enable consistency when defining resolvers on any
operation type.
#### Response Stream
Each event in the underlying Source Stream triggers execution of the
subscription selection set using that event as a root value.
MapSourceToResponseEvent(sourceStream, subscription, schema, variableValues):
- Return a new event stream {responseStream} which yields events as follows:
- For each {event} on {sourceStream}:
- Let {executionResult} be the result of running
{ExecuteSubscriptionEvent(subscription, schema, variableValues, event)}.
- For each {response} yielded by {executionResult}:
- Yield an event containing {response}.
- When {responseStream} completes: complete this event stream.
ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue):
- Let {subscriptionType} be the root Subscription type in {schema}.
- Assert: {subscriptionType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {subscription}.
- Let {data} be the result of running {ExecuteSelectionSet(selectionSet,
subscriptionType, initialValue, variableValues)} _normally_ (allowing
parallelization).
- Let {errors} be the list of all _field error_ raised while executing the
selection set.
- Return an unordered map containing {data} and {errors}.
Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to
{ExecuteQuery()} since this is how each event result is produced.
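Note: assuming both streams are async iterables, the mapping above is roughly a per-event re-execution. The following non-normative TypeScript sketch simplifies {ExecuteSubscriptionEvent()} to produce a single response per event.
```typescript
// Assumed to follow the ExecuteSubscriptionEvent() algorithm above.
declare function executeSubscriptionEvent(
  subscription: unknown,
  schema: unknown,
  variableValues: Record<string, unknown>,
  event: unknown
): Promise<{ data: unknown; errors: unknown[] }>;

// Each source event triggers one execution of the subscription selection set;
// the response stream completes when the source stream completes.
async function* mapSourceToResponseEvent(
  sourceStream: AsyncIterable<unknown>,
  subscription: unknown,
  schema: unknown,
  variableValues: Record<string, unknown>
): AsyncGenerator<{ data: unknown; errors: unknown[] }> {
  for await (const event of sourceStream) {
    yield await executeSubscriptionEvent(subscription, schema, variableValues, event);
  }
}
```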
#### Unsubscribe
Unsubscribe cancels the Response Stream when a client no longer wishes to
receive payloads for a subscription. This may in turn also cancel the Source
Stream. This is also a good opportunity to clean up any other resources used by
the subscription.
Unsubscribe(responseStream):
- Cancel {responseStream}
## Yield Subsequent Payloads
If an operation contains subsequent payload records resulting from `@stream` or
`@defer` directives, the {YieldSubsequentPayloads} algorithm defines how the
payloads should be processed.
YieldSubsequentPayloads(initialResponse, subsequentPayloads):
- Let {initialRecords} be any items in {subsequentPayloads} with a completed
{dataExecution}.
- Initialize {initialIncremental} to an empty list.
- For each {record} in {initialRecords}:
- Remove {record} from {subsequentPayloads}.
- If {isCompletedIterator} on {record} is {true}:
- Continue to the next record in {initialRecords}.
- Let {payload} be the completed result returned by {dataExecution}.
- Append {payload} to {initialIncremental}.
- If {initialIncremental} is not empty:
- Add an entry to {initialResponse} named `incremental` containing the value
{initialIncremental}.
- Yield {initialResponse}.
- While {subsequentPayloads} is not empty:
- If a termination signal is received:
- For each {record} in {subsequentPayloads}:
- If {record} contains {iterator}:
- Send a termination signal to {iterator}.
- Return.
- Wait for at least one record in {subsequentPayloads} to have a completed
{dataExecution}.
- Let {subsequentResponse} be an unordered map with an entry {incremental}
initialized to an empty list.
- Let {records} be the items in {subsequentPayloads} with a completed
{dataExecution}.
- For each {record} in {records}:
- Remove {record} from {subsequentPayloads}.
- If {isCompletedIterator} on {record} is {true}:
- Continue to the next record in {records}.
- Let {payload} be the completed result returned by {dataExecution}.
- Append {payload} to the {incremental} entry on {subsequentResponse}.
- If {subsequentPayloads} is empty:
- Add an entry to {subsequentResponse} named `hasNext` with the value
{false}.
- Otherwise, if {subsequentPayloads} is not empty:
- Add an entry to {subsequentResponse} named `hasNext` with the value
{true}.
- Yield {subsequentResponse}.
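Note: a non-normative TypeScript sketch of this loop follows. The record shape is an assumption: each record exposes a `dataExecution` promise, a `completed` flag set by the record's creator when that promise settles, and the `isCompletedIterator` marker used above; handling of the termination signal is omitted.
```typescript
interface AsyncPayloadRecord {
  dataExecution: Promise<unknown>;  // settles with the payload once execution completes
  completed: boolean;               // assumed to be set true by the record's creator on settlement
  isCompletedIterator?: boolean;    // true when a @stream iterator finished without a payload
  iterator?: { return?: () => void };
}

async function* yieldSubsequentPayloads(
  initialResponse: Record<string, unknown>,
  subsequentPayloads: AsyncPayloadRecord[]
): AsyncGenerator<Record<string, unknown>> {
  // Collect payloads from every record whose execution has already completed.
  const drainCompleted = async (): Promise<unknown[]> => {
    const ready = subsequentPayloads.filter((record) => record.completed);
    const incremental: unknown[] = [];
    for (const record of ready) {
      subsequentPayloads.splice(subsequentPayloads.indexOf(record), 1);
      if (record.isCompletedIterator) continue;   // nothing to deliver for a finished iterator
      incremental.push(await record.dataExecution);
    }
    return incremental;
  };

  const initialIncremental = await drainCompleted();
  if (initialIncremental.length > 0) {
    initialResponse.incremental = initialIncremental;
  }
  yield initialResponse;

  while (subsequentPayloads.length > 0) {
    // A real implementation would also watch for a client termination signal here
    // and close any open iterators before returning.
    await Promise.race(subsequentPayloads.map((record) => record.dataExecution));
    const incremental = await drainCompleted();
    yield { incremental, hasNext: subsequentPayloads.length > 0 };
  }
}
```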
## Executing Selection Sets
To execute a selection set, the object value being evaluated and the object type
need to be known, as well as whether it must be executed serially, or may be
executed in parallel.
First, the selection set is turned into a grouped field set; then, each
represented field in the grouped field set produces an entry into a response
map.
ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues, path,
subsequentPayloads, asyncRecord):
- If {path} is not provided, initialize it to an empty list.
- If {subsequentPayloads} is not provided, initialize it to the empty set.
- Let {groupedFieldSet} and {deferredGroupedFieldsList} be the result of
{CollectFields(objectType, selectionSet, variableValues, path, asyncRecord)}.
- Initialize {resultMap} to an empty ordered map.
- For each {groupedFieldSet} as {responseKey} and {fields}:
- Let {fieldName} be the name of the first entry in {fields}. Note: This value
is unaffected if an alias is used.
- Let {fieldType} be the return type defined for the field {fieldName} of
{objectType}.
- If {fieldType} is defined:
- Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
fields, variableValues, path, subsequentPayloads, asyncRecord)}.
- Set {responseValue} as the value for {responseKey} in {resultMap}.
- For each {deferredGroupFieldSet} and {label} in {deferredGroupedFieldsList}:
- Call {ExecuteDeferredFragment(label, objectType, objectValue,
deferredGroupFieldSet, path, variableValues, asyncRecord,
subsequentPayloads)}.
- Return {resultMap}.
Note: {resultMap} is ordered by which fields appear first in the operation. This
is explained in greater detail in the Field Collection section below.
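Note: a non-normative TypeScript sketch of {ExecuteSelectionSet()} follows; the helper declarations are illustrative assumptions mirroring the algorithms in this section.
```typescript
interface FieldNode {
  name: string;
  alias?: string;
}
interface DeferredGroupedFields {
  label?: string;
  fields: Map<string, FieldNode[]>;
}

// Assumed helpers following the algorithms defined elsewhere in this section.
declare function collectFields(
  objectType: unknown,
  selectionSet: unknown,
  variableValues: Record<string, unknown>,
  path: Array<string | number>,
  asyncRecord: unknown
): { groupedFieldSet: Map<string, FieldNode[]>; deferredGroupedFieldsList: DeferredGroupedFields[] };
declare function fieldTypeFor(objectType: unknown, fieldName: string): unknown;
declare function executeField(objectType: unknown, objectValue: unknown, fieldType: unknown, fields: FieldNode[], variableValues: Record<string, unknown>, path: Array<string | number>, subsequentPayloads: unknown[], asyncRecord: unknown): unknown;
declare function executeDeferredFragment(label: string | undefined, objectType: unknown, objectValue: unknown, fields: Map<string, FieldNode[]>, path: Array<string | number>, variableValues: Record<string, unknown>, asyncRecord: unknown, subsequentPayloads: unknown[]): void;

function executeSelectionSet(
  selectionSet: unknown,
  objectType: unknown,
  objectValue: unknown,
  variableValues: Record<string, unknown>,
  path: Array<string | number> = [],
  subsequentPayloads: unknown[] = [],
  asyncRecord?: unknown
): Map<string, unknown> {
  const { groupedFieldSet, deferredGroupedFieldsList } = collectFields(
    objectType, selectionSet, variableValues, path, asyncRecord
  );

  // Entries are inserted in field-collection order, so the response preserves
  // the order in which fields first appear in the operation.
  const resultMap = new Map<string, unknown>();
  for (const [responseKey, fields] of groupedFieldSet) {
    const fieldType = fieldTypeFor(objectType, fields[0].name);
    if (fieldType === undefined) continue;   // fields not defined on the type produce no entry
    resultMap.set(
      responseKey,
      executeField(objectType, objectValue, fieldType, fields, variableValues, path, subsequentPayloads, asyncRecord)
    );
  }

  // Deferred fragments are scheduled for later delivery rather than awaited here.
  for (const { label, fields } of deferredGroupedFieldsList) {
    executeDeferredFragment(label, objectType, objectValue, fields, path, variableValues, asyncRecord, subsequentPayloads);
  }
  return resultMap;
}
```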
**Errors and Non-Null Fields**
If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a
_field error_ then that error must propagate to this entire selection set,
either resolving to {null} if allowed or further propagated to a parent field.
If this occurs, any sibling fields which have not yet executed or have not yet
yielded a value may be cancelled to avoid unnecessary work.
Additionally, async payload records in {subsequentPayloads} must be filtered if
their path points to a location that has resolved to {null} due to propagation
of a field error. This is described in
[Filter Subsequent Payloads](#sec-Filter-Subsequent-Payloads). These async
payload records must be removed from {subsequentPayloads} and their result must
not be sent to clients. If these async records have not yet executed or have not
yet yielded a value they may also be cancelled to avoid unnecessary work.
Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about
this behavior.
### Filter Subsequent Payloads
When a field error is raised, there may be async payload records in
{subsequentPayloads} with a path that points to a location that has been removed
or set to null due to null propagation. These async payload records must be
removed from subsequent payloads and their results must not be sent to clients.
In {FilterSubsequentPayloads}, {nullPath} is the path which has resolved to null
after propagation as a result of a field error. {currentAsyncRecord} is the
async payload record where the field error was raised. {currentAsyncRecord} will
not be set for field errors that were raised during the initial execution
outside of {ExecuteDeferredFragment} or {ExecuteStreamField}.
FilterSubsequentPayloads(subsequentPayloads, nullPath, currentAsyncRecord):
- For each {asyncRecord} in {subsequentPayloads}:
- If {asyncRecord} is the same record as {currentAsyncRecord}:
- Continue to the next record in {subsequentPayloads}.
- Initialize {index} to zero.
- While {index} is less than the length of {nullPath}:
- Initialize {nullPathItem} to the element at {index} in {nullPath}.
- Initialize {asyncRecordPathItem} to the element at {index} in the {path}
of {asyncRecord}.
- If {nullPathItem} is not equivalent to {asyncRecordPathItem}:
- Continue to the next record in {subsequentPayloads}.
- Increment {index} by one.
- Remove {asyncRecord} from {subsequentPayloads}. Optionally, cancel any
incomplete work in the execution of {asyncRecord}.
For example, assume the field `alwaysThrows` is a `Non-Null` type that always
raises a field error:
```graphql example
{
myObject {
... @defer {
name
}
alwaysThrows
}
}
```
In this case, only one response should be sent. The async payload record
associated with the `@defer` directive should be removed and its execution may
be cancelled.
```json example
{
"data": { "myObject": null },
"hasNext": false
}
```
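Note: the {FilterSubsequentPayloads()} check is essentially a path-prefix comparison. A non-normative TypeScript sketch follows; the record shape and the optional `cancel` hook are illustrative assumptions.
```typescript
interface AsyncPayloadRecord {
  path: Array<string | number>;   // field names and list indices from the root
  cancel?: () => void;            // optional hook for abandoning incomplete work
}

// Remove every pending payload located at or below the position that resolved
// to null, except the record in which the field error was raised.
function filterSubsequentPayloads(
  subsequentPayloads: Set<AsyncPayloadRecord>,
  nullPath: Array<string | number>,
  currentAsyncRecord?: AsyncPayloadRecord
): void {
  for (const record of subsequentPayloads) {
    if (record === currentAsyncRecord) continue;
    const isUnderNullPath = nullPath.every((item, index) => record.path[index] === item);
    if (isUnderNullPath) {
      subsequentPayloads.delete(record);
      record.cancel?.();          // optionally cancel incomplete execution
    }
  }
}
```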
### Normal and Serial Execution
Normally the executor can execute the entries in a grouped field set in whatever
order it chooses (normally in parallel). Because the resolution of fields other
than top-level mutation fields must always be side effect-free and idempotent,
the execution order must not affect the result, and hence the service has the
freedom to execute the field entries in whatever order it deems optimal.
For example, given the following grouped field set to be executed normally:
```graphql example
{
birthday {
month
}
address {
street
}
}
```
A valid GraphQL executor can resolve the four fields in whatever order it
chooses
(however of course `birthday` must be resolved before `month`, and `address`
before `street`).
When executing a mutation, the selections in the top most selection set will be
executed in serial order, starting with the first appearing field textually.
When executing a grouped field set serially, the executor must consider each
entry from the grouped field set in the order provided in the grouped field set.
It must determine the corresponding entry in the result map for each item to
completion before it continues on to the next item in the grouped field set:
For example, given the following selection set to be executed serially:
```graphql example
{
changeBirthday(birthday: $newBirthday) {
month
}
changeAddress(address: $newAddress) {
street
}
}
```
The executor must, in serial:
- Run {ExecuteField()} for `changeBirthday`, which during {CompleteValue()} will
execute the `{ month }` sub-selection set normally.
- Run {ExecuteField()} for `changeAddress`, which during {CompleteValue()} will
execute the `{ street }` sub-selection set normally.
As an illustrative example, let's assume we have a mutation field
`changeTheNumber` that returns an object containing one field, `theNumber`. If
we execute the following selection set serially:
```graphql example
{
first: changeTheNumber(newNumber: 1) {
theNumber
}
second: changeTheNumber(newNumber: 3) {
theNumber
}
third: changeTheNumber(newNumber: 2) {
theNumber
}
}
```
The executor will execute the following serially:
- Resolve the `changeTheNumber(newNumber: 1)` field
- Execute the `{ theNumber }` sub-selection set of `first` normally
- Resolve the `changeTheNumber(newNumber: 3)` field
- Execute the `{ theNumber }` sub-selection set of `second` normally
- Resolve the `changeTheNumber(newNumber: 2)` field
- Execute the `{ theNumber }` sub-selection set of `third` normally
A correct executor must generate the following result for that selection set:
```json example
{
"first": {
"theNumber": 1
},
"second": {
"theNumber": 3
},
"third": {
"theNumber": 2
}
}
```
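Note: normal and serial execution correspond naturally to resolving field entries concurrently versus one at a time. The following non-normative TypeScript sketch illustrates the difference, with `executeField` standing in for full field execution.
```typescript
// Stand-in for full field execution, assumed to be asynchronous.
declare function executeField(responseKey: string): Promise<unknown>;

// Normal (query) execution: field entries may be resolved concurrently.
async function executeGroupedFieldSetNormally(
  responseKeys: string[]
): Promise<Map<string, unknown>> {
  const entries = await Promise.all(
    responseKeys.map(async (key) => [key, await executeField(key)] as const)
  );
  return new Map(entries);
}

// Serial (mutation) execution: each entry runs to completion before the next begins,
// so side effects occur in the order the fields appear in the operation.
async function executeGroupedFieldSetSerially(
  responseKeys: string[]
): Promise<Map<string, unknown>> {
  const resultMap = new Map<string, unknown>();
  for (const key of responseKeys) {
    resultMap.set(key, await executeField(key));
  }
  return resultMap;
}
```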
When subsections contain a `@stream` or `@defer` directive, these subsections
are no longer required to execute serially. Execution of the deferred or
streamed sections of the subsection may be executed in parallel, as defined in
{ExecuteStreamField} and {ExecuteDeferredFragment}.
### Field Collection
Before execution, the selection set is converted to a grouped field set by
calling {CollectFields()}. Each entry in the grouped field set is a list of
fields that share a response key (the alias if defined, otherwise the field
name). This ensures all fields with the same response key (including those in
referenced fragments) are executed at the same time. A deferred selection set's
fields will not be included in the grouped field set. Rather, a record
representing the deferred fragment and additional context will be stored in a
list. The executor revisits and resumes execution for the list of deferred
fragment records after the initial execution is initiated. This deferred
execution would "re-execute" fields with the same response key that were present
in the grouped field set.
As an example, collecting the fields of this selection set would collect two
instances of the field `a` and one of field `b`:
```graphql example
{
a {
subfield1
}
...ExampleFragment
}
fragment ExampleFragment on Query {
a {
subfield2
}
b
}
```
The depth-first-search order of the field groups produced by {CollectFields()}
is maintained through execution, ensuring that fields appear in the executed
response in a stable and predictable order.
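Note: the grouping step can be pictured as building an ordered map keyed by response key. The following non-normative TypeScript sketch shows only that grouping over an already-flattened list of fields; fragment traversal, directive handling, and deferred fragments are omitted.
```typescript
// Simplified field node: only the pieces needed to compute a response key.
interface FieldNode {
  name: string;
  alias?: string;
}

const responseKeyOf = (field: FieldNode): string => field.alias ?? field.name;

// Group fields sharing a response key, preserving first-appearance order. For the
// example above this yields: Map { "a" => [a(subfield1), a(subfield2)], "b" => [b] }.
function groupByResponseKey(fields: FieldNode[]): Map<string, FieldNode[]> {
  const groupedFields = new Map<string, FieldNode[]>();
  for (const field of fields) {
    const key = responseKeyOf(field);
    const group = groupedFields.get(key) ?? [];
    group.push(field);
    groupedFields.set(key, group);
  }
  return groupedFields;
}
```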
CollectFields(objectType, selectionSet, variableValues, path, asyncRecord,
visitedFragments, deferredGroupedFieldsList):
- If {visitedFragments} is not provided, initialize it to the empty set.
- Initialize {groupedFields} to an empty ordered map of lists.
- If {deferredGroupedFieldsList} is not provided, initialize it to an empty
list.
- For each {selection} in {selectionSet}:
- If {selection} provides the directive `@skip`, let {skipDirective} be that
directive.
- If {skipDirective}'s {if} argument is {true} or is a variable in
{variableValues} with the value {true}, continue with the next {selection}
in {selectionSet}.
- If {selection} provides the directive `@include`, let {includeDirective} be
that directive.
- If {includeDirective}'s {if} argument is not {true} and is not a variable
in {variableValues} with the value {true}, continue with the next
{selection} in {selectionSet}.
- If {selection} is a {Field}:
- Let {responseKey} be the response key of {selection} (the alias if
defined, otherwise the field name).
- Let {groupForResponseKey} be the list in {groupedFields} for
{responseKey}; if no such list exists, create it as an empty list.
- Append {selection} to the {groupForResponseKey}.
- If {selection} is a {FragmentSpread}:
- Let {fragmentSpreadName} be the name of {selection}.
- If {fragmentSpreadName} provides the directive `@defer` and its {if}
argument is not {false} and is not a variable in {variableValues} with the
value {false}:
- Let {deferDirective} be that directive.
- If this execution is for a subscription operation, raise a _field
error_.
- If {deferDirective} is not defined:
- If {fragmentSpreadName} is in {visitedFragments}, continue with the next
{selection} in {selectionSet}.
- Add {fragmentSpreadName} to {visitedFragments}.
- Let {fragment} be the Fragment in the current Document whose name is
{fragmentSpreadName}.
- If no such {fragment} exists, continue with the next {selection} in
{selectionSet}.
- Let {fragmentType} be the type condition on {fragment}.
- If {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue
with the next {selection} in {selectionSet}.
- Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
- If {deferDirective} is defined:
- Let {label} be the value or variable provided to {deferDirective}'s {label}
argument.
- Let {deferredGroupedFields} be the result of calling
{CollectFields(objectType, fragmentSelectionSet, variableValues, path,
asyncRecord, visitedFragments, deferredGroupedFieldsList)}.
- Append a record containing {label} and {deferredGroupedFields} to
{deferredGroupedFieldsList}.
- Continue with the next {selection} in {selectionSet}.
- Let {fragmentGroupedFieldSet} be the result of calling
{CollectFields(objectType, fragmentSelectionSet, variableValues, path,
asyncRecord, visitedFragments, deferredGroupedFieldsList)}.
- For each {fragmentGroup} in {fragmentGroupedFieldSet}:
- Let {responseKey} be the response key shared by all fields in
{fragmentGroup}.
- Let {groupForResponseKey} be the list in {groupedFields} for
{responseKey}; if no such list exists, create it as an empty list.
- Append all items in {fragmentGroup} to {groupForResponseKey}.
- If {selection} is an {InlineFragment}:
- Let {fragmentType} be the type condition on {selection}.
- If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
fragmentType)} is false, continue with the next {selection} in
{selectionSet}.
- Let {fragmentSelectionSet} be the top-level selection set of {selection}.
- If {InlineFragment} provides the directive `@defer` and its {if} argument
is not {false} and is not a variable in {variableValues} with the value
{false}:
- Let {deferDirective} be that directive.
- If this execution is for a subscription operation, raise a _field
error_.
- If {deferDirective} is defined:
- Let {label} be the value or variable provided to {deferDirective}'s {label}
argument.
- Let {deferredGroupedFields} be the result of calling
{CollectFields(objectType, fragmentSelectionSet, variableValues, path,
asyncRecord, visitedFragments, deferredGroupedFieldsList)}.
- Append a record containing {label} and {deferredGroupedFields} to
{deferredGroupedFieldsList}.
- Continue with the next {selection} in {selectionSet}.
- Let {fragmentGroupedFieldSet} be the result of calling
{CollectFields(objectType, fragmentSelectionSet, variableValues, path,
asyncRecord, visitedFragments, deferredGroupedFieldsList)}.
- For each {fragmentGroup} in {fragmentGroupedFieldSet}:
- Let {responseKey} be the response key shared by all fields in
{fragmentGroup}.
- Let {groupForResponseKey} be the list in {groupedFields} for
{responseKey}; if no such list exists, create it as an empty list.
- Append all items in {fragmentGroup} to {groupForResponseKey}.
- Return {groupedFields} and {deferredGroupedFieldsList}.
Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
directives may be applied in either order since they apply commutatively.
DoesFragmentTypeApply(objectType, fragmentType):
- If {fragmentType} is an Object Type:
- if {objectType} and {fragmentType} are the same type, return {true},
otherwise return {false}.
- If {fragmentType} is an Interface Type:
- if {objectType} is an implementation of {fragmentType}, return {true}
otherwise return {false}.
- If {fragmentType} is a Union:
- if {objectType} is a possible type of {fragmentType}, return {true}
otherwise return {false}.
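Note: as code this is a three-way check against the schema. The following non-normative TypeScript sketch uses an assumed, simplified type representation.
```typescript
interface TypeInfo {
  kind: "OBJECT" | "INTERFACE" | "UNION";
  name: string;
  interfaces?: string[];     // for object types: names of implemented interfaces
  possibleTypes?: string[];  // for unions: names of member object types
}

function doesFragmentTypeApply(objectType: TypeInfo, fragmentType: TypeInfo): boolean {
  switch (fragmentType.kind) {
    case "OBJECT":
      return objectType.name === fragmentType.name;
    case "INTERFACE":
      return objectType.interfaces?.includes(fragmentType.name) ?? false;
    case "UNION":
      return fragmentType.possibleTypes?.includes(objectType.name) ?? false;
  }
}
```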
#### Async Payload Record
An Async Payload Record is either a Deferred Fragment Record or a Stream Record.
All Async Payload Records are structures containing:
- {label}: value derived from the corresponding `@defer` or `@stream` directive.
- {path}: a list of field names and indices from root to the location of the
corresponding `@defer` or `@stream` directive.
- {iterator}: The underlying iterator if created from a `@stream` directive.
- {isCompletedIterator}: a boolean indicating the payload record was generated
from an iterator that has completed.
- {errors}: a list of field errors encountered during execution.
- {dataExecution}: A result that can notify when the corresponding execution has
completed.
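Note: as a data structure this might be represented as follows (non-normative TypeScript); using a promise for {dataExecution} is an assumption about how "a result that can notify" could be modeled.
```typescript
// A record tracking one pending @defer or @stream payload.
interface AsyncPayloadRecord<TItem = unknown> {
  label?: string;                    // from the directive's label argument
  path: Array<string | number>;      // field names and list indices from the root
  iterator?: AsyncIterator<TItem>;   // only present for records created by @stream
  isCompletedIterator?: boolean;     // true once a stream's iterator has completed
  errors: unknown[];                 // field errors encountered during execution
  dataExecution: Promise<unknown>;   // settles when the corresponding execution completes
}
```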
#### Execute Deferred Fragment
ExecuteDeferredFragment(label, objectType, objectValue, groupedFieldSet, path,
variableValues, parentRecord, subsequentPayloads):
- Let {deferRecord} be an async payload record created from {label} and {path}.
- Initialize {errors} on {deferRecord} to an empty list.
- Let {dataExecution} be the asynchronous future value of:
- Let {payload} be an unordered map.
- Initialize {resultMap} to an empty ordered map.
- For each {groupedFieldSet} as {responseKey} and {fields}:
- Let {fieldName} be the name of the first entry in {fields}. Note: This
value is unaffected if an alias is used.
- Let {fieldType} be the return type defined for the field {fieldName} of
{objectType}.
- If {fieldType} is defined:
- Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
fields, variableValues, path, subsequentPayloads, deferRecord)}.
- Set {responseValue} as the value for {responseKey} in {resultMap}.
- Append any encountered field errors to {errors}.
- If {parentRecord} is defined:
- Wait for the result of {dataExecution} on {parentRecord}.
- If {errors} is not empty:
- Add an entry to {payload} named `errors` with the value {errors}.
- If a field error was raised, causing a {null} to be propagated to
{responseValue}:
- Add an entry to {payload} named `data` with the value {null}.
- Otherwise:
- Add an entry to {payload} named `data` with the value {resultMap}.
- If {label} is defined:
- Add an entry to {payload} named `label` with the value {label}.
- Add an entry to {payload} named `path` with the value {path}.
- Return {payload}.
- Set {dataExecution} on {deferRecord}.
- Append {deferRecord} to {subsequentPayloads}.
## Executing Fields
Each field requested in the grouped field set that is defined on the selected
objectType will result in an entry in the response map. Field execution first
coerces any provided argument values, then resolves a value for the field, and
finally completes that value either by recursively executing another selection
set or coercing a scalar value.
ExecuteField(objectType, objectValue, fieldType, fields, variableValues, path,
subsequentPayloads, asyncRecord):
- Let {field} be the first entry in {fields}.
- Let {fieldName} be the field name of {field}.
- Append {fieldName} to {path}.
- Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field,
variableValues)}
- Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
argumentValues)}.
- Let {result} be the result of calling {CompleteValue(fieldType, fields,
resolvedValue, variableValues, path, subsequentPayloads, asyncRecord)}.
- Return {result}.
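Note: stitched together, field execution is coerce, then resolve, then complete. A non-normative TypeScript sketch follows; the helper declarations are assumptions mirroring the algorithms in this section.
```typescript
// Assumed helpers following the algorithms defined elsewhere in this section.
declare function coerceArgumentValues(objectType: unknown, field: { name: string }, variableValues: Record<string, unknown>): Record<string, unknown>;
declare function resolveFieldValue(objectType: unknown, objectValue: unknown, fieldName: string, argumentValues: Record<string, unknown>): unknown;
declare function completeValue(fieldType: unknown, fields: Array<{ name: string }>, resolvedValue: unknown, variableValues: Record<string, unknown>, path: Array<string | number>): unknown;

function executeField(
  objectType: unknown,
  objectValue: unknown,
  fieldType: unknown,
  fields: Array<{ name: string }>,
  variableValues: Record<string, unknown>,
  path: Array<string | number>
): unknown {
  const field = fields[0];
  const fieldPath = [...path, field.name];   // extend the path for error reporting
  const argumentValues = coerceArgumentValues(objectType, field, variableValues);
  const resolvedValue = resolveFieldValue(objectType, objectValue, field.name, argumentValues);
  return completeValue(fieldType, fields, resolvedValue, variableValues, fieldPath);
}
```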
### Coercing Field Arguments
Fields may include arguments which are provided to the underlying runtime in
order to correctly produce a value. These arguments are defined by the field in
the type system to have a specific input type.
At each argument position in an operation may be a literal {Value}, or a
{Variable} to be provided at runtime.
CoerceArgumentValues(objectType, field, variableValues):
- Let {coercedValues} be an empty unordered Map.
- Let {argumentValues} be the argument values provided in {field}.
- Let {fieldName} be the name of {field}.
- Let {argumentDefinitions} be the arguments defined by {objectType} for the
field named {fieldName}.
- For each {argumentDefinition} in {argumentDefinitions}:
- Let {argumentName} be the name of {argumentDefinition}.
- Let {argumentType} be the expected type of {argumentDefinition}.
- Let {defaultValue} be the default value for {argumentDefinition}.
- Let {hasValue} be {true} if {argumentValues} provides a value for the name
{argumentName}.
- Let {argumentValue} be the value provided in {argumentValues} for the name
{argumentName}.
- If {argumentValue} is a {Variable}:
- Let {variableName} be the name of {argumentValue}.
- Let {hasValue} be {true} if {variableValues} provides a value for the name
{variableName}.
- Let {value} be the value provided in {variableValues} for the name
{variableName}.
- Otherwise, let {value} be {argumentValue}.
- If {hasValue} is not {true} and {defaultValue} exists (including {null}):
- Add an entry to {coercedValues} named {argumentName} with the value
{defaultValue}.
- Otherwise if {argumentType} is a Non-Nullable type, and either {hasValue} is
not {true} or {value} is {null}, raise a _field error_.
- Otherwise if {hasValue} is true:
- If {value} is {null}:
- Add an entry to {coercedValues} named {argumentName} with the value
{null}.
- Otherwise, if {argumentValue} is a {Variable}:
- Add an entry to {coercedValues} named {argumentName} with the value
{value}.
- Otherwise:
- If {value} cannot be coerced according to the input coercion rules of
{argumentType}, raise a _field error_.
- Let {coercedValue} be the result of coercing {value} according to the
input coercion rules of {argumentType}.
- Add an entry to {coercedValues} named {argumentName} with the value
{coercedValue}.
- Return {coercedValues}.
Note: Variable values are not coerced because they are expected to be coerced
before executing the operation in {CoerceVariableValues()}, and valid operations
must only allow usage of variables of appropriate types.
### Value Resolution
While nearly all of GraphQL execution can be described generically, ultimately
the internal system exposing the GraphQL interface must provide values. This is
exposed via {ResolveFieldValue}, which produces a value for a given field on a
type for a real value.
As an example, this might accept the {objectType} `Person`, the {field}
{"soulMate"}, and the {objectValue} representing John Lennon. It would be
expected to yield the value representing Yoko Ono.
List values are resolved similarly. For example, {ResolveFieldValue} might also
accept the {objectType} `MusicBand`, the {field} {"members"}, and the
{objectValue} representing the Beatles. It would be expected to yield a
collection of values representing John Lennon, Paul McCartney, Ringo Starr and
George Harrison.
ResolveFieldValue(objectType, objectValue, fieldName, argumentValues):
- Let {resolver} be the internal function provided by {objectType} for
determining the resolved value of a field named {fieldName}.
- Return the result of calling {resolver}, providing {objectValue} and
{argumentValues}.
Note: It is common for {resolver} to be asynchronous due to relying on reading
an underlying database or networked service to produce a value. This
requires the rest of a GraphQL executor to handle an asynchronous execution
flow. In addition, an implementation for collections may leverage asynchronous
iterators or asynchronous generators provided by many programming languages.
This may be particularly helpful when used in conjunction with the `@stream`
directive.
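Note: one common (non-normative) representation is a resolver map keyed by type name and field name, sketched below in TypeScript; the `db` data-access calls are purely hypothetical.
```typescript
// Hypothetical data-access layer; these calls are illustrative only.
declare const db: {
  findSoulMate(personId: string): Promise<{ id: string; name: string }>;
  findBandMembers(bandId: string): Promise<Array<{ id: string; name: string }>>;
};

// Resolver functions keyed by type name, then field name. The executor calls the
// resolver for the field being executed with the object value and coerced arguments.
const resolvers = {
  Person: {
    // Asynchronous resolver backed by a data store lookup.
    soulMate: (person: { id: string }, _args: Record<string, unknown>) =>
      db.findSoulMate(person.id),
  },
  MusicBand: {
    // List-valued resolver; value completion later completes each member of the list.
    members: (band: { id: string }, _args: Record<string, unknown>) =>
      db.findBandMembers(band.id),
  },
};

// For example, ResolveFieldValue(Person, johnLennon, "soulMate", {}) reduces to:
//   resolvers.Person.soulMate(johnLennon, {})  // resolves to the record for Yoko Ono
```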
### Value Completion
After resolving the value for a field, it is completed by ensuring it adheres to
the expected return type. If the return type is another Object type, then the
field execution process continues recursively. If the return type is a List
type, each member of the resolved collection is completed using the same value
completion process. In the case where `@stream` is specified on a field of list
type, value completion iterates over the collection until the number of yielded
items satisfies `initialCount` specified on the `@stream` directive.
#### Execute Stream Field
ExecuteStreamField(label, iterator, index, fields, innerType, path,
parentRecord, variableValues, subsequentPayloads):
- Let {streamRecord} be an async payload record created from {label}, {path},
and {iterator}.
- Initialize {errors} on {streamRecord} to an empty list.
- Let {itemPath} be {path} with {index} appended.
- Let {dataExecution} be the asynchronous future value of:
- Wait for the next item from {iterator}.
- If an item is not retrieved because {iterator} has completed:
- Set {isCompletedIterator} to {true} on {streamRecord}.
- Return {null}.
- Let {payload} be an unordered map.
- If an item is not retrieved because of an error:
- Append the encountered error to {errors}.
- Add an entry to {payload} named `items` with the value {null}.
- Otherwise:
- Let {item} be the item retrieved from {iterator}.
- Let {data} be the result of calling {CompleteValue(innerType, fields,
item, variableValues, itemPath, subsequentPayloads, parentRecord)}.
- Append any encountered field errors to {errors}.
- Increment {index}.
- Call {ExecuteStreamField(label, iterator, index, fields, innerType, path,
streamRecord, variableValues, subsequentPayloads)}.
- If a field error was raised, causing a {null} to be propagated to {data},
and {innerType} is a Non-Nullable type:
- Add an entry to {payload} named `items` with the value {null}.
- Otherwise:
- Add an entry to {payload} named `items` with a list containing the value
{data}.
- If {errors} is not empty:
- Add an entry to {payload} named `errors` with the value {errors}.
- If {label} is defined:
- Add an entry to {payload} named `label` with the value {label}.
- Add an entry to {payload} named `path` with the value {itemPath}.
- If {parentRecord} is defined:
- Wait for the result of {dataExecution} on {parentRecord}.
- Return {payload}.
- Set {dataExecution} on {streamRecord}.
- Append {streamRecord} to {subsequentPayloads}.
CompleteValue(fieldType, fields, result, variableValues, path,
subsequentPayloads, asyncRecord):
- If the {fieldType} is a Non-Null type:
- Let {innerType} be the inner type of {fieldType}.
- Let {completedResult} be the result of calling {CompleteValue(innerType,
fields, result, variableValues, path)}.
- If {completedResult} is {null}, raise a _field error_.
- Return {completedResult}.
- If {result} is {null} (or another internal value similar to {null} such as
{undefined}), return {null}.
- If {fieldType} is a List type:
- If {result} is not a collection of values, raise a _field error_.
- Let {field} be the first entry in {fields}.
- Let {innerType} be the inner type of {fieldType}.
- If {field} provides the directive `@stream` and its {if} argument is not
{false} and is not a variable in {variableValues} with the value {false} and
{innerType} is the outermost return type of the list type defined for
{field}:
- Let {streamDirective} be that directive.
- If this execution is for a subscription operation, raise a _field error_.
- Let {initialCount} be the value or variable provided to
{streamDirective}'s {initialCount} argument.
- If {initialCount} is less than zero, raise a _field error_.
- Let {label} be the value or variable provided to {streamDirective}'s
{label} argument.
- Let {iterator} be an iterator for {result}.