Support a maximum batch size per DataSource
#76
Merged
- Added a `maxBatchSizePhase` interpreter which splits fetches to data sources with a maximum batch size into multiple sequential fetches. This required a change to `Concurrent` to return an `InMemoryCache` instead of a `DataSourceCache`, making it possible to combine multiple caches for sequential batches.
- Simplified the `processConcurrent` method, which is used in the `coreInterpreter` (previously called `interpreter`), by using `XorT` instead of nested `if`/`else` and `Option.fold`, and by extracting some helper methods.
- Gave the `Fetch` constructors a better name (e.g. `FetchMany.as` to `FetchMany.ids`).
- Added a `Fetch.multiple` method which makes it possible to create a `Fetch[List[...]]` directly (using `FetchMany` underneath). This can currently (and previously) be done by using `sequence` or `traverse`, but `multiple` doesn't have the overhead of inspecting the Fetch structure to find independent fetches which can be parallelized.
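The batch-splitting behavior described above can be sketched in plain Scala, independent of Fetch's internal API. `fetchBatch`, `fetchWithMaxBatchSize`, and the `Map`-as-cache are hypothetical stand-ins for the real interpreter and `InMemoryCache` machinery:

```scala
object BatchSplit {
  // Hypothetical data source: fetches a batch of ids with no limit of its own.
  def fetchBatch(ids: List[Int]): Map[Int, String] =
    ids.map(i => i -> s"user-$i").toMap

  // Split the requested ids into chunks of at most `maxBatchSize`, run the
  // chunks sequentially, and merge the partial results — analogous to
  // combining the per-batch caches into one.
  def fetchWithMaxBatchSize(ids: List[Int], maxBatchSize: Option[Int]): Map[Int, String] =
    maxBatchSize match {
      case None => fetchBatch(ids)
      case Some(size) =>
        ids.grouped(size).foldLeft(Map.empty[Int, String]) { (acc, chunk) =>
          acc ++ fetchBatch(chunk)
        }
    }
}
```

With `maxBatchSize = Some(2)`, a request for five ids becomes three sequential batches whose results are merged into a single map.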
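The `processConcurrent` refactor above follows a general pattern: replacing nested `if`/`else` and `Option.fold` with a chained monadic error type (`XorT` was cats' transformer over its `Xor`, the precursor of today's `EitherT`). A minimal sketch with plain `Either` — the names and error cases here are illustrative, not Fetch's real internals:

```scala
object EitherChain {
  sealed trait FetchError
  final case class MissingIdentity(id: Int) extends FetchError
  final case class InvalidValue(value: String) extends FetchError

  // Each step yields Either instead of branching with if/else or Option.fold.
  def lookup(id: Int, cache: Map[Int, String]): Either[FetchError, String] =
    cache.get(id).toRight(MissingIdentity(id))

  def validate(value: String): Either[FetchError, String] =
    if (value.nonEmpty) Right(value) else Left(InvalidValue(value))

  // The steps compose in a single for-comprehension; the first Left
  // short-circuits, which is what the nested branches encoded by hand.
  def process(id: Int, cache: Map[Int, String]): Either[FetchError, String] =
    for {
      cached <- lookup(id, cache)
      valid  <- validate(cached)
    } yield valid
}
```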
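The overhead difference between `traverse` and `multiple` comes from how the Fetch structure is built. A toy ADT makes the shape difference visible — `MiniFetch`, `One`, `Many`, and `Sequenced` are invented for illustration and are not Fetch's actual representation:

```scala
sealed trait MiniFetch[+A]
final case class One(id: Int) extends MiniFetch[String]
final case class Many(ids: List[Int]) extends MiniFetch[List[String]]
final case class Sequenced(fetches: List[One]) extends MiniFetch[List[String]]

object MiniFetch {
  // traverse-style construction: one node per id; the interpreter must later
  // inspect the structure to discover that these fetches can be batched.
  def viaTraverse(ids: List[Int]): MiniFetch[List[String]] =
    Sequenced(ids.map(One(_)))

  // multiple-style construction: a single batched node up front,
  // so no structure inspection is needed.
  def viaMultiple(ids: List[Int]): MiniFetch[List[String]] =
    Many(ids)
}
```

Both describe fetching the same ids, but `viaMultiple` yields one node where `viaTraverse` yields N nodes that have to be recognized as independent before they can be batched.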