Expected Behavior
If I load multiple keys (some of them duplicates) before a batch is dispatched, I would expect my batch function to be called Math.ceil(queueSize / maxBatchSize) times after the intentional delay introduced by batchScheduleFn, where queueSize is the number of unique keys that have yet to be dispatched.
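For example, with maxBatchSize: 3 and three unique keys queued (however many duplicate load calls were made for them), Math.ceil(3 / 3) = 1, so a single batch should be dispatched.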
Current Behavior
The getCurrentBatch function creates a new batch when existingBatch.cacheHits.length reaches maxBatchSize, spreading the keys across more batches than are needed.
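For reference, the reuse check inside getCurrentBatch looks roughly like the sketch below (a paraphrase, not the exact source; field names are approximate). The point is that cacheHits.length is compared against maxBatchSize the same way keys.length is, even though cache hits for duplicate keys never add work to the batch function.

```js
// Simplified paraphrase of the batch-reuse check inside DataLoader's
// getCurrentBatch (not verbatim; field names are approximate). A batch is
// reused only while BOTH keys.length and cacheHits.length are below
// maxBatchSize, so duplicate loads that only add cache hits can still force
// a new batch to be created.
function canReuseBatch(existingBatch, maxBatchSize) {
  return (
    existingBatch !== null &&
    !existingBatch.hasDispatched &&
    existingBatch.keys.length < maxBatchSize &&
    (!existingBatch.cacheHits || existingBatch.cacheHits.length < maxBatchSize)
  );
}
```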
Possible Solution
https://github.com/graphql/dataloader/blob/master/src/index.js#L92
On the line above, DataLoader pushes to the cacheHits array regardless of whether the key is a duplicate of a previous load in the same batch. If this were instead a Map<key, fn>, duplicate loads would not grow the collection, and new batches would no longer be created when they aren't needed.
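A rough sketch of that idea (illustrative only, not a patch against the actual source; the helper names are invented):

```js
// Illustrative sketch only: store cache hits in a Map keyed by the cache key,
// so a duplicate load of the same key overwrites one entry instead of
// growing an array.
function recordCacheHit(batch, cacheKey, resolveFromCache) {
  if (!batch.cacheHits) {
    batch.cacheHits = new Map();
  }
  batch.cacheHits.set(cacheKey, resolveFromCache);
}

// The size check would then count unique keys only, so repeated loads of the
// same key can never push a batch past maxBatchSize.
function hasCapacity(batch, maxBatchSize) {
  return (
    batch.keys.length < maxBatchSize &&
    (!batch.cacheHits || batch.cacheHits.size < maxBatchSize)
  );
}
```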
Steps to Reproduce
```js
import DataLoader from 'dataloader';

const loader = new DataLoader(
  async (keys) => {
    console.log(keys);
    return keys; // return one value per key so the loader resolves cleanly
  },
  {
    maxBatchSize: 3,
    batchScheduleFn: (callback) => setTimeout(callback, 100),
  }
);

const keys = ['a', 'b', 'a', 'a', 'a', 'b', 'c'];
for (const key of keys) {
  loader.load(key);
}

/*
The code above will log 2 separate batches:
[ 'a', 'b' ]
[ 'c' ]

Instead of the expected single batch of:
[ 'a', 'b', 'c' ]
*/
```
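Until the batching logic changes, one possible interim workaround for call sites that can see the duplicates up front (it does not help the Apollo case described under Context, where independent resolvers issue the loads) is to de-duplicate the keys before calling load:

```js
// Possible application-side workaround, assuming the duplicate keys are
// visible at a single call site: de-duplicate before calling load so repeated
// keys never register as cache hits on the pending batch.
for (const key of new Set(keys)) {
  loader.load(key);
}
// With the keys from the example above this issues loads for 'a', 'b', 'c'
// only, and a single batch of [ 'a', 'b', 'c' ] is dispatched.
```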
Context
We are trying to batch as many IDs as we can into a single request using DataLoader. However, since we are using Apollo GraphQL, there are times when a query might attempt to load the same ID for different fields within the schema. When this happens enough times, it reaches maxBatchSize and causes the issue outlined above, where new batches are created when they don't need to be.
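As a concrete (hypothetical) illustration of how those duplicates arise, two resolvers for different fields can end up loading the same ID through the same loader instance:

```js
// Hypothetical resolvers, purely for illustration: the field names and the
// userLoader in context are invented. All three fields resolve through the
// same DataLoader, so a single query can call load() with the same user ID
// more than once; each duplicate becomes a cache hit on the pending batch.
const resolvers = {
  Post: {
    author: (post, _args, { userLoader }) => userLoader.load(post.authorId),
    lastEditor: (post, _args, { userLoader }) => userLoader.load(post.lastEditorId),
  },
  Comment: {
    author: (comment, _args, { userLoader }) => userLoader.load(comment.authorId),
  },
};
```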