bug: fix a logical (locking) error in the Loader.Load method #113
I've noticed that this package hasn't been updated in quite a while, but I hope this PR can be merged and warrants a new release. The reason for opening it is that we encountered errors where a request would never get a result.
It took some debugging and searching, but in the end we found out what happened. I tried to write a test for it, but since the bug is very timing-sensitive (several things need to happen at more or less the same moment) I was not able to come up with a reliable test. So hopefully describing the issue here is enough to explain both the problem and the fix...
The problem is related to the locking logic. Currently the `Loader.Load` method first locks the `cacheLock` to check the cache and, if needed, insert a new request, and then unlocks the `cacheLock` again. Right after that it locks the `batchLock` in order to, if needed, create a new batcher, then sends the new request to the batcher, and finally checks whether `batchCap` is reached, in which case it does two things: it clears the cache (when `clearCacheOnBatch` is set) and resets the batcher.
Note that these two things will also happen when a sleeping batcher reaches its wait timeout!
Now if a new request comes in, gets past the `cacheLock`, and then has to wait for the `batchLock`, it may be that the logic currently holding the `batchLock` clears the cache and resets the batcher, so a new batcher will be created. In that case the request that was inserted into the "old" cache will be sent to a "new" batcher as soon as the `batchLock` is released. If in this specific situation the same request is sent again (i.e. before the "new" batcher is executed), it will be seen as a new request by the cache, so it will be inserted into the cache and sent to the batcher a second time. This breaks the guarantee that each request sent to the batcher is unique, which in turn causes different kinds of issues depending on the logic you've written in your loaders.
Hope this all makes sense; please let me know if you need anything else from me to get this merged and released.
Thanks!