Feature Request - Cache Failure should be Non Fatal #259
If the cache fails, that should be non-fatal and should only result in degraded performance. Deepstream should try to reconnect to the cache.

Related to: #207

Comments
How would this work in terms of data consistency? We only consider a record update to be successful once it has been written to the cache, so this would create a big bottleneck and risk losing all in-flight data, since it would sit in an in-memory queue. Ideally you would use a cache connector with multiple nodes in a cluster, and the cache driver should switch over automatically. I believe this is covered by the 'ioredis' driver that redis is now using.
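A minimal sketch of what that automatic switch-over could look like, assuming ioredis is the underlying driver: pointing the client at a Redis Sentinel group lets it follow a master fail-over instead of dying with the cache node. The hosts, ports, and master name ('mymaster') below are placeholders, not values from this issue.

```ts
import Redis from 'ioredis';

// Hypothetical Sentinel endpoints; replace with your own deployment.
const cache = new Redis({
  sentinels: [
    { host: '10.0.0.1', port: 26379 },
    { host: '10.0.0.2', port: 26379 },
  ],
  name: 'mymaster', // the master group Sentinel should resolve
  // Keep retrying with a capped back-off instead of failing fatally.
  retryStrategy: (times: number) => Math.min(times * 100, 2000),
});

cache.on('error', (err: Error) => {
  // Connection errors surface here while ioredis reconnects in the background.
  console.warn('cache connection error (non-fatal):', err.message);
});
```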
Will have to read a bit more about this:
I believe this is a dangerous assumption in general when talking about caches. Shouldn't a record update only be considered successful once it has been written to storage? What are the implications of that?
Not sure...
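One way to read that implication, sketched below under the assumption of simple promise-based set() interfaces (deepstream's actual connector API is callback-based, so this is illustrative only): if storage is the durability boundary, the cache write can be best-effort, and a cache failure costs performance rather than data.

```ts
// Hypothetical async key/value interface, for illustration only.
interface KV {
  set(key: string, value: object): Promise<void>;
}

async function updateRecord(storage: KV, cache: KV, key: string, value: object): Promise<void> {
  // Durability first: only acknowledge the update once storage confirms it.
  await storage.set(key, value);
  try {
    // The cache write is best-effort: a failure here degrades read
    // performance but loses no data, so it need not be fatal.
    await cache.set(key, value);
  } catch (err) {
    console.warn(`cache write failed for ${key}; it will be repopulated on the next read`, err);
  }
}
```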
@yasserf: Given redis: what if items are evicted (due to e.g. LRU) before they are written to storage? Isn't that, from deepstream's perspective, the same scenario as if redis died? I believe this is related to: #278
If this is the case then I believe the cache API needs to be updated so you can signal when items are allowed to be evicted by the cache... which would be strange...
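Purely to illustrate how strange that would be, here is one hypothetical shape such an eviction-aware cache API could take; none of these names exist in deepstream's actual cache connector interface.

```ts
// Hypothetical interface: the cache may only evict a key once deepstream
// has released it, i.e. after the record was persisted to storage.
interface EvictionAwareCache {
  set(key: string, value: object, done: (err: Error | null) => void): void;
  get(key: string, done: (err: Error | null, value?: object) => void): void;
  // Prevent eviction of `key` until release() is called.
  pin(key: string): void;
  // Allow `key` to be evicted again (e.g. once storage confirmed the write).
  release(key: string): void;
}
```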
-> write to cache (resets LRU)

If the write to the cache fails, the user gets notified of the failure via a storage_error, meaning they should revert the changes or use some other mechanism to resend the update.
-> write 1 to cache

Although such a scenario is probably unlikely, not sure.
Write to cache fails, client receives a RECORD_UPDATE_ERROR

Your scenario is perfectly valid! This does present an issue: I thought the client received a storage write error, but by the time storage is written, the handlers for the sender have already been cleared, hence just the log entry. I'll raise this with the core team and discuss how this plays out in terms of robustness. Will add a new label for robustness issues.
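For completeness, a sketch of where such an error would surface on the client side, assuming the deepstream.io-client-js API; the exact event name and payload shape may differ between client versions.

```ts
const deepstream = require('deepstream.io-client-js');

const client = deepstream('localhost:6020').login();

// Errors that can no longer be routed to a specific write callback are
// emitted on the client's generic 'error' event.
client.on('error', (error: string, event: string, topic: string) => {
  if (event === 'RECORD_UPDATE_ERROR') {
    // Revert the optimistic local change, or queue the update for re-sending.
  }
});
```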