Feature Request - Cache Failure should be Non Fatal #259

Closed
ronag opened this issue Jul 2, 2016 · 6 comments

@ronag

ronag commented Jul 2, 2016

If the cache fails, that should not be fatal; it should only result in degraded performance. Deepstream should try to reconnect to the cache.

Related to: #207

@yasserf

yasserf commented Jul 10, 2016

How would this work in terms of data consistency? We only consider a record update to be successful when it is written to the cache, so this would create a big bottleneck and risk losing all in-flight data, since it would sit in an in-memory queue.

Ideally you would use a cache connector with multiple nodes in a cluster, and the cache driver should switch over automatically. I believe this is covered by the 'ioredis' driver that redis is now using.
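
For illustration, a minimal sketch of what that automatic failover could look like with ioredis, assuming a Redis Sentinel deployment (the sentinel hosts and master name below are placeholders, not anything deepstream ships):

```js
// Minimal sketch, assuming a Redis Sentinel setup; the sentinel hosts
// and master group name are placeholders.
const Redis = require('ioredis')

// ioredis asks the sentinels for the current master and transparently
// switches if that master goes down, so a cache connector built on it
// would get failover without deepstream treating the outage as fatal.
const redis = new Redis({
  sentinels: [
    { host: 'sentinel-1', port: 26379 },
    { host: 'sentinel-2', port: 26379 }
  ],
  name: 'mymaster' // placeholder master group name
})

redis.on('error', (err) => {
  // Emitted while ioredis retries in the background; non-fatal.
  console.log('redis error (reconnecting):', err.message)
})
```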

@ronag

ronag commented Jul 10, 2016

> I believe this is covered by the 'ioredis' driver that redis is now using.

Will have to read a bit more about this.

> We only consider a record update to be successful when it is written to the cache...

I believe this is a dangerous assumption in general when talking about caches. Shouldn't a record update only be considered successful once it has been written to storage? What are the implications of that?

> How would this work in terms of data consistency?

Not sure...

@ronag

ronag commented Jul 12, 2016

@yasserf: Given redis: what if an item is evicted (e.g. due to LRU) before it has been written to storage? Isn't that, from deepstream's perspective, the same scenario as redis dying?

I believe this is related to: #278

> We only consider a record update to be successful when it is written to the cache...

If that is the case, then I believe the cache API needs to be updated so you can signal when the cache is allowed to evict an item... which would be strange...
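
For the sake of argument, an entirely hypothetical sketch of what such an eviction-signalling cache might look like (none of these names exist in deepstream's cache connector API; `markDurable` is invented):

```js
// Hypothetical sketch of a cache that may only evict an entry once the
// caller signals it is safely in storage. Not deepstream's actual API.
class EvictionAwareCache {
  constructor (maxEntries) {
    this.maxEntries = maxEntries
    this.entries = new Map() // key -> value, in least-recently-set order
    this.pinned = new Set()  // keys that must not be evicted yet
  }

  set (key, value) {
    this.entries.delete(key) // re-inserting moves the key to the end
    this.entries.set(key, value)
    this.pinned.add(key)     // not yet durable, so not evictable
    this._evictIfNeeded()
  }

  markDurable (key) {
    // Called once the storage write has succeeded.
    this.pinned.delete(key)
    this._evictIfNeeded()
  }

  _evictIfNeeded () {
    for (const key of this.entries.keys()) {
      if (this.entries.size <= this.maxEntries) return
      if (!this.pinned.has(key)) this.entries.delete(key)
    }
  }
}
```

Note the awkwardness this exposes: if storage is slow, pinned entries pile up and the cache can exceed its size limit, which is presumably why signalling evictability "would be strange".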

@yasserf

yasserf commented Jul 13, 2016

-> write to cache (resets LRU)
-> if write to cache is successful, write to storage

If the write to cache fails, the user is notified of the failure via a storage_error, meaning they should revert the change or provide some other mechanism to resend the update.
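
A rough sketch of that ordering, with stub connectors standing in for deepstream's actual cache and storage internals (all names here are placeholders):

```js
// Rough sketch of the ordering described above; the connectors are
// stubs, not deepstream's actual implementation.
const cache = { set: (name, data, cb) => cb(null) }   // stub connector
const storage = { set: (name, data, cb) => cb(null) } // stub connector

function updateRecord (recordName, data, sendError) {
  cache.set(recordName, data, (cacheError) => {
    if (cacheError) {
      // Cache write failed: the sender is told and should revert the
      // change or resend the update.
      sendError(recordName, 'STORAGE_ERROR')
      return
    }
    // Cache write succeeded (resetting the entry's LRU position);
    // only now is the update forwarded to storage.
    storage.set(recordName, data, () => {})
  })
}
```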

@ronag

ronag commented Jul 13, 2016

-> write 1 to cache
-> written 1 to cache
-> write 1 to storage
-> write 2...100 to cache
-> evict 1 from cache
-> failed write 1 to storage
-> invariant broken?

Although such a scenario is probably unlikely... not sure.

@yasserf

yasserf commented Jul 13, 2016

If the write to cache fails, the client receives a RECORD_UPDATE_ERROR.
If the write to storage fails, it currently just logs an error.

Your scenario is perfectly valid!

This does present an issue. I thought the client received a storage write error, but by the time storage is written the handlers for the sender have already been cleared, hence just the log. I'll raise this with the core team and discuss how this plays out in terms of robustness.
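
A hypothetical illustration of that timing problem (all names invented; not deepstream's actual code): the ack is sent and the sender's handlers released as soon as the cache write succeeds, so the later storage callback has no one left to notify.

```js
// Invented names throughout; stubs stand in for the real connectors.
const cache = { set: (n, d, cb) => cb(null) }   // stub connector
const storage = { set: (n, d, cb) => cb(null) } // stub connector

function handleUpdate (sender, recordName, data) {
  cache.set(recordName, data, (cacheError) => {
    if (cacheError) {
      sender.sendError(recordName, 'RECORD_UPDATE_ERROR')
      return
    }
    sender.sendAck(recordName) // update acknowledged here...
    sender.release()           // ...and the sender's handlers cleared

    storage.set(recordName, data, (storageError) => {
      if (storageError) {
        // No handler is left to notify, so logging is all that remains.
        console.error('storage write failed:', recordName)
      }
    })
  })
}
```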

Will add a new label for robustness issues.

yasserf closed this as completed Mar 11, 2019