Redis indexation #102
Comments
Sounds like a breaking change for the redis implementation... (we should at least mention this in the release notes)
I managed to make something work, but I need to do more tests.
I think it's ok to replace the current implementation once we can prove the new one is better, e.g. in terms of performance.
Sure, but the store needs to be indexed before any usage. It is done on
But is the already stored data compatible with the new implementation?
Yes sure, it's fully compatible. The events are not updated; only some keys are added to better index events and snapshots. Once you switch to an indexed redis store, the indexation is done. You can go back to a non-indexed redis store, but you will have to do it manually (by removing a dedicated key).
What I can do, when you want to use an indexed redis store, is:
* check whether the database is already indexed (using a dedicated key); if not, you will have to do it manually with a call to an index() method on the store (maybe in a script)
* if you don't run the indexation, the common scan() method will be used, based on the SCAN redis command
* once the indexation is done, the scan() method will use the indexes
Why not offer a script for this; but I was wondering how to discover all the options needed to connect to the redis instance. Maybe giving a common example in the documentation would be a first step forward. Are you OK with that?
Yes, let's try :-)
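The behaviour described above (a dedicated marker key, a one-off manual `index()` call, and a `scan()` that falls back to the SCAN command until the index exists) could be sketched like this. This is a pure-Python stand-in with plain dicts instead of redis; the names `FakeStore`, `is_indexed`, and `INDEX_MARKER_KEY` are illustrative assumptions, not the library's actual API.

```python
INDEX_MARKER_KEY = "eventstore:indexed"  # hypothetical dedicated key

class FakeStore:
    """Stand-in for a redis-backed event store (plain dicts, no redis)."""
    def __init__(self):
        self.kv = {}     # simulates plain redis keys
        self.zsets = {}  # simulates sorted sets (the index)

    def is_indexed(self):
        return INDEX_MARKER_KEY in self.kv

    def index(self):
        # Build one sorted set per aggregate from the existing event keys.
        for key in self.kv:
            if key.startswith("events:"):
                _, aggregate, revision = key.split(":")
                self.zsets.setdefault("index:" + aggregate, []).append(
                    (int(revision), key))
        for zset in self.zsets.values():
            zset.sort()
        self.kv[INDEX_MARKER_KEY] = "1"  # mark the store as indexed

    def scan(self, aggregate):
        if self.is_indexed():
            # ZRANGE-like lookup: read the pre-sorted index directly.
            return [k for _, k in self.zsets.get("index:" + aggregate, [])]
        # SCAN-like fallback: walk every key and filter.
        prefix = "events:%s:" % aggregate
        keys = [k for k in self.kv if k.startswith(prefix)]
        return sorted(keys, key=lambda k: int(k.split(":")[2]))

store = FakeStore()
for rev in range(3):
    store.kv["events:order-1:%d" % rev] = "{}"

before = store.scan("order-1")  # falls back to SCAN-style filtering
store.index()                   # one-off manual indexation
after = store.scan("order-1")   # now served from the index
print(before == after)          # both strategies return the same keys
```

Removing `INDEX_MARKER_KEY` would switch the store back to the SCAN fallback, which matches the "go back manually by removing a dedicated key" idea above.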
@rehia do you still have interest in this, or plans to execute on it? We are sitting at around 30k events in redis, and noticing it slowing down as well. I think this solution (the sorted-set indexing described above) could work. If you don't have plans to continue or work on it, would you be willing to share a WIP branch with your implementation? If/when I find some time, I could pick up where you left off and carry this forward.
@TyGuy hi! Sorry, but I couldn't find the time to start anything on this. You can do it, if you have time for it. The solution I suggested could be a good one, but as usual, it must first be validated in the field. I hope you'll find more time than me 😅 Good luck!
In #94, I was talking about some ways to improve the redis implementation's performance, mostly when there are a large number of events (or snapshots).
The problem with the actual implementation is the `scan`, which usually needs to scan every event (or snapshot) of an aggregate, and then make the necessary calculations to keep only the needed events.

I have made a quick test with my production data. I have an aggregate with 46K+ events, out of a total amount of 117K+. I tried to scan all the keys (only scanning, no `get` or `mget`) using 2 strategies:
* `scan` on all the keys. It took 6181ms.
* `zrange` on all the keys. It took 415ms.

I think that sorted sets could be a good option to index aggregate keys, at least for 2 reasons:
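The gap between the two timings above comes down to how many keys each strategy has to touch: SCAN iterates the whole keyspace and filters, while ZRANGE reads only the members of one sorted set. A pure-Python stand-in (no real redis; `scan_strategy` and `zrange_strategy` are illustrative names) makes the difference visible by counting visited keys:

```python
# SCAN-style pass: must visit every key in the database, then filter.
def scan_strategy(all_keys, aggregate):
    visited = 0
    out = []
    for key in all_keys:  # SCAN walks the whole keyspace
        visited += 1
        if key.startswith("events:%s:" % aggregate):
            out.append(key)
    out.sort(key=lambda k: int(k.split(":")[2]))  # order by revision
    return out, visited

# ZRANGE-style read: touches only the queried aggregate's sorted set.
def zrange_strategy(index, aggregate):
    members = index.get(aggregate, [])  # already sorted by score (revision)
    return [k for _, k in members], len(members)

# 10 aggregates x 100 events each; "agg0" is the one we query.
all_keys = ["events:agg%d:%d" % (a, r) for a in range(10) for r in range(100)]
index = {}
for key in all_keys:
    _, agg, rev = key.split(":")
    index.setdefault(agg, []).append((int(rev), key))
for members in index.values():
    members.sort()

scan_keys, scan_visited = scan_strategy(all_keys, "agg0")
z_keys, z_visited = zrange_strategy(index, "agg0")
print(scan_keys == z_keys)      # same result set either way
print(scan_visited, z_visited)  # 1000 vs 100 keys touched
```

Both strategies return the same keys in the same order, but the SCAN-style pass visits ten times as many keys here; with 117K+ keys in production, that ratio is what the 6181ms vs 415ms measurement reflects.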
This means that the database needs to be indexed at first, and the index maintained. So whenever a client connects, we need to check whether the store has been indexed; if it hasn't, scan all the aggregates, scan their events and snapshots keys, and finally index the keys (events and snapshots separately, obviously). Once done, each time an event is added, its key should be added to the corresponding sorted set.
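The "maintain the index on every write" step could look like the following sketch. A real implementation would issue a redis ZADD with the revision as the score; here plain lists and the stdlib `bisect` module stand in, and `add_event` is a hypothetical helper, not the library's API.

```python
import bisect

kv = {}     # plain keys, as today
index = {}  # aggregate -> sorted [(revision, key)], simulating a zset

def add_event(aggregate, revision, payload):
    key = "events:%s:%d" % (aggregate, revision)
    kv[key] = payload
    # ZADD equivalent: insert while keeping the set ordered by revision
    # (the score), so later range reads stay cheap.
    bisect.insort(index.setdefault(aggregate, []), (revision, key))
    return key

for rev in (2, 0, 1):  # out-of-order writes still end up indexed correctly
    add_event("order-7", rev, "{}")

ordered = [k for _, k in index["order-7"]]
print(ordered)
# ['events:order-7:0', 'events:order-7:1', 'events:order-7:2']
```

Because each write keeps the sorted set ordered by revision, reads never need a full rescan after the initial indexation pass.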
What do you think about this solution? I'll try to work on an implementation. But if you have any remarks, objections or suggestions, don't hesitate!