ui: Reconcile ember-data store when records are deleted via blocking #5745
Conversation
This feels strange to me, to track this state independently on each model in this way. Wouldn't this effectively recycle the UI state every time we modify SyncTime? Couldn't we just reconcile by watching the length of records in the response and updating the view if that changes?
(force-pushed from d43947f to 4ca64ea)
What is causing the store build-up currently? Shouldn't records be removed from the store based on which IDs remain in the incoming response?
// unload anything older than our current sync date/time
store.peekAll(repo.getModelName()).forEach(function(item) {
  const date = get(item, 'SyncTime');
  if (typeof date !== 'undefined' && date != meta.date) {
    // stale record: remove it from the ember-data store
    item.unloadRecord();
  }
});
The comment on line 19 suggests an inequality here. I'm assuming the `!=` is functionally equivalent to a `<`, since there will only ever be two dates in the store (the old one and the new one)?
Hey @pearkes! Thanks for looking at this. I think we briefly spoke about it, but I'm going to respond to @DingoEatingFuzz here, so I'll drop some info in here also.
The UI has a dom-recycling scroller, so only 10-20 table rows' worth of DOM elements are in the document at any one time (depending on the size of your browser window); this eliminates the classic web app bottleneck of mutating lots of DOM elements at once. Briefly: if we are showing rows 20,000-20,020 and rows 30,100-30,120 change, we don't mutate the DOM at all. Even if all 40,000 rows change, since we don't use SyncTime in the templates at all, the DOM should never change when we update this value anyway (depending on the templating engine). Do we mutate a property on potentially 40,000 rows of 'data'? Yes we do, but I don't think that will cause a large issue right now. So if by 'state of UI' you mean 'what you can see', then no, it shouldn't alter that.
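The dom-recycling idea above can be sketched with a toy window calculation (illustrative only; this is not the actual scroller component, and the function name and row/viewport sizes are assumptions): only the rows intersecting the viewport get real DOM elements, so data changes outside that range never touch the DOM.

```javascript
// Toy version of a dom-recycling window calculation: given the scroll
// position, work out which slice of the full dataset needs DOM elements.
function visibleRange(scrollTop, rowHeight, viewportHeight, totalRows) {
  // Index of the first row that intersects the top of the viewport.
  const start = Math.floor(scrollTop / rowHeight);
  // How many rows fit in the viewport at once.
  const rows = Math.ceil(viewportHeight / rowHeight);
  // Clamp to the end of the dataset.
  const end = Math.min(totalRows, start + rows);
  return [start, end];
}
```

With 40px rows and an 800px viewport scrolled to 800,000px in a 40,000-row dataset, only rows 20,000-20,020 are rendered, matching the example in the comment above.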
If we remove 10 records and add 10 new records in one response, the length of the response stays the same, so if I've understood you correctly, that would rule this approach out? @DingoEatingFuzz thanks for giving this the once-over!
From what I remember, you took a different approach in Nomad and tackled it with the multidimensional looping thing I describe above (I think you may be misremembering that I did this here also), i.e. loop through all your past records for every new record to figure out the difference between the two. I originally was going to do similar, but guessed the approach I have here would likely be faster than potentially 9-10 zeros' worth of looping/conditionals. Since we are doing this on the main thread currently, I'd much prefer a loop of say 40,000 in size over 1,600,000,000, which would potentially block the main thread. Generally I'm aiming for higher datasets here, so what works more than fine for the Nomad UI can potentially be problematic in the Consul UI. Once we finish the work on #5637 we'll have the option of doing diffs like this in a WebWorker off the main thread (I've spoken about this elsewhere also), and all this can potentially be removed. I'd love to get this in soon, so if you have any other questions or suggestions before an approval, please let me know!
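The SyncTime approach discussed here can be sketched roughly like this (a simplified, illustrative model against a plain `Map`, not Consul's actual repository/store code): every record in the latest response is stamped with the current sync time, then a single O(n) pass over the store drops anything carrying an older stamp.

```javascript
// Stamp every incoming record with the current sync time, then cull
// anything in the store that wasn't stamped this cycle.
function stampAndCull(store, incoming, syncTime) {
  // Stamp and upsert every incoming record (one pass over the response).
  incoming.forEach(record => {
    record.SyncTime = syncTime;
    store.set(record.ID, record);
  });
  // Single pass over the store: a stale SyncTime means the record was
  // absent from the latest response, so it is removed.
  for (const [id, record] of store) {
    if (record.SyncTime !== syncTime) {
      store.delete(id);
    }
  }
  return store;
}
```

The cost is proportional to the response size plus the store size, rather than their product, which is the trade-off argued for above.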
Yes, I thought Consul was already doing some form of record culling.
A typical set intersection will not have O(n^2) complexity. The implementation in Nomad is O(n log n) since the inner loop exits early. If you are willing to trade memory for CPU, you could also make an intersection as fast as O(2n): first loop over the new records to create a hash map of all new IDs, then loop over the old records and remove any whose ID is not found in the hash map (O(1) lookup). Plugging in the 40,000 example number, that would be ~184,000 operations for the first approach and 80,000 for the second approach, both significantly better than 1,600,000,000. This is all an exercise in premature optimization, though, since it assumes all operations in the loop are equal. It's possible that removing a record from the store causes some sort of state reconciliation, and that removing all old records in a single pass is in fact ultimately slower due to the penalty of an expensive operation.
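The O(2n) hash-map intersection described above might look roughly like this (an illustrative sketch with hypothetical names, using a `Set` of IDs as the hash map; it returns the surviving records rather than unloading them from a real store):

```javascript
// Two-pass intersection: O(n) to index the new IDs, O(n) to filter
// the old records with O(1) membership lookups.
function cullOldRecords(oldRecords, newRecords) {
  // First pass: collect the IDs present in the latest response.
  const newIds = new Set(newRecords.map(record => record.ID));
  // Second pass: keep only records whose ID is still present; in a
  // real store, the dropped records are what you would unload.
  return oldRecords.filter(record => newIds.has(record.ID));
}
```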
Oh! No idea. I think the main point here is that culling is needed one way or another, and preferably without causing a pause on the main thread, especially if the culling is happening every 0.5 seconds for example (although I think you are right that we are in the zone of premature optimisation). I don't mind how we do it, but in my previous tests a few months ago this multidimensional looping was one of the things that caused the pause, although maybe my implementation wasn't optimal. Moving everything to a WebWorker solved it, but I haven't had time to get the WebWorker work in in a way that makes sense yet. I'm happy to switch to using the multidimensional approach if you think that works better? Anyway, lemme know.
I think it's nicer in that it doesn't introduce a new client-side-only prop, but since in this case better == faster, I dunno. If you have the time, I'd love to see some benchmarking of the two options. Since this is already implemented, it probably makes sense to merge as-is and circle back.
👍 yup I getcha, agree
I think that's a great plan, thanks Michael! 🙌 Right now this still needs an approval; not sure if we want to add this to 1.5 or not, @pearkes?
Hey, so I had a little time to look at this. I fed 5,000 different records into the nested-loop implementation, and it gave me a 'loop count' of 25,000,000 with 5,000 changing records. I tried loading up 40,000 records but it didn't like it; I'm sure I managed to do it once and it gave me something like 160,000,??? but I can't remember exactly how many zeros there were - pretty sure it was one less than I said in my comment above, so maybe 8-9 zeros? I dunno, basically lots. Also, that count doesn't include the work performed inside each iteration. I then reset the extra code I added for the counting and did some manual 'testing'.
I know this isn't the most scientific approach, but it was quick to try out and gives some sort of impression of real usage.
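To ground the loop counts quoted above, here's a toy counter (illustrative only, not the benchmark actually used) showing why a nested-loop diff over n old and m new records performs n × m inner iterations: 5,000 old × 5,000 new gives exactly the 25,000,000 figure mentioned.

```javascript
// Count the inner iterations of a nested-loop diff without doing any
// real comparison work: the count alone demonstrates the n * m cost.
function nestedLoopCount(oldCount, newCount) {
  let count = 0;
  for (let i = 0; i < oldCount; i++) {
    for (let j = 0; j < newCount; j++) {
      count++; // one visit per (old record, new record) pair
    }
  }
  return count;
}
```

At 40,000 × 40,000 the same arithmetic gives the 1,600,000,000 figure from earlier in the thread, which is where the main-thread pause concern comes from.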
For me, neither of the first two options above is really that satisfactory. A few months ago, the only way I found to drastically improve on the 4-5 seconds for Consul was to move this work off the main thread into a WebWorker.
Are these numbers realistic? 40,000, maybe not (although we've had reports of this many for Consul); 5,000 I would say is reasonable, in Consul at least. Is it reasonable that all of them are changing every single time? Maybe not, but if I can squeeze a bit more responsiveness out of it for a better user experience then that's probably the best thing to do. Anyway, thanks for the approval, will merge down. John
Currently we are barely using the ember-data store/cache, but it will still cache records in the store even though technically we aren't using it. This adds a SyncTime to every record that uses blocking queries so we can delete older records from the ember-data cache to prevent them building up.
Adds a timestamp method that we can access from within tests so we can test that the SyncTime is being set. There is probably a better way to do this, but this is also probably the simplest approach; we are also likely to revisit this at a later date.
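The testable-timestamp idea described above might be shaped roughly like this (a hedged sketch; the `timestamp` and `normalize` names are illustrative, not necessarily Consul's actual serializer API): putting the clock behind a method gives tests a seam to stub, so they can assert on the resulting SyncTime instead of fighting `Date.now()`.

```javascript
// A serializer-like object that stamps records with a SyncTime taken
// from an overridable method rather than an inline Date.now() call.
const serializer = {
  // Overridable seam: tests can replace this with a fixed value.
  timestamp() {
    return Date.now();
  },
  // Attach a SyncTime to each record as it is normalized.
  normalize(record) {
    return { ...record, SyncTime: this.timestamp() };
  },
};
```

A test would then do `serializer.timestamp = () => 12345;` and assert that normalized records carry `SyncTime: 12345`.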
(force-pushed from 4ca64ea to 8a1f849)
ui: Reconcile ember-data store when records are deleted via blocking (#5745)

* ui: Reconcile ember-data store when records are deleted via blocking

Currently we are barely using the ember-data store/cache, but it will still cache records in the store even though technically we aren't using it. This adds a SyncTime to every record that uses blocking queries so we can delete older records from the ember-data cache to prevent them building up.

* ui: Add basic timestamp method we can access from tests, fixup tests

Adds a timestamp method that we can access from within tests so we can test that the SyncTime is being set. There is probably a better way to do this, but this is also probably the simplest approach; we are also likely to revisit this at a later date.
We decided against using the server time here and are using a client timestamp instead. We also removed keeping track of record IDs in the metadata, as using a SyncTime covers the use case that tracking those IDs was for.