Cache evaluated trigger specs #1540
Conversation
Force-pushed from a80add0 to 5d35f0d.
I've looked at this a couple of times, and while I understand your use case and want to support it, it feels like a pretty heavy implementation for a reasonably niche use case. I kind of dislike the [...]. Can we achieve similar performance using the [...]?
Yeah, it's clearly a niche use case, as you only start to encounter performance problems with hundreds or thousands of them.
Keep in mind though that it runs faster than re-evaluating the specs, so it's actually less heavy than doing the logic every time.
I'll try that again; it's what I wanted to do initially, but I encountered reference issues (I didn't dig far enough into the trigger specs system to understand what was going on exactly), hence this forEach to "clone" the specs and make a new array reference. As the spread operator isn't IE 11 compatible, I had to go for a manual loop to copy each item one by one.
The code in this PR is already caching the specs by associating an array of trigger specs to a given [...]. I'll dig more into this and hopefully come up with a way to avoid having to copy the array, though I'd expect I'd have to copy it at some other place in the code; we'll see how it goes.
Force-pushed from 5d35f0d to bfd822b.
@alexpetros following up on this: I have removed the array copies. I don't know if I imagined the problem back then, but there are no reference issues anymore when simply reusing the same trigger specs array, which simplified the code.
Yes, this looks much simpler! I added one comment that I think will help improve the readability of this, because there's a lot of reassigning going on that I think we can mitigate.
src/htmx.js (outdated):

```javascript
      shouldEvaluateTrigger = false
    }
  }
  if (shouldEvaluateTrigger) {
```
Can you pull this into a separate function called `parseAndCacheTrigger`? That will let you write lines 1250–1258 as:

```javascript
if (shouldEvaluateTrigger) {
  triggerSpecs = triggerSpecsCache[explicitTrigger] || parseAndCacheTrigger(explicitTrigger)
}
```

which is smaller and, I think, communicates the intent better than reassigning the `if` condition.
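For illustration, here is a minimal sketch of what that refactor could look like. Note that `parseTrigger` and the shape of the cached spec objects are stand-ins for htmx's internal parsing logic, not the real implementation:

```javascript
// Hypothetical sketch of the suggested helper. `parseTrigger` stands in
// for htmx's real tokenize-and-parse step, and `triggerSpecsCache` is a
// plain object keyed on the raw hx-trigger attribute value.
const triggerSpecsCache = {}

function parseTrigger(explicitTrigger) {
  // Placeholder parse: split a comma-separated trigger string into spec objects.
  return explicitTrigger.split(',').map(function (part) {
    return { trigger: part.trim() }
  })
}

function parseAndCacheTrigger(explicitTrigger) {
  const specs = parseTrigger(explicitTrigger)
  triggerSpecsCache[explicitTrigger] = specs
  return specs
}

// Call site: a cache hit returns the stored array, a miss parses and caches.
const triggerSpecs = triggerSpecsCache['click'] || parseAndCacheTrigger('click')
```

The `||` works because a cache miss yields `undefined`, which is falsy, so the parse-and-store path only runs the first time a given trigger string is seen.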
Force-pushed from 6366ad7 to c32b3d7.
Regrettably that makes the diff a little harder to follow on account of the indent change, but I do think the resulting code is much better. Okie dokie!
@Telroshan is it possible to eliminate the whitespace changes here, to make the actual change clearer, and then submit a whitespace fix in a separate PR?
Force-pushed from c32b3d7 to ec2a6e6.
Done @1cg
OK, what I don't like about this is that, AFAICT, this is an unbounded cache, so it'll eat up more and more memory over time. I don't mind a bounded cache, although I'd want to see it help a lot in a real-world situation before I agreed to it. Remember, there are two hard problems in computer science: [...]
@1cg it'll only eat up more memory if you define a different [...].
As I mentioned in the description, it helped me on Zorro with thousands of tasks! Feel free to reject it if you don't like it though, but I'll definitely be using it on my end 😆
How do you feel about bounding the size of the cache? Is there an LRU Map implementation in JavaScript? (We could do random eviction; assuming the elements are processed in order, it's likely to hit a bunch of the same specs over and over.)
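There's no built-in LRU map in JavaScript, but `Map` guarantees insertion-order iteration, which makes a small one easy to sketch. This is an illustrative implementation, not code from the PR:

```javascript
// Minimal LRU cache built on Map's insertion-order guarantee.
// On each get, the entry is re-inserted to mark it most recently used;
// on overflow, the first key (the least recently used) is evicted.
class LRUCache {
  constructor(maxSize) {
    this.maxSize = maxSize
    this.map = new Map()
  }

  get(key) {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)
    this.map.delete(key)
    this.map.set(key, value) // move to most-recently-used position
    return value
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.maxSize) {
      // Map iterates in insertion order, so the first key is the LRU entry.
      this.map.delete(this.map.keys().next().value)
    }
  }
}
```

A trigger-spec cache would use the raw `hx-trigger` string as the key and the parsed spec array as the value.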
If you're worried about the cache, then do you think the following would work? [...]
^ That seems somewhat over-engineered. What if we just put the cache behind a config option that defaults to false? The vast majority of our users will never need it, but a handful of intensive apps (zorro, contexte) will definitely be glad it exists.
Btw, logging the cache size (number of entries, plus the length of the cache converted to a JSON string) after running the whole test suite gives the following: [...]
So basically 3 KB of memory overhead when running the whole test suite.
I don't know how I feel about this, TBH; I meant this PR as a nice under-the-hood improvement. In that case, I think it'd be better to implement some cache max size and random eviction as @1cg suggested, if you're worried about the cache size over time, rather than make this a config option that users have to be aware of. I'll update the code with the former approach when I get the chance and let you know, so we can discuss it!
Btw, which maximum cache size do you have in mind @1cg, beyond which we'd start evicting older (or random) entries?
Well, my point is that the vast majority of users won't have to be aware of the caching option. Adding a config that lets you fine-tune performance at the cost of a little memory seems like a very standard way of handling that problem, and far less involved than implementing random eviction that has to work 100% correctly for every user.
What if we punt on this a bit and move [...]? The logic would be: if `htmx.config.triggerSpecCache` is set, consult the cache; the default would be null. Cache size management could be done on the side by people who want to turn it on. Then we kinda wipe our hands of it.
@Telroshan what do you think about the idea above? If we check [...], wdyt?
If you want to do a compromise solution, wouldn't it make more sense to define some kind of functional interface (like [...])?
TL;DR: Skip ahead and just read the suggestion part below.

Yeah, I'm not sure I know where I stand on this matter... I agree with @alexpetros that just exposing a property is going to be very limiting in terms of custom caching systems, as you're bound to using that object key-value pair retrieval syntax 😕 So if you wanted to define your own system and implement some cache eviction, you'd have to rely on some other mechanism to detect "when to clear the cache", or at least check its size.

On the other hand, I find it too "heavy" to have to define 2 interfaces when you (by "you" I actually mean myself and my own use case here) just want to have a cache that never clears and are fine with it. Again, the cache size after running the whole test suite is 3 KB, a very likely overestimated number, as it's the length of the encoded JSON, which is probably heavier than the actual cache's memory footprint. In my case, I really don't care about that footprint, and I'd rather have a cache that always grows and never shrinks, as it fits my use case.

So conceptually, I agree with @alexpetros that a more robust system would allow more customized setups and benefit everyone interested in this feature. On the other hand, I have no intent to clear my own cache, and I'm not fond of a too-annoying-to-configure approach when I just want a dumb "store everything and never evict" cache. So in my specific situation, as an htmx user, I'd rather have @1cg's suggestion, which is easy to set up.

Suggestion

Would you accept to take the [...]? This would allow the easy configuration of [...]. We would document this, of course. Let me know!
Actually, let me push what I have in mind so you can directly see what it looks like (easily revertible if you don't like it, of course).
I think we could do the simple case, where we keep the current code but move the cache object to [...], and use a Proxy (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) to avoid adding another layer of complexity to the system. Then people can just jam a plain JavaScript object in there if they just want unbounded caching, or implement a proxy if they want something more sophisticated. (Trying to keep the number of new concepts to a minimum.)
Oh, I didn't know about proxies; that seems like the perfect fit to keep a simple syntax within the core lib indeed!
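To illustrate the idea being discussed: a plain object works as an unbounded cache, while a `Proxy` exposing the same `cache[key]` syntax can enforce a size bound transparently. The function name, the bound, and the random-eviction policy below are illustrative choices, not the PR's actual code:

```javascript
// Sketch: wrap a plain object in a Proxy so that property writes enforce
// a maximum size. The core library only ever does cache[key] reads and
// writes, so a plain object and this proxy are interchangeable.
function boundedTriggerCache(maxSize) {
  const store = {}
  return new Proxy(store, {
    get(target, key) {
      return target[key]
    },
    set(target, key, value) {
      const keys = Object.keys(target)
      if (keys.length >= maxSize) {
        // Random eviction, as floated in the discussion above.
        delete target[keys[Math.floor(Math.random() * keys.length)]]
      }
      target[key] = value
      return true
    }
  })
}

// Users wanting unbounded caching would supply {} instead of this proxy.
const cache = boundedTriggerCache(100)
cache['click'] = [{ trigger: 'click' }]
```

This keeps the new-concept count in the core at zero: the config slot just holds "something that supports property get/set".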
Force-pushed from ff55403 to 8d86463.
Like this, @1cg? If accepted, I'll add documentation, as there's now a new configuration property (and tests to make sure the cache is used if defined).
Oh cool, I also didn't know about proxy objects. That satisfies my concerns as well.
sure
I'll add docs & tests in the next couple days!
* Fix parseAndCacheTrigger indentation as discussed in #1540 (comment)
* Trigger specs cache documentation + tests
When specifying an `hx-trigger` attribute, `getTriggerSpecs` tokenizes and parses the trigger spec string every time it's called. This isn't an issue for most cases, but it performs poorly when operating on a large set of elements.
On average, I get the following:

Without the cache: `GetTriggerSpecs: 140 ms, called 2001 times`
With the cache: `GetTriggerSpecs: 3 ms, called 2001 times`

The example code here simply lets htmx initialize 2,000 divs that all have the exact same hx-trigger specified, which in this example is `hx-trigger="click[!ctrlKey&&!shiftKey&&!altKey]"`.
This example isn't taken out of nowhere; it's a real use case I have on Zorro, where a board could easily have 2,000 tasks (or more) at once. I'm using this exact `hx-trigger` specifier for every task, to open a details tab for the clicked task (excluding shift/alt/ctrl clicks, which have selection behaviours and shouldn't trigger a request). The suggested change here helps cut down initial page load times, which contributes to user experience.
Given that there aren't that many possible hx-trigger specifications (there is technically a finite number of available events and modifiers), I think the added memory footprint of storing an extra copy is negligible, so I suggest this system of caching the evaluated triggers to skip the tokenization and parsing whenever possible.
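The approach described above amounts to memoizing the parse step, keyed on the raw attribute string. A generic sketch of that pattern, where `parseTriggerSpec` is a stand-in for htmx's real tokenizer/parser:

```javascript
// Memoize an expensive parse, keyed on the raw hx-trigger string.
// Every element sharing the same attribute value reuses one parsed array.
const specCache = {}

function parseTriggerSpec(str) {
  // Stand-in for the expensive tokenize-and-parse step.
  return str.split(',').map(function (part) {
    return { trigger: part.trim() }
  })
}

function getTriggerSpecs(str) {
  let specs = specCache[str]
  if (!specs) {
    specs = parseTriggerSpec(str)
    specCache[str] = specs
  }
  return specs
}
```

With 2,000 divs sharing one `hx-trigger` value, the parse cost is paid once and the remaining 1,999 calls are plain object lookups, which matches the 140 ms → 3 ms measurement reported above.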
This didn't break any tests when I ran them locally, but let me know if I missed something.