Could we consider just dropping the batch query completely? This was asked in #9134 (comment) and #9134 (comment). AFAICT all it gives us is a fairly small speedup on some systems, while doubling the time for anyone with any broken packages (which is common: #8930) and adding complexity.
(It would be even better if we cached results on a per-package basis, as in #9360.)
As was answered in many other comments, the speedup is not necessarily small.
Fair enough. My hunch is that the trade-off is worth it, but admittedly I'd have to collect some stats to be sure. If we did do per-package caching like #9360 then I'd expect incremental calls to always be significantly faster (except in an environment with implausibly frequent cache invalidation). But as has been discussed in various linked threads, this isn't easy, and #9422 is mostly an adequate substitute.
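To make the per-package caching idea concrete, here is a minimal, purely illustrative sketch. None of these names come from the project; `PackageDB`, `describe`, and the version-string cache key are hypothetical stand-ins. The point is that each package's query result is cached under a key that changes when that package changes, so one broken or updated package invalidates only its own entry rather than forcing the whole batch query to rerun.

```python
# Hypothetical sketch of per-package result caching (names are illustrative,
# not the project's actual API). A version string stands in for whatever
# fingerprint would really invalidate an entry (hash, mtime, etc.).
from dataclasses import dataclass, field

@dataclass
class PackageDB:
    versions: dict[str, str]               # package name -> current version
    _cache: dict[tuple[str, str], str] = field(default_factory=dict)
    queries: int = 0                       # count of slow underlying queries

    def _query(self, name: str) -> str:
        # Stand-in for the expensive external per-package query.
        self.queries += 1
        return f"info for {name}-{self.versions[name]}"

    def describe(self, name: str) -> str:
        # Cache key includes the version, so a changed package misses the
        # cache automatically while unchanged packages stay cached.
        key = (name, self.versions[name])
        if key not in self._cache:
            self._cache[key] = self._query(name)
        return self._cache[key]

db = PackageDB(versions={"aeson": "2.1", "text": "2.0"})
db.describe("aeson")
db.describe("aeson")          # cache hit: no extra query
db.versions["aeson"] = "2.2"  # package changed -> new key
db.describe("aeson")          # cache miss: re-queried once
assert db.queries == 2
```

Under this scheme, an incremental call touches only the packages whose keys changed, which is why repeated runs would be fast except under implausibly frequent cache invalidation.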
Originally posted by @georgefst in #9391 (comment)