Tag Searching/Loading Needs to be Optimized #17499
Labels
0. Needs triage
enhancement
feature: tags
needs info
I randomly remembered that I was asked to put this into its own issue. Tags need to be faster before many of the open tag issues (issues in the GitHub sense) become feasible.
If people really use tags, then it's not unreasonable to expect to see this for 3 minutes, followed by a 5-minute hang when you type a letter...
That's after aggressive caching and adding indexes to the `systemtag` and `systemtag_object_mapping` tables.
I don't even think that's unreasonable to expect, really. The issue isn't with the database queries; those are almost instantaneous on my setup, but the web UI still grinds to a halt trying to deal with it.
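To be concrete about the indexes, something along these lines is what I mean (a sketch only: column names assume the default schema, the table prefix may differ per install, and some of these may already exist out of the box):

```sql
-- Illustrative indexes only; names and columns assume the default schema.

-- Speeds up "all objects with tag X" lookups.
CREATE INDEX stom_systemtagid_idx ON systemtag_object_mapping (systemtagid);

-- Speeds up filtering by object type (e.g. objecttype = 'files').
CREATE INDEX stom_objecttype_idx ON systemtag_object_mapping (objecttype, objectid);

-- Speeds up searching tags by name as you type.
CREATE INDEX systemtag_name_idx ON systemtag (name);
```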
From DataGrip:
Assuming we keep the ID of the systemtag when we select it in the list, we can cut out a join and act on `systemtag_object_mapping.systemtagid` directly.

Even faster, and that counts the time DataGrip takes to process and render the results. Faster still if I didn't include `filecache.name`, which I only added for readability and to validate the results; each item could be looked up separately, lazily, by fileid. When I limit it to batches of 50 results (the id is an ascending index, so it orders itself), it's ridiculously fast.
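Roughly what I mean, as a sketch (default column names, no table prefix, and `:placeholders` standing in for bound values; the real queries the server generates will look different):

```sql
-- With the tag's id already known from the tag list, the join against
-- systemtag can be skipped and the mapping table queried directly,
-- paged in batches of 50 ordered by the indexed id.
SELECT objectid
FROM systemtag_object_mapping
WHERE systemtagid = :tagId
ORDER BY objectid ASC
LIMIT 50;

-- filecache.name only needs to be resolved lazily, per fileid, for the
-- rows that are actually being displayed.
SELECT fileid, name
FROM filecache
WHERE fileid = :objectId;
```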
EDIT: I realize after the fact that I forgot the `objecttype = 'files' AND` clause, but adding it yielded very similar numbers, within the margin of error of other system tasks; `objecttype` is indexed too, after all.
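For completeness, the batched query from the sketch above with that clause added (same caveats):

```sql
SELECT objectid
FROM systemtag_object_mapping
WHERE objecttype = 'files'   -- indexed, so the cost barely changes
  AND systemtagid = :tagId
ORDER BY objectid ASC
LIMIT 50;
```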
This is clearly an issue with how the tags are fetched, cached, and displayed. I'm no webdev, and couldn't begin to guess which script, or where in the page code, to focus profiling on, but this narrows things down and gives some options for optimization.
This is a problem in 16.0.5 and certainly in prior versions as well. I don't know what other info might be requested.