Hello,
Thank you for this work. I am using this as part of openvslam, and I am seeing very slow load times when loading the openvslam-provided ORB vocab file: it takes almost a minute to load the 50 MB file on an iPhone 11.
After profiling this, it seems that pretty much the entire time is spent allocating the individual descriptors here.
If I replace these numerous small allocations with a single large allocation, then the load is instant.
What are your thoughts on keeping a big n_nodes x F::L block of descriptors in `TemplatedVocabulary` and letting each row contain the descriptor memory for a particular node?
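To make the proposal concrete, here is a minimal sketch of the single-allocation idea. This is not DBoW2's actual API: `DescriptorPool`, `row`, and the hard-coded `L = 32` (the byte length of an ORB descriptor, standing in for `F::L`) are all illustrative names for this sketch.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch (not DBoW2's real API): instead of one heap
// allocation per node descriptor, make a single n_nodes x L allocation
// up front and give each node a view into its own row.
struct DescriptorPool {
    static constexpr std::size_t L = 32;  // bytes per ORB descriptor (stand-in for F::L)
    std::vector<std::uint8_t> block;      // one contiguous allocation for all nodes

    explicit DescriptorPool(std::size_t n_nodes) : block(n_nodes * L) {}

    // Row i is the descriptor storage for node i.
    std::uint8_t* row(std::size_t i) { return block.data() + i * L; }
    const std::uint8_t* row(std::size_t i) const { return block.data() + i * L; }
};
```

During load, each descriptor read from the vocab file would then be copied into its row rather than triggering a fresh per-node allocation, which is what turns tens of thousands of small allocations into one.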