
loadFromBinaryFile very slow on mobile iOS #7

Open
adelpit opened this issue Oct 13, 2020 · 0 comments
adelpit commented Oct 13, 2020

Hello,

Thank you for this work. I am using this as part of OpenVSLAM, and I am seeing very slow load times when loading the OpenVSLAM-provided ORB vocabulary file. It takes almost 1 minute to load the 50 MB file on an iPhone 11.

After profiling this, it seems that nearly all of the time is spent allocating the individual descriptors here.

[profiler screenshot]

If I replace these numerous small allocations with a single large allocation, the load is nearly instant.

What are your thoughts on keeping a big n_nodes x F::L block of descriptors in TemplatedVocabulary and letting each row contain the descriptor memory for a particular node?
