Compare performance to h3ron-ndarray #1
Benchmark of h3ron-ndarray - using the H3 C implementation (best run out of five):
Benchmark of this crate - using the h3o implementation (best run out of five):
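A best-of-five timing like the ones above could be collected with a minimal harness along these lines; `convert_raster_to_cells` is a hypothetical stand-in for the conversion under test, not this crate's actual API:

```rust
use std::time::{Duration, Instant};

// Hypothetical placeholder for the raster-to-cells conversion being measured;
// the real entry point of the crate may have a different name and signature.
fn convert_raster_to_cells() {
    // ... conversion work goes here ...
}

fn main() {
    // Run five times and keep the fastest run, matching the methodology above.
    let best: Duration = (0..5)
        .map(|_| {
            let start = Instant::now();
            convert_raster_to_cells();
            start.elapsed()
        })
        .min()
        .expect("at least one run");
    println!("best of five: {best:?}");
}
```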
The generated output files are identical. @grim7reaper: That is a very impressive improvement. Thanks for putting all the effort into h3o.
Ha, so cool to see such a speedup in a real use case that uses a mix of h3o calls! Was the migration from h3ron smooth?
The migration was pretty smooth; it is nice that fewer operations are fallible in the API than in the H3 API itself. I could have used a
I'm glad to hear that. For the conversion from
Nice - I will use that then. It may be a bit confusing to users as - I would guess - most will expect degrees in geo-types types. Thanks.
If most users expect degrees, I can change these public
Sounds like a good solution 👍
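For context on the radians-versus-degrees point above, a minimal conversion sketch might look like the following; the helper name is hypothetical and the actual h3o accessors may differ between versions, but the underlying operation is just `f64::to_degrees` on each coordinate:

```rust
use geo_types::Coord;

// Hypothetical helper: convert a coordinate expressed in radians (as parts of
// the h3o-facing code handle internally) into the degrees that most users of
// geo-types will expect. Only standard f64 conversions are used here.
fn coord_radians_to_degrees(c: Coord) -> Coord {
    Coord {
        x: c.x.to_degrees(), // longitude
        y: c.y.to_degrees(), // latitude
    }
}

fn main() {
    // Example coordinate in radians (geo-types uses x = lng, y = lat).
    let radians = Coord { x: 0.153_f64, y: 0.907_f64 };
    println!("{:?}", coord_radians_to_degrees(radians));
}
```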
After updating h3o v0.3.3 -> v0.3.5:
v0.3.5 contains performance improvements of the geometry-to-cells functionality.
Good to see the benefits propagate to the upper levels 🙂 Normally, the bigger the number of cells (big geometry and/or small cells), the bigger the speedup, since the new algo is kinda
I am tiling the raster array and handling each tile separately in a thread pool - I suppose these smaller tiles somewhat limit the possible speedup here. Maybe I should revisit my implementation at some point given the performance improvements you achieved.
Yeah, there is probably a sweet spot in terms of the number of tiles (to benefit from parallelism) and the size of each tile (to have enough work per tile); with the new algo you can probably split less and go for larger tiles.
I experimented a bit with the tile size and found that increasing it leads to worse results. This has its roots in the test image containing lots of no-data values - pixels which are ignored in the conversion process. The tiling scheme discards tiles consisting only of such pixels before h3o is even applied, so in the end an increased tile size leads to more work for the h3o library. As this library was originally built for earth observation data processed by machine learning models - data that often has large amounts of no-data pixels - I think I will leave the tiling scheme as it is for now.
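As an illustration of the tiling scheme described above, a sketch using rayon for the thread pool might look like this; the tile size, the no-data sentinel, and `tile_to_cells` are hypothetical placeholders rather than this crate's actual interface:

```rust
use rayon::prelude::*;

const TILE_PIXELS: usize = 256 * 256; // hypothetical tile size
const NODATA: u32 = 0;                // hypothetical no-data sentinel

// Placeholder for the per-tile raster-to-H3 conversion.
fn tile_to_cells(tile: &[u32]) -> usize {
    tile.iter().filter(|&&px| px != NODATA).count()
}

fn main() {
    // Fake raster: a flat buffer split into fixed-size tiles.
    let mut raster = vec![0_u32; TILE_PIXELS * 16];
    raster[42] = 7; // one valid pixel so that one tile survives the filter

    let cell_count: usize = raster
        .par_chunks(TILE_PIXELS)
        // Discard tiles consisting solely of no-data pixels before any
        // H3-related work happens, then convert the remaining tiles in
        // parallel on the thread pool.
        .filter(|tile| tile.iter().any(|&px| px != NODATA))
        .map(tile_to_cells)
        .sum();

    println!("cells produced: {cell_count}");
}
```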
h3o shows quite a few performance improvements compared to libh3. As this crate is at this stage mostly a 1:1 port of h3ron-ndarray to h3o, with only a few - mostly insignificant - changes, this is a good occasion to compare the H3 C library to h3o.