
Inline and optimize some decoding methods to improve performance #18

Merged
merged 1 commit on Dec 26, 2015

Conversation

@phraktle (Contributor)

Inlined and optimized sizeFromCtrlByte and decodePointer. This reduces object churn on return values (see #13) and also improves JIT inlining patterns.

Net speedup on the benchmark: 10-15%.
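The "object churn on return values" comes from decode helpers that allocate a small object just to hand back two ints (e.g. a size and a buffer offset). A minimal sketch of one allocation-free alternative, packing both ints into a single long (hypothetical names for illustration; the PR itself inlines the helpers rather than introducing a class like this):

```java
// Sketch: return two ints from a decode helper without allocating a
// result object, by packing them into one long. Hypothetical names;
// not the actual code merged in this PR.
public class PackedResult {

    // Pack size into the high 32 bits and offset into the low 32 bits.
    static long pack(int size, int offset) {
        return ((long) size << 32) | (offset & 0xFFFFFFFFL);
    }

    // Unpack the size from the high 32 bits.
    static int size(long packed) {
        return (int) (packed >>> 32);
    }

    // Unpack the offset from the low 32 bits (sign is preserved).
    static int offset(long packed) {
        return (int) packed;
    }

    public static void main(String[] args) {
        long r = pack(12, 3456);
        System.out.println(size(r) + " " + offset(r)); // prints "12 3456"
    }
}
```

Because `pack` and the accessors are tiny static methods with no allocation, they fall well under HotSpot's default inlining thresholds, which is the same effect the PR achieves by inlining the helpers directly into their callers.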

@phraktle (Contributor, Author)

While the test coverage tool reports a decrease, that is only because the total number of lines was reduced. The effective coverage is unchanged, so this check should pass.

@oschwald (Member)

Thanks! I am seeing a 5-7% speedup with this on the included benchmark.

oschwald added a commit that referenced this pull request Dec 26, 2015
Inline and optimize some decoding methods to improve performance
@oschwald merged commit 8a874ee into maxmind:master on Dec 26, 2015
@phraktle (Contributor, Author)

Thanks for merging! I've submitted some benchmark improvements in #19 to make the measurements more consistent.

For this change (#18), on my 2011 MacBook Pro (2.2 GHz i7, Java 1.8.0_66 HotSpot 64-Bit Server VM, build 25.66-b17, mixed mode), I measured a baseline of ~69k ops/s, which improved to ~78k ops/s after this change (roughly a 13% speedup).
