Optimize IO's encoding #90
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master      #90    +/-   ##
==========================================
+ Coverage   85.93%   86.77%   +0.83%
==========================================
  Files          19       20       +1
  Lines         384      378       -6
  Branches       21       27       +6
==========================================
- Hits          330      328       -2
+ Misses         54       50       -4
/** Pops the next bind function from the stack, but filters out
  * `Mapping.OnError` references, because we know they won't do
s/Mapping.OnError/IOFrame.ErrorHandler/g
This looks great. Interested to see some updated benchmarks.
@alexandru this is excellent news. Btw does the new IO encoding support flatMapping over the result, i.e.
@pchlupacek the new internal encoding is optimized for flatMap-ing over both successful values and errors; however, we are not exposing it in the API. Monix's
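To make the idea concrete, here is a rough, self-contained sketch of a run-loop whose bind stack can hold frames that react to both successful values and errors, which is what makes flatMap-ing over the error channel cheap. All names (`ErrorFrame`, `run`) and details are illustrative assumptions, not the actual cats-effect internals:

```scala
// Sketch only: a trampolined IO with an error-aware bind frame.
object EncodingSketch {
  sealed trait IO[+A] {
    def flatMap[B](f: A => IO[B]): IO[B] = Bind(this, f)
  }
  final case class Pure[+A](a: A) extends IO[A]
  final case class RaiseError(e: Throwable) extends IO[Nothing]
  final case class Bind[A, +B](source: IO[A], f: A => IO[B]) extends IO[B]

  // A bind frame that, unlike a plain `A => IO[B]`, also knows how
  // to handle errors (hypothetical name).
  trait ErrorFrame[-A, +B] extends (A => IO[B]) {
    def recover(e: Throwable): IO[B]
  }

  def run[A](io: IO[A]): Either[Throwable, A] = {
    var current: IO[Any] = io
    var stack: List[Any => IO[Any]] = Nil
    while (true) {
      current match {
        case Bind(source, f) =>
          stack = f.asInstanceOf[Any => IO[Any]] :: stack
          current = source
        case Pure(a) =>
          stack match {
            case f :: rest => stack = rest; current = f(a)
            case Nil       => return Right(a.asInstanceOf[A])
          }
        case RaiseError(e) =>
          // Pop plain bind functions; only error-aware frames can react.
          stack = stack.dropWhile(f => !f.isInstanceOf[ErrorFrame[_, _]])
          stack match {
            case f :: rest =>
              stack = rest
              current = f.asInstanceOf[ErrorFrame[Any, Any]].recover(e)
            case Nil => return Left(e)
          }
      }
    }
    sys.error("unreachable")
  }
}
```

For example, `EncodingSketch.run(EncodingSketch.Pure(1).flatMap(x => EncodingSketch.Pure(x + 1)))` yields `Right(2)`, while a `RaiseError` with no `ErrorFrame` on the stack falls through to `Left(e)`.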
@alexandru yeah, it would be excellent to have that flatMap-ish design if possible, with the same performance as normal flatMap.
@pchlupacek note that these changes make A. See the
@alexandru This looks very impressive at first glance. Thanks for taking the time to do this! I won't have a chance to review it potentially for a few days. @mpilquist has already given his stamp of approval, so if you guys want to move forward, feel free to do so. :-) Otherwise, I'll get to it asap. |
👍 This looks great, thanks @alexandru! |
@alexandru Could you take a look at the compilation failure on the 2.10 build? Once that's fixed, I think we can go ahead and merge this given Daniel's response from yesterday. |
Amazing work! |
@mpilquist thanks for the review and the merge. @djspiewak when you have the time, please publish a hash version for testing purposes. This might not be the last PR for performance optimizations. I'm tormenting myself with some profiling tools from Intel with a UI made in 1994 and I'm doing experiments, but as I said, it would be better to introduce further optimizations piecemeal, with some proof that they work.
Fixes #89 and improves performance.
This implementation is inspired by the internal encoding of the Monix Task — it's not an exact port, because Monix's Task is cancellable and has to do more work because of that. Also, some minor optimizations were left out for now, pending further benchmarks (e.g. having a separate state for `IO.apply`), but the gist is here.

Benchmarking (last update: 2017-11-29)
The PR includes a JMH setup with `benchmarks/vPrev` and `benchmarks/vNext` as sub-projects for measuring the performance impact of the changes, compared with whatever previous version we want.

In order to run the benchmarks, one needs to execute the script:
The results will be dumped in `benchmarks/results`.

ShallowBindBenchmark
This measures a plain tail-recursive `flatMap` loop.

Previous version:

PR changes:
Over twice the throughput.
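The shape of loop being measured looks roughly like the following sketch (illustrative only — the actual JMH benchmark bodies live in the `benchmarks` sub-projects; `sum` is a made-up name, and the API calls assume cats-effect's `IO`):

```scala
import cats.effect.IO

object ShallowBindSketch {
  // Tail-recursive flatMap loop: each bind is consumed before the
  // next one is created, so the bind stack stays shallow.
  def sum(n: Int): IO[Long] = {
    def loop(i: Int, acc: Long): IO[Long] =
      if (i < n) IO.pure(i.toLong).flatMap(x => loop(i + 1, acc + x))
      else IO.pure(acc)
    loop(0, 0L)
  }

  def main(args: Array[String]): Unit =
    println(sum(10000).unsafeRunSync()) // sum of 0 until 10000
}
```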
DeepBindBenchmark
This one measures a non-tail-recursive `flatMap` loop (like in issue #89).

Previous version:

After PR changes:
The differences are dramatic due to memory usage.
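The non-tail-recursive shape, in contrast, builds up nested binds before any value is produced, which is why memory behavior dominates the result. A hedged sketch of that shape (hypothetical `sum` function, assuming cats-effect's `IO`; not the exact benchmark from the PR):

```scala
import cats.effect.IO

object DeepBindSketch {
  // The recursive call happens first and flatMap wraps its result,
  // so n binds accumulate before the innermost IO.pure is reached.
  def sum(n: Int): IO[Long] =
    if (n == 0) IO.pure(0L)
    else sum(n - 1).flatMap(prev => IO.pure(prev + n))
}
```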
AttemptBenchmark
This one measures the performance of `attempt`, both for the happy path and for handling errors:

Previous version:
After the PR changes:
The differences are dramatic when errors get handled.
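For reference, the two code paths being exercised look like this (a minimal sketch using cats-effect's public `IO.pure`, `IO.raiseError` and `attempt`; the benchmark itself is in the PR's sub-projects):

```scala
import cats.effect.IO

object AttemptSketch {
  // Happy path: attempt wraps the successful result in Right.
  val happy: IO[Either[Throwable, Int]] =
    IO.pure(1).attempt

  // Error path: attempt captures the raised error in Left
  // instead of letting it propagate.
  val failed: IO[Either[Throwable, Int]] =
    IO.raiseError[Int](new RuntimeException("boom")).attempt
}
```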
HandleErrorBenchmark
This one measures the performance of `handleErrorWith`, which is optimized in the new version.

Previous version:
After PR changes:
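The operation under test is essentially flatMap for the error channel, a minimal usage sketch (assuming cats-effect's `IO` with `handleErrorWith` in scope; how exactly the optimization pushes a single error-aware frame instead of going through `attempt` is an internal detail of the PR):

```scala
import cats.effect.IO

object HandleErrorSketch {
  // Recover from a raised error by switching to a fallback IO.
  val recovered: IO[Int] =
    IO.raiseError[Int](new RuntimeException("boom"))
      .handleErrorWith(_ => IO.pure(42))
}
```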
Moving Forward
More optimizations are possible, but at this point this provides a good baseline — other micro-optimizations can come in separate PRs, along with the proof that they work.