Add `FromLinear` and `IntoLinear` lookup table creation to build script #416
base: master

Conversation
CodSpeed Performance Report: Merging #416 will degrade performance by 19.68%.

Benchmarks breakdown
I'm not sure I understand what's causing the tests to fail at this point.
Interesting. The longer build times are worrying, but there may be optimization opportunities. I'm still having that cold I mentioned, so I'll have a proper look when I'm feeling better. A few quick things for now:
Alright, thanks for the direction. I'll fix the issue with
I changed around the features as you mentioned, and I think I was also able to fix the issue with the build script running more often than it needs to. It should now, hopefully, only rerun if any of the files in the build folder are changed or if any features are changed. I also made it so that the code for

Currently, the conditional compilation for the 16-bit lookup table methods/structs is controlled by repeated uses of

Additionally, even though the build script runs less often now, it is still quite slow and seems to drastically increase the time the PR tests take. So, as mentioned before, do you have a suggestion on how I can reconcile including the float-to-uint lookup table generation code somewhere in the codebase while not fully rerunning it each time?
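Mechanically, "only rerun if the build folder or the features change" is expressed as directives printed from `build.rs`. A minimal sketch of that pattern, where the path and feature name are illustrative and not necessarily the ones this PR uses:

```rust
// build.rs sketch: tell Cargo when the build script must rerun.
// The path and feature name below are illustrative placeholders.
fn rerun_directives() -> Vec<String> {
    vec![
        // Rerun only when files under the code-generation folder change.
        "cargo:rerun-if-changed=build".to_string(),
        // Features are exposed to build scripts as CARGO_FEATURE_* env
        // variables, so this reruns when the feature is toggled.
        "cargo:rerun-if-env-changed=CARGO_FEATURE_FAST_UINT_LUT16".to_string(),
    ]
}

fn main() {
    for directive in rerun_directives() {
        println!("{directive}");
    }
}
```

Without any `rerun-if-*` directives, Cargo falls back to rerunning the script whenever anything in the package changes, which matches the "runs more often than it needs to" symptom.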
I may be starting to realize a way to optimize the build script, but it's going to take some further research. I'll hopefully be pushing a new commit within the week that will at least somewhat improve the build time.
I went ahead and optimized the table building to use integration instead of summation, since these are all exponential or linear functions. I also removed the tests for errors and just put in the values for the number of mantissa bits used in the index, since they do not seem to change (3 for
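The PR's actual table-building code isn't reproduced in this thread, but the integration-versus-summation idea can be illustrated in isolation: for a power-law segment, the average value over an interval has a closed form via the antiderivative, so a per-entry sampling loop can be replaced by one formula. A minimal sketch under that assumption:

```rust
// Average of x^gamma over [a, b], computed two ways.
// Closed form via the antiderivative: the integral of x^g is x^(g+1)/(g+1),
// so the average is that difference divided by the interval width.
fn average_by_integration(a: f64, b: f64, gamma: f64) -> f64 {
    ((b.powf(gamma + 1.0) - a.powf(gamma + 1.0)) / (gamma + 1.0)) / (b - a)
}

// Brute-force midpoint-rule sampling, as a summation-based builder might do.
fn average_by_summation(a: f64, b: f64, gamma: f64, samples: u32) -> f64 {
    let step = (b - a) / samples as f64;
    (0..samples)
        .map(|i| (a + (i as f64 + 0.5) * step).powf(gamma))
        .sum::<f64>()
        / samples as f64
}

fn main() {
    let (a, b, gamma) = (0.25, 0.5, 2.4);
    let exact = average_by_integration(a, b, gamma);
    let approx = average_by_summation(a, b, gamma, 10_000);
    // The two agree to high precision, but the closed form is O(1) per entry.
    assert!((exact - approx).abs() < 1e-6);
    println!("integration: {exact}, summation: {approx}");
}
```

The closed form is exact (up to floating-point rounding) and removes the inner sampling loop entirely, which is where the build-time win would come from.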
Would it also be a good idea to generate the u8 to f32 tables separately? I'm not sure how much of a performance cost it is to convert from f64 to f32.
Alright, I'm feeling much better now and the workweek is over. Thanks for your patience in the meantime. I have also had some time to think about this, and the slower build times from the code generation make me think we should move away from build.rs
for this. Considering none of this really depends on anything other than cargo features, I think it would be possible to generate the code once and make sure the larger tables are opt-in.
I'll try some things with a separate codegen crate...
It's also going to take me a moment to digest what everything does here. Some of the generated code becomes a bit hard to follow. Bear with me.
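A separate codegen crate usually boils down to a small binary that renders Rust source and writes it to a committed file, so the cost is paid once instead of on every build. This is only a sketch of that general pattern; the constant name and layout are made up, not taken from the actual crate:

```rust
use std::fmt::Write;

// Render a lookup table as Rust source text. The constant name and the
// table contents are hypothetical placeholders.
fn render_table(name: &str, values: &[u8]) -> String {
    let mut out = String::new();
    writeln!(out, "pub const {}: [u8; {}] = [", name, values.len()).unwrap();
    for chunk in values.chunks(16) {
        let row: Vec<String> = chunk.iter().map(|v| v.to_string()).collect();
        writeln!(out, "    {},", row.join(", ")).unwrap();
    }
    writeln!(out, "];").unwrap();
    out
}

fn main() {
    let table: Vec<u8> = (0..=255).collect();
    let source = render_table("EXAMPLE_LUT", &table);
    // A real codegen binary would write this to a source file that gets
    // committed, so downstream builds pay no generation cost at all.
    assert!(source.starts_with("pub const EXAMPLE_LUT: [u8; 256] = ["));
    println!("{source}");
}
```

Compared to build.rs generation, the tradeoff is that the generated file must be regenerated and re-committed when the algorithm changes, but ordinary `cargo build` and `cargo test` runs stay fast.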
Would it also be a good idea to generate the u8 to f32 tables separately? I'm not sure how much of a performance cost it is to convert from f64 to f32.
You can try and see if it affects the benchmarks. Something seems to have made it a bit slower.
I'm porting over the named colors in #417. I have also modernized it and made use of
I'll start working on consolidating those into singular library functions and moving things to the codegen crate. I'll also try to document the code a bit more so it's clearer what's going on.
Thank you, and sorry for the extra work. I do think this direction will be better. Now that the cost of generating the code is paid in advance, we could change the cargo feature setup to something simpler (sorry again). I would suggest something like this:

Simple as that. This avoids inflating the test set too much and relies on the compiler's ability to disregard constants it doesn't use. The feature is more there so it's possible to keep the binary size slim.

As for documentation, anything that explains the reasoning (especially where it diverges from the original) will help future us or other people who haven't been part of the process. Like with other things, I don't want this to turn into a black box later. I would also like to have references to the original implementation(s). That's key for tracing them back to the original reasoning and having something to compare to if refactoring is required or if there are any issues. I know it's perhaps not the most fun thing to do, but it's even less fun to not have them later. So thanks in advance for taking some time to explain it. 🙏

A side note with the new codegen crate: let me know if you run into any rough edges. One thing I have noticed with
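The suggested feature snippet didn't survive this transcript, so the following is only a guess at its shape based on the surrounding description: everything is always compiled (unused constants are discarded by the compiler), and only the large tables sit behind an opt-in feature to keep binary size slim. All names here are hypothetical:

```toml
# Cargo.toml sketch; feature names are hypothetical, not the crate's real ones.
[features]
default = []
# Opt into the large generated u16 lookup tables. Everything else is
# always compiled; the compiler drops constants that are never used.
wide-luts = []
```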
Nice, thank you! This is already easier to follow. The split between generated tables and handwritten functions seems to work well too. I will go into more detail later.
I hope all is well, and sorry for the absence. I think I lost more momentum than expected when I got sick, but it's better now after some time away from programming-related things during my free time.
I have added a few comments where I would like to have some documentation. I think this will be good to go once those and the other comments have been resolved. How does that sound?
codegen/src/lut.rs (Outdated)
```rust
let (linear_scale, alpha, beta) =
    if let Some((linear_scale, linear_end)) = is_linear_as_until {
        (
            Some(linear_scale),
            (linear_scale * linear_end - 1.0) / (linear_end.powf(gamma.recip()) - 1.0),
            linear_end,
        )
    } else {
        (None, 1.0, 0.0)
    };
```
A comment that describes what this calculates and how it relates to the transfer function would be nice. The `u16` version can refer to this.
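As background on why that `alpha` expression looks the way it does (this is general transfer-function context, not the PR's own documentation): piecewise curves like sRGB pair a linear segment `linear_scale * x` below a breakpoint with a power segment `alpha * x^(1/gamma) - (alpha - 1)` above it, and `alpha` is exactly the value that makes the two segments meet at `linear_end`. Plugging in the sRGB constants recovers the familiar 1.055:

```rust
// Continuity condition for a piecewise transfer function:
// linear_scale * x must equal alpha * x^(1/gamma) - (alpha - 1)
// at x = linear_end. Solving for alpha gives the expression below,
// which matches the one in the snippet under discussion.
fn alpha_for(linear_scale: f64, linear_end: f64, gamma: f64) -> f64 {
    (linear_scale * linear_end - 1.0) / (linear_end.powf(gamma.recip()) - 1.0)
}

fn main() {
    // sRGB: 12.92 * x below 0.0031308, gamma 2.4 above.
    let alpha = alpha_for(12.92, 0.0031308, 2.4);
    // Recovers the well-known sRGB constant, approximately 1.055.
    assert!((alpha - 1.055).abs() < 1e-3);

    // Both segment formulas agree at the breakpoint, as intended.
    let x = 0.0031308f64;
    let linear = 12.92 * x;
    let power = alpha * x.powf(1.0 / 2.4) - (alpha - 1.0);
    assert!((linear - power).abs() < 1e-9);
    println!("alpha = {alpha}");
}
```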
Sorry that I've been less active. The school semester started and is kicking my butt. Thank you for the extra comments, and I'll try to address them soon.
No worries, I know how it can be. 😬 Thanks for the update, and let me know if it turns out to be too stressful to find time. Good luck with the studies and don't forget to sleep!
Hi, how's it going? I just wanted to check in and also let you know that there are some fixes in

How does it look regarding getting a moment for this PR? Would it help if we scope it down a bit? The most important change from my perspective is to have the rationale for the algorithm changes in your words. The rest are things I can take care of.
Yeah, scoping it down would help. I'm also trying to type up a short document describing how I arrived at the formulae that I did, since it's a bit too in-depth to be written in comments in the code. I'll still include a basic explanation of what's going on in the documentation, though. Does that sound alright?
That sounds great, thank you! I don't want you to feel like you are stuck with this, so the basic explanation could be good enough on its own if you want to keep it short. The point is to answer why this change was made (performance?) and what makes it equivalent to the original (referring to or showing an equation, algorithm, etc.). That usually works as a reference for refactoring and bug fixing. Details for extra clarity are still appreciated, but don't sweat it.

When you feel like this is done, I would be grateful if you could also squash the commits into a single commit, like before. That should be it, unless you want to address any of the other comments.
Hello again! I hope things are going well. I just want to check in and see if there's a chance that you will have time to at least squash and rebase this change anytime soon. Skip the documentation if it's too much. That can always be improved later if you happen to have time. I would love to have it in the next release, which I'll put together sometime in the beginning of next year.

I may look into options to merge this anyway, if you don't have time for it before then, while of course preserving its connection to you for proper credit. Only to move it forward. No hard feelings involved. That's not happening during the coming few weeks, though.
Okay, I did a lot to hopefully better organize the code and make it more readable/understandable. I still hope to make a full write-up at some point in the future, but it's pretty involved. I didn't quite understand what you meant regarding hand-writing certain functions, so I didn't change anything on that front. I also added the

Let me know if you have any other comments. Once it looks good, I can squash the commits into one again.
Thank you, this is great! I should upgrade the benchmark action, like you noticed... I'll let you know when it should work again.
I'm absolutely satisfied with the current comments, but feel free to add to them later if you want to and have the time. This is voluntary work, after all, and things like your studies are more important.
It's fine, I just think they would be better to have in the "regular" source files, since they don't contain any generated values (other than the constant names), and only have the table constants in the generated code. Just to keep the types and their implementations together when possible. It's the kind of nitpick I can take care of later, so feel free to skip it.
We can hold off on them for now, since it's not a regression if they are a bit slower. I would like to check how they affect the package size before deciding. I realized that large tables may eventually become an issue with the max package size.

The benchmark should work now, after rebasing to bring in the latest test config. 🙂
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #416      +/-   ##
==========================================
+ Coverage   82.63%   83.11%   +0.47%
==========================================
  Files         131      133       +2
  Lines       20485    20622     +137
  Branches    20485    20622     +137
==========================================
+ Hits        16927    17139     +212
+ Misses       3378     3343      -35
+ Partials      180      140      -40
```
This PR adds the creation of float ↔ uint conversion lookup tables for `FromLinear` and `IntoLinear` to the crate's build script. As a result, this crate no longer has a dependency on `fast-srgb8` (I have confirmed that the lookup table used by `fast-srgb8` is identical to the one generated by the build script).

Along with this, I have added the features: `float_lut` (for `u8` to float conversions), `float_lut16` (for `u8` to float and `u16` to float conversions), `fast_uint_lut` (for fast `f32` to `u8` conversions), and `fast_uint_lut16` (for fast `f32` to `u8` and `f32` to `u16` conversions). Of these, I have added `float_lut` and `fast_uint_lut16` as the default features. `float_lut` must be default to not cause a breaking change for `Srgb` and `RecOetf`, whose lookup tables were replaced by the build script. I included `fast_uint_lut16` as a default feature since its largest generated table contains only 1152 `u64`s, although I would understand wanting to replace it with `fast_uint_lut` as the default, since that only generates tables of fewer than 200 `u32`s.

Building the crate seems to take considerably longer now due to the new code. I added some statements to the main.rs file in the build script that might prevent the crate being built unnecessarily often (although I doubt it, since every time I run a test it seems to rerun the build script even though I haven't changed anything). If there are improvements to be made, please let me know and I'll implement them.

Also, does removing a dependency qualify as a breaking change? If so, I can add back `fast-srgb8` so that it can be removed at the next major update.