JpegDecoder: post-process baseline spectral data per MCU-row #1694
Conversation
@JimBobSquarePants I found an error in the jpeg decoder reference output files; it was even marked as a bug:
I've re-saved the actual jpeg to png using Photoshop and got this difference between my file and the existing png file: the differences are actually visible to the human eye with proper zoom in Photoshop; it looks like the edges were anti-aliased for some reason. With this fix the image passes the tolerance test:

I'm not sure how I can update it; can you elaborate, please?

Awesome! It should be a case of replacing the reference image in this folder with the actual correct output from your tests, as long as you have Git LFS installed (see the readme for instructions).

Thanks!
This is looking really great so far!
Really well done! This is an excellent piece of refactoring!
I can't believe you managed to solve that longstanding issue also! 😍
I did some WPA profiling on this using my 10-core machine.
Those numbers 😘
Prerequisites
Description
Fixes #1597.
This also fixes #1692, simply because the tests were failing with the new architecture.
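To make the idea in the title concrete, here is a minimal hypothetical sketch of per-MCU-row post-processing. All names (`DecodeSpectralRow`, `ConvertRowToPixels`, buffer sizes) are stand-ins for illustration, not the PR's actual API:

```csharp
using System;

// Hypothetical sketch of per-MCU-row post-processing; all names are
// stand-ins, not the PR's actual code.
internal static class PerMcuRowSketch
{
    // Huffman-decode one MCU row of spectral coefficients into the buffer.
    private static void DecodeSpectralRow(short[] spectralRow) { /* ... */ }

    // IDCT + color conversion for the freshly decoded row.
    private static void ConvertRowToPixels(short[] spectralRow) { /* ... */ }

    private static void Main()
    {
        const int mcuRowCount = 100;       // rows of MCUs in the image (assumed)
        const int coeffsPerRow = 64 * 64;  // one row of 8x8 blocks (assumed)

        // Only a single MCU row of spectral data is alive at any time,
        // instead of spectral blocks for the whole image.
        short[] spectralRow = new short[coeffsPerRow];

        for (int row = 0; row < mcuRowCount; row++)
        {
            DecodeSpectralRow(spectralRow);  // fill with this row's coefficients
            ConvertRowToPixels(spectralRow); // post-process immediately, reuse buffer
        }

        Console.WriteLine("Peak spectral memory: one MCU row.");
    }
}
```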
Performance
No change. Baseline images should obviously be faster at the global scale due to the lower memory footprint, but in local benchmarks there's no change. And that's easy to explain: the actual decoding code wasn't changed, only memory allocation and temporary buffer management.
Note: the PR benchmark is a bit faster than the master branch (the real win is even bigger, since the new code pays a virtual call per image stride). But I suspect this is a false positive due to the really small benchmark image; most likely it comes from the smaller memory footprint of the spectral block allocations. I assume that at 4K resolutions there should be little to no difference between them, as the actual parsing/decoding code wasn't touched.
Unfortunately, I don't have access to the JetBrains memory profiler, so I used a much simpler check: a custom debug allocator which simply prints all allocations. Here are dumps from the same image with the same subsampling but different save methods, baseline and progressive:
All small `byte` allocations are removed for clarity.

Progressive:

Baseline:
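For reference, here is a minimal sketch of such a "print every allocation" helper. This is hypothetical code built on `MemoryPool<byte>` from System.Buffers, not the PR's actual debug allocator:

```csharp
using System;
using System.Buffers;

// Hypothetical sketch: a pass-through pool that prints every allocation,
// similar in spirit to the debug allocator used for the dumps above.
public sealed class LoggingMemoryPool : MemoryPool<byte>
{
    private readonly MemoryPool<byte> inner = MemoryPool<byte>.Shared;

    public override int MaxBufferSize => this.inner.MaxBufferSize;

    public override IMemoryOwner<byte> Rent(int minBufferSize = -1)
    {
        Console.WriteLine($"Rent: {minBufferSize} bytes");
        return this.inner.Rent(minBufferSize);
    }

    // Nothing to clean up; the shared pool owns the actual buffers.
    protected override void Dispose(bool disposing)
    {
    }
}
```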
P.S. If somebody has some free time and access to the JetBrains memory profiler, a proper profile would be super welcome for the full picture!
Tests & Legacy code
Everything is up and running. The old pipeline is removed.
TODO
- `SpectralConverter<TPixel>`