Introduce Shared General Decoder Options plus Specialization #2180
Conversation
I must have broken something with the last commit. I promise I had all tests passing! 😝
Phenomenal, will try to review this week!
Just scrolled through the changes, some nits.
Didn't read the xml-comments and tests though.
Co-authored-by: Günther Foidl <gue@korporal.at>
To remove the cancellation token from the public API we need to remove it from the decoder API itself. This gives us a ton more flexibility. We can make the default Decode methods protected abstract in the base class and only ever expose the specialized methods in the public API, funneling users always through them.
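A rough sketch of the shape being described here; the type and member names below are assumptions for illustration, not the exact code in this PR:

```csharp
using System.IO;
using System.Threading;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

// Sketch only: names and signatures are assumptions.
public abstract class ImageDecoder<T> where T : ISpecializedDecoderOptions
{
    // The general decode path becomes protected, so it is no longer part of the
    // public surface and the cancellation token stays an implementation detail.
    protected abstract Image<TPixel> Decode<TPixel>(
        DecoderOptions options, Stream stream, CancellationToken cancellationToken)
        where TPixel : unmanaged, IPixel<TPixel>;

    // Users are funneled through the specialized, options-based entry point.
    // GeneralOptions is an assumed member of ISpecializedDecoderOptions.
    public Image<TPixel> Decode<TPixel>(T options, Stream stream)
        where TPixel : unmanaged, IPixel<TPixel>
        => this.Decode<TPixel>(options.GeneralOptions, stream, CancellationToken.None);
}
```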
I'm not suggesting removing it from the Decoder API. That's the best way for us to implement the cancellation, so it makes total sense to pass it to decoder implementations. The concern I raised was simply:
@antonfirsov I've tried to follow your suggestions as closely as I can so please take another look. The API is a lot more sane now IMO. I still need to add tests though.
Cool! Will do another round of review on Monday.
Combined,

/// <summary>
/// IDCT-only to nearest block scale.
I suppose it's worth mentioning that the quality of this mode is close to 'Box' resampling/scaling.
ResizeOptions resizeOptions = new()
{
    Size = options.TargetSize.Value,
    Sampler = KnownResamplers.Box,
I believe we should make the resampler type configurable to mimic MagicScaler functionality. What do you think?
I guess so, could easily add it to the general options. My original thinking though was that if they wanted more control they could use the processor.
I've added it.
My original thinking though was that if they wanted more control they could use the processor.

Totally missed that possibility, need more sleep.
But I think an all-in-one API may still be superior to the two-step API.
Would like to hear Anton on this subject.
The target-size decoder is a memory-optimized version of the two-step decode-resize workflow. In an ideal world I would feature-match them, so we don't force the user to choose between perf and flexibility. However I understand that it also makes sense to keep DecoderOptions as simple as possible. I think adding Resampler makes sense; we'll see if users need more.
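For context, a quick sketch of the two workflows being compared; the resize-on-decode call assumes the DecoderOptions/TargetSize shape discussed in this PR:

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// One step: the resize happens during decode, so the full-size image is never
// fully materialized in memory, but the options surface is deliberately small.
DecoderOptions options = new() { TargetSize = new Size(320, 240) };
using Image oneStep = Image.Load(options, "photo.jpg");

// Two steps: decode at full size, then resize with full ResizeOptions flexibility
// (any sampler, mode, centering, etc.) at the cost of peak memory.
using Image twoStep = Image.Load("photo.jpg");
twoStep.Mutate(x => x.Resize(new ResizeOptions
{
    Size = new Size(320, 240),
    Mode = ResizeMode.Max,
    Sampler = KnownResamplers.Lanczos3
}));
```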
Approving the general direction, since I couldn't spot anything principally wrong after a quick look, but I don't have time to do a full-scale review now.
I'm leaving for a vacation tomorrow with no computer access at all, and will be back next Monday. I would prefer to do a final "nitpicking" review early next week if possible; we need to add tests anyway before merging. Sorry that it's going this slow...
Almost forgot the most important question regarding the proposed design: what are our plans for the encoders? Do we plan to remove the disparity? If yes, can we put together a rough API proposal to make sure we are going in the right direction? If not, are we really OK with the decoder and encoder design being very different?
I'm not sure yet. I think I can make them similar, but I don't want to block the API improvements here while figuring it out.
OK... Added a bunch of extra tests. Had another look at the encoder story, and stateless-with-options looks like a sensible pattern we can adopt there also. It allows us to ensure only registered encoders are used (currently it's possible to pass unregistered ones). If I can get some eyes on this branch again please, I'd be very grateful. I'm very keen to move on.
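Purely as an illustration of the "stateless with options" idea for encoders, a hypothetical sketch mirroring the decoder pattern; none of these encoder-side names are taken from the library:

```csharp
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

// Hypothetical shape, for illustration only: a stateless encoder whose behaviour
// is driven entirely by an options argument rather than instance state.
public interface IImageEncoderSpecialized<TOptions>
{
    void Encode<TPixel>(Image<TPixel> image, Stream stream, TOptions options)
        where TPixel : unmanaged, IPixel<TPixel>;
}
```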
I'm gonna push on with this. Lots to do.
Prerequisites
Description
Based on the discussion #2091 I've refactored our decoding API to do a few things.

- Introduces a new ImageDecoder<T> class where T is ISpecializedDecoderOptions, which allows external implementations to easily implement decoders using our new resize-on-decode strategy.
- Removes the ability to pass individual decoders to Image.Load(...) as it was leading to too many user errors based on incorrect encoding assumptions.
- ReadonlySpan<T> conversion.

The general decoder options look like this.
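A rough sketch of the shared options type as described; the exact property set here is an assumption drawn from the surrounding discussion (target size for resize-on-decode, metadata skipping, frame limits):

```csharp
using SixLabors.ImageSharp;

// Sketch only: property names and defaults are assumptions.
public sealed class DecoderOptions
{
    // Replaces the old Configuration parameter on the Load APIs.
    public Configuration Configuration { get; init; } = Configuration.Default;

    // When set, the decoder resizes towards this size while decoding.
    public Size? TargetSize { get; init; }

    // Skip decoding of non-essential metadata to save time and allocations.
    public bool SkipMetadata { get; init; }

    // Upper bound on the number of frames decoded for multi-frame formats.
    public uint MaxFrames { get; init; } = uint.MaxValue;
}
```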
The options replace Configuration as the primary parameter in all our decoding APIs.
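For example (a sketch; the exact overload shapes are assumptions):

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

DecoderOptions options = new()
{
    TargetSize = new Size(150, 150)
};

// The options object sits where Configuration used to, for both the
// non-generic and pixel-typed load paths.
using Image image = Image.Load(options, "input.png");
using Image<Rgba32> typed = Image.Load<Rgba32>(options, "input.png");
```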
I've deliberately chosen not to allow passing ResizeOptions to the decoder as I want to leave a clear path between general and specialist pipelines. Default resize-on-decode uses ResizeMode.Max with the KnownResamplers.Box sampler.

An example ISpecializedDecoderOptions implementation looks like the following.
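A sketch of what such a specialized options type could look like, using the jpeg decoder as the example; member names are assumptions (the resize-mode enum is hinted at by the review hunk above):

```csharp
// Sketch only: a specialized options type wraps the general options and adds
// format-specific settings.
public sealed class JpegDecoderOptions : ISpecializedDecoderOptions
{
    // The shared, general decoder options.
    public DecoderOptions GeneralOptions { get; init; } = new();

    // Jpeg-specific resize behavior, e.g. combined scaling vs IDCT-only.
    public JpegDecoderResizeMode ResizeMode { get; init; }
}
```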
Usage is allowed only on decoder instances via individual IImageDecoderSpecialized<T> implementations.
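A usage sketch under the assumptions above; the specialized decode method name and the enum member are assumptions:

```csharp
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.PixelFormats;

JpegDecoderOptions options = new()
{
    GeneralOptions = new DecoderOptions { TargetSize = new Size(150, 150) },
    ResizeMode = JpegDecoderResizeMode.IdctOnly // hypothetical member name
};

using Stream stream = File.OpenRead("input.jpg");

// Specialized decoding is reachable only from a decoder instance, never from
// the general Image.Load(...) entry points. Method name is an assumption.
using Image<Rgba32> image = new JpegDecoder().Decode<Rgba32>(options, stream);
```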
All but the JpegDecoder utilize the same resizing strategy for specialized methods as the default loading behavior. JpegDecoder will only downscale to the nearest size that is supported by the decoder, allowing users to apply their own resize behavior in advanced scenarios.

CC @br3aker
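Continuing the jpeg sketch above: in the advanced scenario the decoder does the cheap nearest-block downscale and the user finishes with a precise resize of their own:

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// The IDCT-scaled image lands on the nearest supported scale, not the exact
// target, so apply an explicit resize with whatever sampler is preferred.
image.Mutate(x => x.Resize(new ResizeOptions
{
    Size = new Size(150, 150),
    Mode = ResizeMode.Max,
    Sampler = KnownResamplers.Lanczos3
}));
```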