Simplify Pixel trait #1099
Changes to the `Pixel` trait:

```diff
@@ -86,24 +86,12 @@ pub trait Pixel: Copy + Clone {
     /// that the slice is long enough to prevent panics if the pixel is used later on.
     fn from_slice_mut(slice: &mut [Self::Subpixel]) -> &mut Self;
 
-    /// Convert this pixel to RGB
-    fn to_rgb(&self) -> Rgb<Self::Subpixel>;
-
     /// Convert this pixel to RGB with an alpha channel
     fn to_rgba(&self) -> Rgba<Self::Subpixel>;
 
-    /// Convert this pixel to luma
-    fn to_luma(&self) -> Luma<Self::Subpixel>;
-
     /// Convert this pixel to luma with an alpha channel
     fn to_luma_alpha(&self) -> LumaA<Self::Subpixel>;
 
-    /// Convert this pixel to BGR
-    fn to_bgr(&self) -> Bgr<Self::Subpixel>;
-
-    /// Convert this pixel to BGR with an alpha channel
-    fn to_bgra(&self) -> Bgra<Self::Subpixel>;
-
     /// Apply the function ```f``` to each channel of this pixel.
     fn map<F>(&self, f: F) -> Self
     where
```

An inline review comment on `fn to_luma_alpha`: Was this missed during removal of methods, or is keeping this method intentional?

```diff
@@ -159,9 +147,6 @@ pub trait Pixel: Copy + Clone {
     where
         F: FnMut(Self::Subpixel, Self::Subpixel) -> Self::Subpixel;
 
-    /// Invert this pixel
-    fn invert(&mut self);
-
     /// Blend the color of a given pixel into ourself, taking into account alpha channels
     fn blend(&mut self, other: &Self);
 }
```
Changes to `DynamicImage`:

```diff
@@ -457,7 +457,12 @@ impl DynamicImage {
     /// Invert the colors of this image.
     /// This method operates inplace.
     pub fn invert(&mut self) {
-        dynamic_map!(*self, ref mut p -> imageops::invert(p))
+        use traits::Primitive;
+        fn invert_pixel<T: Primitive>(p: &mut impl Pixel<Subpixel = T>) {
+            p.apply_with_alpha(|c| T::max_value() - c, |a| a);
+        }
+
+        dynamic_map!(*self, ref mut img -> img.pixels_mut().for_each(invert_pixel))
     }
 
     /// Resize this image using the specified filter algorithm.
```
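For orientation, here is a tiny standalone example (editorial illustration, not part of the diff) of the `apply_with_alpha` call that the new helper relies on, applied to a single RGBA pixel:

```rust
use image::{Pixel, Rgba};

fn main() {
    let mut px = Rgba([10u8, 100, 200, 255]);
    // Invert the color channels; the second closure leaves alpha untouched.
    px.apply_with_alpha(|c| 255 - c, |a| a);
    assert_eq!(px, Rgba([245, 155, 55, 255]));
}
```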
```diff
@@ -747,32 +752,54 @@ impl GenericImage for DynamicImage {
     type InnerImage = DynamicImage;
 
     fn put_pixel(&mut self, x: u32, y: u32, pixel: color::Rgba<u8>) {
+        let color::Rgba([r, g, b, a]) = pixel;
+        let (r16, g16, b16, a16) = (r as u16 * 257, b as u16 * 257, g as u16 * 257, a as u16 * 257);
```
Review comments at this point in the diff:

Could this be […]

Now that I think about it, all of these could be replaced with

```rust
DynamicImage::ImageBgra8(ref mut p) => p.put_pixel(x, y, pixel.into_color()),
```

For the 16bpc types you just need an extra

```rust
let pixel16: color::Rgba<u16> = pixel.into_color();
...
DynamicImage::ImageRgb16(ref mut p) => p.put_pixel(x, y, pixel16.into_color()),
```

This is consistent with the two-level API described in the other comment. To note, this means that

```rust
fn put_pixel<P: Pixel>(&mut self, x: u32, y: u32, pixel: P)
where P: IntoColor<color::Rgba<u8>> + IntoColor<color::Rgba<u16>> {
    let pixel8: color::Rgba<u8> = pixel.into_color();
    let pixel16: color::Rgba<u16> = pixel.into_color();
    ...
}
```
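An aside on the `* 257` factor in the new code (an editorial illustration, not from the PR discussion): multiplying an 8-bit channel by 257 is the exact 8-to-16-bit widening; it replicates the byte into the high and low halves, so 0 maps to 0 and 255 maps to 65535. (As written, the diff assigns `b as u16 * 257` to `g16` and `g as u16 * 257` to `b16`, which looks like an accidental channel swap; presumably `(r, g, b, a)` order was intended.)

```rust
// Why the `* 257` factor: it widens an 8-bit channel to 16 bits so that the
// endpoints map exactly (0 -> 0, 255 -> 65535), equivalent to (c << 8) | c.
fn widen(c: u8) -> u16 {
    c as u16 * 257
}

fn main() {
    assert_eq!(widen(0), 0);
    assert_eq!(widen(255), u16::MAX); // 255 * 257 = 65535
    assert_eq!(widen(0x12), 0x1212);  // same as replicating the byte
    for c in 0..=255u8 {
        assert_eq!(widen(c), (c as u16) << 8 | c as u16);
    }
}
```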
The hunk continues:

```diff
+
         match *self {
-            DynamicImage::ImageLuma8(ref mut p) => p.put_pixel(x, y, pixel.to_luma()),
+            DynamicImage::ImageLuma8(ref mut p) => {
+                let color::LumaA([l, _]) = pixel.to_luma_alpha();
+                p.put_pixel(x, y, color::Luma([l]))
+            }
             DynamicImage::ImageLumaA8(ref mut p) => p.put_pixel(x, y, pixel.to_luma_alpha()),
-            DynamicImage::ImageRgb8(ref mut p) => p.put_pixel(x, y, pixel.to_rgb()),
+            DynamicImage::ImageRgb8(ref mut p) => p.put_pixel(x, y, color::Rgb([r, g, b])),
             DynamicImage::ImageRgba8(ref mut p) => p.put_pixel(x, y, pixel),
-            DynamicImage::ImageBgr8(ref mut p) => p.put_pixel(x, y, pixel.to_bgr()),
-            DynamicImage::ImageBgra8(ref mut p) => p.put_pixel(x, y, pixel.to_bgra()),
-            DynamicImage::ImageLuma16(ref mut p) => p.put_pixel(x, y, pixel.to_luma().into_color()),
-            DynamicImage::ImageLumaA16(ref mut p) => p.put_pixel(x, y, pixel.to_luma_alpha().into_color()),
-            DynamicImage::ImageRgb16(ref mut p) => p.put_pixel(x, y, pixel.to_rgb().into_color()),
-            DynamicImage::ImageRgba16(ref mut p) => p.put_pixel(x, y, pixel.into_color()),
+            DynamicImage::ImageBgr8(ref mut p) => p.put_pixel(x, y, color::Bgr([b, g, r])),
+            DynamicImage::ImageBgra8(ref mut p) => p.put_pixel(x, y, color::Bgra([b, g, r, a])),
+            DynamicImage::ImageLuma16(ref mut p) => {
+                let color::LumaA([l, _]) = color::Rgb([r16, g16, b16]).to_luma_alpha();
+                p.put_pixel(x, y, color::Luma([l]))
+            }
+            DynamicImage::ImageLumaA16(ref mut p) =>
+                p.put_pixel(x, y, color::Rgba([r16, g16, b16, a16]).to_luma_alpha()),
+            DynamicImage::ImageRgb16(ref mut p) =>
+                p.put_pixel(x, y, color::Rgb([r16, g16, b16])),
+            DynamicImage::ImageRgba16(ref mut p) =>
+                p.put_pixel(x, y, color::Rgba([r16, g16, b16, a16])),
         }
     }
-
     /// DEPRECATED: Use iterator `pixels_mut` to blend the pixels directly.
     fn blend_pixel(&mut self, x: u32, y: u32, pixel: color::Rgba<u8>) {
+        let color::Rgba([r, g, b, a]) = pixel;
+        let (r16, g16, b16, a16) = (r as u16 * 257, b as u16 * 257, g as u16 * 257, a as u16 * 257);
         match *self {
-            DynamicImage::ImageLuma8(ref mut p) => p.blend_pixel(x, y, pixel.to_luma()),
+            DynamicImage::ImageLuma8(ref mut p) => {
+                let color::LumaA([l, _]) = pixel.to_luma_alpha();
+                p.blend_pixel(x, y, color::Luma([l]))
+            }
             DynamicImage::ImageLumaA8(ref mut p) => p.blend_pixel(x, y, pixel.to_luma_alpha()),
-            DynamicImage::ImageRgb8(ref mut p) => p.blend_pixel(x, y, pixel.to_rgb()),
+            DynamicImage::ImageRgb8(ref mut p) => p.blend_pixel(x, y, color::Rgb([r, g, b])),
             DynamicImage::ImageRgba8(ref mut p) => p.blend_pixel(x, y, pixel),
-            DynamicImage::ImageBgr8(ref mut p) => p.blend_pixel(x, y, pixel.to_bgr()),
-            DynamicImage::ImageBgra8(ref mut p) => p.blend_pixel(x, y, pixel.to_bgra()),
-            DynamicImage::ImageLuma16(ref mut p) => p.blend_pixel(x, y, pixel.to_luma().into_color()),
-            DynamicImage::ImageLumaA16(ref mut p) => p.blend_pixel(x, y, pixel.to_luma_alpha().into_color()),
-            DynamicImage::ImageRgb16(ref mut p) => p.blend_pixel(x, y, pixel.to_rgb().into_color()),
-            DynamicImage::ImageRgba16(ref mut p) => p.blend_pixel(x, y, pixel.into_color()),
+            DynamicImage::ImageBgr8(ref mut p) => p.blend_pixel(x, y, color::Bgr([b, g, r])),
+            DynamicImage::ImageBgra8(ref mut p) => p.blend_pixel(x, y, color::Bgra([b, g, r, a])),
+            DynamicImage::ImageRgb16(ref mut p) =>
+                p.blend_pixel(x, y, color::Rgb([r16, g16, b16])),
+            DynamicImage::ImageRgba16(ref mut p) =>
+                p.blend_pixel(x, y, color::Rgba([r16, g16, b16, a16])),
+            DynamicImage::ImageLuma16(ref mut p) => {
+                let color::LumaA([l, _]) = color::Rgb([r16, g16, b16]).to_luma_alpha();
+                p.blend_pixel(x, y, color::Luma([l]))
+            }
+            DynamicImage::ImageLumaA16(ref mut p) =>
+                p.blend_pixel(x, y, color::Rgba([r16, g16, b16, a16]).to_luma_alpha()),
         }
     }
```
As long as `to_rgba` is required on `Pixel`, it seems to still hold that its color space must be interpreted as sRGB. But then the other methods, except `invert`, could also be default implemented utilizing this one. Why remove them outright instead?
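To make the "default implemented utilizing this one" point concrete, here is an illustrative sketch (not code from the PR) of two of the removed conversions recovered from `to_rgba` alone, written as free functions; as default trait methods the bodies would be identical. It assumes an `image` version whose color types are tuple structs and which still exports `Bgr`, as on this PR's branch:

```rust
// Illustration only: the removed conversions rebuilt generically on top of
// `to_rgba`; `to_bgr` is just a channel shuffle of the RGBA data.
use image::{Bgr, Pixel, Rgb, Rgba};

fn to_rgb<P: Pixel>(p: &P) -> Rgb<P::Subpixel> {
    let Rgba([r, g, b, _]) = p.to_rgba();
    Rgb([r, g, b])
}

fn to_bgr<P: Pixel>(p: &P) -> Bgr<P::Subpixel> {
    let Rgba([r, g, b, _]) = p.to_rgba();
    Bgr([b, g, r])
}

fn main() {
    let px = Rgba([1u8, 2, 3, 4]);
    assert_eq!(to_rgb(&px), Rgb([1, 2, 3]));
    assert_eq!(to_bgr(&px), Bgr([3, 2, 1]));
}
```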
I was trying to remove as much as I could without breaking too much other code in the crate. Perhaps it would be better to step back and consider what generic functionality we can provide on pixels without baking in assumptions about color spaces and so forth. For instance, all of the map/apply variations are rather questionable without knowing the color space.
That was contrary to my thoughts: `map` and `apply` were the most reasonable in my eye. They do not themselves enforce any particular interpretation. Sure, the caller cannot currently find out the color space through `image`, but if it is known then the function can correctly interpret the channel value. And no color type has to come up with an interpretation of its own; just copy values into the function argument and back.
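A small standalone illustration of that argument (editorial example, not from the thread): `map` only hands each channel value to the caller's closure, so any sRGB-specific meaning lives in the closure, not in the trait:

```rust
use image::{Pixel, Rgba};

fn main() {
    // `map` runs the same closure over every channel (including alpha here);
    // the trait itself attaches no color-space meaning to the values.
    let px = Rgba([100u8, 150, 200, 255]);
    let halved = px.map(|c| c / 2);
    assert_eq!(halved, Rgba([50, 75, 100, 127]));
}
```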
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Don't the channels have to be either L or RGB (plus an optional alpha) for that to work? I don't think it could work for things like HSV or xyY since different channels have different meanings but the
map
andapply
functions would run the same closure on each?There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I guess one view is that
Pixel
should just be a wrapper around a&[Subpixel]
plus an associatedColorType
andColorSpace
. It would then fall on the user to properly handle each possible pair they cared about. There would be convenience functions to operate on channels but they'd have no higher level understanding of what they meant.The alternative is to discourage users from interacting with pixels that way and instead provide higher level functionality so that generic code could work while remaining oblivious to the ColorType and ColorSpace of the pixel. It isn't really obvious to me what this would look like, but seems like the preferable option if feasible
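A purely hypothetical sketch of that first view; none of these names exist in `image`, they only illustrate the "thin wrapper plus declared `ColorType`/`ColorSpace`" shape:

```rust
// Strawman: a pixel as a channel buffer plus declared type/space tags.
#[derive(Copy, Clone, Debug, PartialEq)]
enum ColorType { L, La, Rgb, Rgba }
#[derive(Copy, Clone, Debug, PartialEq)]
enum ColorSpace { Srgb, LinearRgb }

trait RawPixel {
    type Subpixel;
    const COLOR_TYPE: ColorType;
    const COLOR_SPACE: ColorSpace;
    // Convenience access to channels, with no higher-level meaning attached.
    fn channels(&self) -> &[Self::Subpixel];
}

// The user handles each (type, space) pair they care about.
fn is_srgb_rgba<P: RawPixel>(_: &P) -> bool {
    P::COLOR_TYPE == ColorType::Rgba && P::COLOR_SPACE == ColorSpace::Srgb
}

#[derive(Copy, Clone)]
struct SrgbRgba8([u8; 4]);

impl RawPixel for SrgbRgba8 {
    type Subpixel = u8;
    const COLOR_TYPE: ColorType = ColorType::Rgba;
    const COLOR_SPACE: ColorSpace = ColorSpace::Srgb;
    fn channels(&self) -> &[u8] { &self.0 }
}

fn main() {
    let px = SrgbRgba8([1, 2, 3, 255]);
    assert!(is_srgb_rgba(&px));
    assert_eq!(px.channels(), &[1, 2, 3, 255][..]);
}
```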
Those may not be mutually exclusive alternatives, but levels of abstraction in the API. Pixels are just buffers with color types and spaces, as you say; higher-level pixel traits then provide higher-level functionality where available. So if a user just needs (strawmen) `Blendable`, `Invertible`, `HasChannel<Channel::Luminance>` or `InterpretableAs<Channel::Red, ColorSpace::sRGB>`, they can construct

```rust
fn myFancyFilter<A, B>(a: &A, b: &mut B)
where
    A: Invertible + HasChannel<Channel::Luminance>,
    B: Blendable<A> + InterpretableAs<Channel::Red, ColorSpace::sRGB>,
{ ... }
```

etc.

If they need something more format-specific not captured in those higher-level traits provided by `image` or `imageproc` or other crates, then they can operate on `Pixel`s directly and match/bound the color types and spaces they need to, creating new higher-level traits if they wish, e.g.

```rust
trait Darken: Pixel { ... }
impl<P: Pixel<ColorType = RGB, ColorSpace = sRGB>> Darken for P { ... }
```

So while I agree with @HeroicKatora that it's not an entirely coherent change to remove some of these conversions while leaving others, I can see the point in at least shrinking the exposed API, since what's exposed is not really coherent as it is, with color spaces and types a bit abused.
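To make the strawmen concrete, here is a self-contained sketch of the layered-trait idea covering only the `Invertible`/`Blendable` pieces (all names hypothetical; the `HasChannel`/`InterpretableAs` bounds from the comment are omitted):

```rust
// Sketch of the layered-trait strawman above. None of these traits exist in
// `image` today; they only illustrate capability-based bounds.
trait Invertible {
    fn invert(&mut self);
}

trait Blendable<Src> {
    fn blend(&mut self, src: &Src);
}

// A filter states only the capabilities it needs and stays oblivious to
// concrete color types or color spaces.
fn my_fancy_filter<A: Invertible + Clone, B: Blendable<A>>(a: &A, b: &mut B) {
    let mut inverted = a.clone();
    inverted.invert();
    b.blend(&inverted);
}

// Minimal impls so the sketch runs: an 8-bit grayscale pixel.
#[derive(Copy, Clone, Debug, PartialEq)]
struct GrayPixel(u8);

impl Invertible for GrayPixel {
    fn invert(&mut self) {
        self.0 = u8::MAX - self.0;
    }
}

impl Blendable<GrayPixel> for GrayPixel {
    // A deliberately trivial 50/50 blend, purely for illustration.
    fn blend(&mut self, src: &GrayPixel) {
        self.0 = ((self.0 as u16 + src.0 as u16) / 2) as u8;
    }
}

fn main() {
    let a = GrayPixel(10);
    let mut b = GrayPixel(200);
    my_fancy_filter(&a, &mut b);
    assert_eq!(b, GrayPixel(222)); // (200 + (255 - 10)) / 2
}
```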