
Avoid reading beyond the IEND chunk #531

Open
anforowicz opened this issue Nov 13, 2024 · 7 comments

Comments

@anforowicz
Contributor

Upfront disclaimer: I think that this issue is a relatively low priority for Chromium - I am mainly reporting it to openly share and discuss some of the more exotic requirements/scenarios that may be imposed on image decoders.

Consider the following scenario (based on what a Skia test does here):

  • Main input (e.g. Read in Rust or SkStream in Skia) contains a PNG image concatenated with some other data
  • The input is wrapped in another, secondary Read / SkStream (which just forwards all calls to the main input; it is present only to accommodate the fact that png::Decoder::new consumes/takes ownership of the input). A minimal sketch of such a forwarder is shown after this list.
  • The secondary input is passed to the png crate for decoding the whole image (including the IEND chunk, via the Reader.finish call).
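
For concreteness, here is a minimal sketch of the setup on the Rust side; the ForwardingReader type and the decode_png helper are illustrative names, not part of Skia or the png crate:

```rust
use std::io::{self, Read};

// Hypothetical forwarding wrapper: png::Decoder::new takes ownership of its
// input, so the caller keeps ownership of the main stream and hands the
// decoder a thin wrapper that merely forwards every read call.
struct ForwardingReader<'a, R: Read> {
    inner: &'a mut R,
}

impl<'a, R: Read> Read for ForwardingReader<'a, R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.inner.read(buf)
    }
}

fn decode_png<R: Read>(main_input: &mut R) -> Result<(), png::DecodingError> {
    let decoder = png::Decoder::new(ForwardingReader { inner: main_input });
    let mut reader = decoder.read_info()?;
    let mut buf = vec![0; reader.output_buffer_size()];
    reader.next_frame(&mut buf)?;
    // finish() parses through IEND, but the decoder's internal BufReader may
    // already have read further from main_input than that.
    reader.finish()?;
    Ok(())
}
```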

Expected behavior:

  • The client code should be able to figure out where the PNG / IEND data ends. In the case of the Codec_end test in Skia, this is achieved through the expectation that the main input stream ends up positioned right after the IEND chunk.

Actual behavior:

  • std::io::BufReader can aggressively read arbitrarily many bytes from the wrapped Read, potentially overshooting beyond the IEND chunk.

I realize that the Read trait doesn't have any API to 1) "peek" forward while 2) only "consuming" up to the end of IEND, and therefore I don't quite see a nice way to address this. Some very silly, brainstorming-quality, probably no-go ideas:

  • Maybe Reader could track and report how many bytes have actually been consumed via BufRead.consume (this should always be less than or equal to the number of bytes that have been read via Read.read). A sketch of such a counting wrapper follows this list.
  • Or, if the input is Read + Seek, maybe BufReader.seek could be called by Reader.finish to position the underlying reader at the end of the image.
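
To make the first idea a bit more concrete, here is a brainstorming-quality sketch of a counting wrapper; the CountingBufRead name is made up and nothing like it currently exists in the crate:

```rust
use std::io::{self, BufRead, Read};

// Hypothetical wrapper that counts bytes actually handed to the caller, i.e.
// bytes returned from read() plus bytes marked as used via consume(). Bytes
// merely read ahead into the internal buffer are not counted.
struct CountingBufRead<B> {
    inner: B,
    consumed: u64,
}

impl<B: BufRead> Read for CountingBufRead<B> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.inner.read(buf)?;
        self.consumed += n as u64;
        Ok(n)
    }
}

impl<B: BufRead> BufRead for CountingBufRead<B> {
    fn fill_buf(&mut self) -> io::Result<&[u8]> {
        self.inner.fill_buf()
    }
    fn consume(&mut self, amt: usize) {
        self.consumed += amt as u64;
        self.inner.consume(amt);
    }
}
```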

This issue is tracked in https://crbug.com/378936780 on the Chromium side. It seems like a relatively low priority from Chromium's perspective, but (based on a test comment) it may potentially cause issues with the adoption of SkPngRustCodec in Android.

@kornelski
Contributor

kornelski commented Nov 15, 2024

BufReader is not exposed in the public API of this crate, so we can easily replace it with a custom buffer that never reads past chunk boundaries.

Apart from that, the Reader could have into_inner() that returns the reader to the caller, so the caller can rewind it if possible.

@fintelia
Contributor

The BufReader we use now serves two purposes: (a) it prevents many small reads from directly hitting the underlying reader, and (b) it throttles the amount of data processed at once, which produces better cache behavior.

The former could be replaced by adding a BufRead bound (as image::PngDecoder already has), but the latter is trickier. I've been working on-and-off on a revived version of #458 that may be able to address that, though.

A simpler workaround would be adding a Seek bound and, immediately upon hitting IEND, seeking backward by reader.buffer().len() bytes (to "un-read" precisely the number of bytes that were buffered but not yet consumed).
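
A rough sketch of that workaround (the rewind_unconsumed name is made up; this is not an existing png-crate API), assuming the internal std::io::BufReader can be recovered at the point IEND is reached:

```rust
use std::io::{BufReader, Read, Seek, SeekFrom};

// After IEND has been parsed: the BufReader may have read ahead, so seek the
// *underlying* stream backward by the number of bytes that are sitting in the
// buffer but were never consumed. The underlying reader then ends up
// positioned right after IEND.
fn rewind_unconsumed<R: Read + Seek>(reader: BufReader<R>) -> std::io::Result<R> {
    let unconsumed = reader.buffer().len() as i64;
    let mut inner = reader.into_inner();
    inner.seek(SeekFrom::Current(-unconsumed))?;
    Ok(inner)
}
```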

@kornelski
Contributor

This post made me question whether it's a good idea to use BufReader internally:

https://graphallthethings.com/posts/better-buf-read

The state machine is overcomplicated in order to handle 1-byte buffers, but such buffers are useless. Being forced to support such an edge case seems to be a design flaw of the BufRead trait.

@fintelia
Contributor

fintelia commented Nov 15, 2024

Yeah, if you look closely at the provided methods on BufRead, you'll notice that they're basically a roundup of everything you might want to do if you had a 1-byte internal buffer. Read until you get to a specific byte. Split lines based on whether the next byte is \n. Check whether there's at least one more byte left to read.

My current theory is that a combination of methods from BufRead, Read, and Seek would be a better approach. Use fill_buf / consume to get the image data stream and feed it directly into the decompressor without copying. Use read_exact to grab 4 bytes at a time for header fields. And since read_exact leaves the reader at an unspecified position after hitting EOF, track the stream position of the most recent chunk header and seek back to it when resuming after an EOF.
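
As a rough illustration of that scheme (not the crate's actual code; read_chunk and its signature are made up), reading a single chunk could look something like this:

```rust
use std::io::{self, BufRead, Read, Seek, SeekFrom};

// Read one chunk: grab the 4-byte big-endian length and the chunk type with
// read_exact, remember where the chunk header started, and stream the chunk
// body to a callback via fill_buf/consume without an extra copy.
fn read_chunk<R: BufRead + Seek>(
    input: &mut R,
    mut sink: impl FnMut(&[u8]),
) -> io::Result<()> {
    // Remember the chunk header position so we can seek back here if we hit
    // EOF mid-chunk and want to resume later with more data.
    let header_pos = input.stream_position()?;

    let mut len_bytes = [0u8; 4];
    if let Err(e) = input.read_exact(&mut len_bytes) {
        // read_exact leaves the reader at an unspecified position on failure,
        // so rewind to the chunk header before reporting the error.
        input.seek(SeekFrom::Start(header_pos))?;
        return Err(e);
    }
    let mut remaining = u32::from_be_bytes(len_bytes) as usize;

    let mut chunk_type = [0u8; 4];
    input.read_exact(&mut chunk_type)?;

    // Stream the chunk body directly from the internal buffer.
    while remaining > 0 {
        let buf = input.fill_buf()?;
        if buf.is_empty() {
            input.seek(SeekFrom::Start(header_pos))?;
            return Err(io::Error::new(io::ErrorKind::UnexpectedEof, "truncated chunk"));
        }
        let n = buf.len().min(remaining);
        sink(&buf[..n]);
        input.consume(n);
        remaining -= n;
    }

    // (CRC handling omitted for brevity.)
    Ok(())
}
```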

@kornelski
Contributor

kornelski commented Nov 17, 2024

This makes me wonder whether anybody actually uses a BufRead implementation that isn't BufReader or a Vec/slice.

If the user has a slice, then BufRead+Seek becomes just a high-overhead, bug-prone interface for it. With Seek you now have Cursor indirection, read_exact generates code bloat by looping over read, and all of them funnel a relatively fat io::Error through an infallible source.

If the user doesn't have the whole data buffered, then Seek prevents them from simply using BufReader.

For animations, I think there will be two very different users: one that plays a looped animation, and one that processes/converts frames once. Playing an animation, especially from the network, will call for a complex growing buffer and keeping the whole file in memory. However, for 1-pass conversions, it is undesirable to keep the already-processed frames in memory just to satisfy a Seek bound.

I think the library could handle buffering internally, and explicitly allow creation from a &[u8] for those who have the whole buffer, giving trivial, infallible, zero-copy reading. For other cases it could take just Read, and handle buffering and seeking itself, with configuration for whether it should buffer for a single pass or with support for frame seeking.
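
A hypothetical shape for that, purely to illustrate the idea (the Input enum, next_bytes, and the 8 KiB refill size are all made up):

```rust
use std::io::Read;

// One input abstraction with two entry points: borrow a fully-buffered slice
// for zero-copy reading, or own a streaming reader plus the library's own
// internal buffer.
enum Input<'a, R: Read> {
    Slice(&'a [u8]),
    Stream { reader: R, buf: Vec<u8> },
}

impl<'a, R: Read> Input<'a, R> {
    /// Fetch the next run of bytes: zero-copy for the slice case, a refill of
    /// the internal buffer for the streaming case.
    fn next_bytes(&mut self) -> std::io::Result<&[u8]> {
        match self {
            Input::Slice(data) => {
                // Hand out everything that is left and mark it consumed.
                Ok(std::mem::take(data))
            }
            Input::Stream { reader, buf } => {
                buf.resize(8 * 1024, 0);
                let n = reader.read(buf.as_mut_slice())?;
                buf.truncate(n);
                Ok(buf.as_slice())
            }
        }
    }
}
```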

@fintelia
Contributor

If the user has a slice, then BufRead+Seek becomes just a high-overhead, bug-prone interface for it. With Seek you now have Cursor indirection, read_exact generates code bloat by looping over read, and all of them funnel a relatively fat io::Error through an infallible source.

I'm not sure there's really much overhead from Cursor<&[u8]>. It amounts to a slice plus a u64 to track the current position, and if you need Seek you would probably have tracked the position somehow regardless. Cursor also has special-cased implementations of the Read methods, so, for instance, its read_exact doesn't do any looping.

The io::Error type is bulkier than necessary, though, given that EOF is the only applicable error case.

If the user doesn't have the whole data buffered, then Seek prevents them from simply using BufReader.

I'm not sure I understand this point. BufReader<File> implements Seek but only loads data on-demand.

@kornelski
Contributor

Never mind, I had missed that BufReader implements Seek for R: Seek.
