Always prefer the PDF.js JPEG decoder for very large images, in order to reduce peak memory usage (issue 11694) #11707

Merged: 1 commit, Mar 24, 2020

Commits on Mar 20, 2020

  1. Always prefer the PDF.js JPEG decoder for very large images, in order to reduce peak memory usage (issue 11694)
    
    When JPEG images are decoded by the browser, on the main thread, a handful of short-lived copies of the image data are created; see https://github.com/mozilla/pdf.js/blob/c3f4690bde8137d80c74203b1ad91476fc2ca160/src/display/api.js#L2364-L2408
    That code thus becomes quite problematic for very big JPEG images, since it significantly increases peak memory usage during decoding. The referenced issue contains a couple of JPEG images whose dimensions are `10006 x 7088` (i.e. ~68 megapixels), which causes the *peak* memory usage to increase by close to `1 GB` (i.e. one gigabyte) in my testing.
    
    By letting the PDF.js JPEG decoder, rather than the browser, handle very large images, the *peak* memory usage is considerably reduced and the allocated memory also seems to be reclaimed faster (a rough sketch of such a size-based decoder choice is included after the commit details below).
    
    *Please note:* This will lead to movement in some existing `eq` tests.
    Snuffleupagus committed Mar 20, 2020 (commit 62a9c26)
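
To make the trade-off concrete, here is a minimal sketch of the kind of size-based decoder selection and memory estimate the commit message describes. The threshold value and the names `MAX_NATIVE_JPEG_PIXELS` and `shouldUseBuiltInJpegDecoder` are illustrative assumptions, not the actual PDF.js implementation.

```js
// Illustrative sketch only; the constant and function names are hypothetical
// and do not appear in the PDF.js source.
const MAX_NATIVE_JPEG_PIXELS = 4096 * 4096; // assumed cut-off (~16.8 M pixels)

// Prefer PDF.js's own JPEG decoder once an image exceeds the threshold, so
// the browser never creates its short-lived main-thread copies of the data.
function shouldUseBuiltInJpegDecoder(width, height) {
  return width * height > MAX_NATIVE_JPEG_PIXELS;
}

// Back-of-the-envelope cost for the images from issue 11694:
// 10006 * 7088 = 70,922,528 pixels (~68 M in binary units); one RGBA copy at
// 4 bytes per pixel is roughly 270 MiB, so a handful of copies approaches 1 GB.
const pixels = 10006 * 7088;
const mibPerRgbaCopy = (pixels * 4) / 1024 ** 2;

console.log(shouldUseBuiltInJpegDecoder(10006, 7088)); // true
console.log(`${mibPerRgbaCopy.toFixed(1)} MiB per RGBA copy`); // ~270.5 MiB
```

Where exactly such a check lives, and what the real cut-off is, depends on the PDF.js image-decoding code path; the sketch above only captures the idea of avoiding native decoding for very large JPEGs.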