
Add the capabilities of imgmin for optimizing images. #12

Open

133794m3r opened this issue Mar 6, 2014 · 11 comments

@133794m3r

Imgmin is a lossy optimizer, but it works by making the image appear no worse, so it's a really good system. Also, I doubt that just optimizing the Huffman trees will do a whole lot of extra good.

https://github.com/rflynn/imgmin

That is the link to his project, which goes through attempting to optimize the JPEGs. If you've not already, I'd try looking at 7zip's deflate encoder to see if its Huffman encoder is better than what most everyone else does. I know that its system can be beaten ever-so-slightly in full deflate streams when optimizing the Huffman trees, but since this is trying to be the single best system out there, I would like you to look at that system, and also at imgmin, for encoding JPEGs at a lossy level (not by default, but as an option) so that the files come out greatly smaller while not losing any visual quality.

P.S. I imagine this is going to be used on the web/for websites, so I see no reason not to add the option to do the lossy optimizations from imgmin in the final step, as a non-standard option.

@pengvado
Contributor

pengvado commented Mar 6, 2014

7zip is irrelevant. Unlike deflate, JPEG doesn't allow you to switch Huffman codebooks mid-stream. And as long as you only get one codebook, there's a simple algorithm that gives the exact optimal Huffman codes, and everyone already uses it.
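
For reference, that "simple algorithm" is Huffman's original construction: repeatedly merge the two least-frequent subtrees until one tree remains. A minimal Python sketch of the idea (my illustration, not mozjpeg's actual code; real JPEG encoders additionally cap code lengths at 16 bits and avoid the all-ones codeword):

```python
import heapq

def huffman_code_lengths(freqs):
    """Given {symbol: count}, return {symbol: optimal code length in bits}."""
    # Heap entries carry a tiebreaker index so ties never compare the symbol lists.
    heap = [(count, i, [sym]) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = dict.fromkeys(freqs, 0)  # a single-symbol input stays at length 0
    tie = len(heap)
    while len(heap) > 1:
        c1, _, syms1 = heapq.heappop(heap)
        c2, _, syms2 = heapq.heappop(heap)
        merged = syms1 + syms2
        for s in merged:
            lengths[s] += 1  # every symbol in a merged subtree sinks one bit deeper
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return lengths
```

For the classic textbook frequencies, `huffman_code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})` gives lengths 1/3/3/3/4/4.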

@133794m3r
Author

Ah, OK. I didn't know that, as I've not looked at JPEG in full detail.

@bdaehlie modified the milestone: v2.0 on May 7, 2014
@kowpa1990

Support lossy/aggressive optimizations

JPEGs are sometimes stored with unnecessarily high quality, and there are tools like adept and JPEGmini that try to recompress JPEGs at the lowest quality possible.

In my opinion, you should introduce lossy compression of images. I recently checked jpeg-recompress and it works perfectly. I mean: implement a solution that introduces no change visible at first sight in the appearance of the image.

Other interesting projects of this type:

@dwbuiten
Contributor

dwbuiten commented Jul 9, 2014

Also maybe of interest: JPEGmini's metric (BBCQ) is detailed in their SPIE 2011 paper here. It's basically geometrically weighted PSNR, local variance, and edge detection on 8x8 block boundaries, computed over tiles / a window, with a few hacks based on the max luma value. I implemented their PSNR and AAE metrics here, but not the local variance (I used a linear weighting because I was lazy, and MATLAB made deriving it easy).
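
As a rough illustration of the blockwise, geometrically weighted PSNR idea, here is one plausible reading of it in Python (a sketch under my own weighting assumptions, not BBCQ itself):

```python
import numpy as np

def blockwise_geo_psnr(ref, test, block=8):
    """Geometric mean of per-block PSNRs over 8x8 tiles.
    ref, test: 2D float luma arrays, same shape, values in 0..255."""
    h = ref.shape[0] - ref.shape[0] % block  # crop to whole blocks
    w = ref.shape[1] - ref.shape[1] % block
    logs = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            d = ref[y:y+block, x:x+block] - test[y:y+block, x:x+block]
            mse = max(np.mean(d * d), 1e-8)
            psnr = 10 * np.log10(255.0 ** 2 / mse)
            logs.append(np.log(max(psnr, 0.1)))  # clamp so terrible blocks stay finite
    # Geometric weighting: a few very bad blocks pull the score down harder
    # than an arithmetic mean would.
    return float(np.exp(np.mean(logs)))
```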

They do other fancy things like mucking with deadzones and stuff too.

As for Adept's ROI coding, it seems nifty, but dangerous. e.g. on flat images like cartoons or anime it could go very, very wrong.

@kornelski
Member

@dwbuiten Unfortunately I can't check it out, as the vimeows.com GitHub link requires login, and the linear weighting link seems to point to the same page.

@dwbuiten
Contributor

dwbuiten commented Jul 9, 2014

@pornel I've edited my post to link to the correct URL now. I've been having issues with the latest Firefox not updating the URL bar properly, and I keep messing up copy/paste of URLs.

@gunta

gunta commented Jul 15, 2014

+1 for jpeg-recompress / JPEGmini-like algorithms.
I think this is the most important feature to add,
since it will yield more than 10% extra compression in the end.

Manual tuning of compression quality should be left for very special cases only.

@danielgtaylor

Author of jpeg-recompress here. I plan to evaluate and switch to using libmozjpeg for encoding in the near future. The problem I see with adding this functionality to the mozjpeg encoder by default is that it requires many encoding passes, so image encoding times increase by e.g. 5x. That may not be the best default behavior.
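
For context, the multi-pass approach is essentially a search over the encoder's quality setting against a perceptual threshold. A minimal sketch with Pillow and numpy, using plain PSNR as a stand-in for the SSIM-style metrics jpeg-recompress actually uses (the function name, target, and bounds are illustrative, and the source is assumed to be an RGB or grayscale image):

```python
import io
import numpy as np
from PIL import Image

def recompress(path, target_psnr=42.0, lo=40, hi=95):
    """Binary-search JPEG quality down to a perceptual target,
    paying one full encode per pass."""
    src = Image.open(path)
    ref = np.asarray(src.convert("L"), dtype=np.float64)
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        src.save(buf, "JPEG", quality=q)   # one encoding pass
        buf.seek(0)
        out = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
        mse = max(np.mean((ref - out) ** 2), 1e-8)
        psnr = 10 * np.log10(255.0 ** 2 / mse)
        if psnr >= target_psnr:
            best, hi = buf.getvalue(), q - 1   # good enough: try lower quality
        else:
            lo = q + 1                          # too lossy: raise quality
    return best                                 # None if even the top quality failed
```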

That said, if you plan to add it to mozjpeg, let me know how I can help.

@dwbuiten
Contributor

@danielgtaylor Most of that slowness is from the entropy coding optimization. Disabling that can probably speed it up quite a bit, while still using trellis.
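
In terms of the sketch above, that would mean the intermediate search passes skip the extra Huffman-optimization pass and only the winning quality gets the full treatment. With Pillow that maps to the `optimize` flag (libjpeg's `optimize_coding`; mozjpeg's trellis quantization is a separate encoder-side parameter not shown here). `best_q` and `final_buf` are hypothetical names for the search result:

```python
# Intermediate search passes: fast encode, default Huffman tables.
src.save(buf, "JPEG", quality=q, optimize=False)

# Final encode only: spend the extra time on optimized entropy coding.
src.save(final_buf, "JPEG", quality=best_q, optimize=True)
```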

@danielgtaylor

I've added support for libmozjpeg to jpeg-recompress in danielgtaylor/jpeg-archive@aa9abe1, and the results so far are promising. Running jpeg-recompress now takes about twice as long as it did before, with only the final step using all optimizations (far better than taking 5x as long). It may be possible to reduce this further. Initial results with my small test data set:

Folder | Size (MB) | Compression Ratio | Time (seconds)
--- | --- | --- | ---
test-files | 16.8 | 100% | -
test-output-libjpegturbo | 5.8 | 35% | 16.7
test-output-libmozjpeg | 4.7 | 28% | 27.4

That's an average 7% extra reduction in file size at the same perceived visual quality (using -q medium, which sets an SSIM target of 0.9999).
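
For reference, SSIM compares local means, variances, and covariance of the two images. A single-window simplification in Python (whole-image statistics, whereas real implementations, including jpeg-recompress, use a sliding window):

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Whole-image SSIM for two 2D float luma arrays in 0..L."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cov + C2)) /
            ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))
```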

@gunta

gunta commented Jul 16, 2014

@danielgtaylor Sounds nice!
