[Feature]: Add option to optimize uploaded images automatically #1202
Comments
Then if you want to retrieve the original file to print, you can no longer do it.
Closing this issue as it is not in the scope of the project.
Hi, I find this feature useful as well, but it can be done manually, directly on the library. By the way, Immich is amazing!
@lpryszcz Immich stores the file size and MD5 in the database and uses the MD5 to check for duplicates.
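As a rough illustration of MD5-based duplicate checking (a sketch of the general technique, not Immich's actual implementation — function names here are hypothetical):

```python
import hashlib

def file_md5(path: str) -> str:
    """Compute the MD5 digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_duplicate(path: str, known_hashes: set[str]) -> bool:
    """Treat a file as a duplicate if its MD5 is already recorded."""
    return file_md5(path) in known_hashes
```

Note that this is why compressing a stored file changes its hash: any later upload of the original would no longer match the stored copy's digest.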
@lpryszcz would you be interested in collaborating to find a way to compress already-stored images, at least manually, server-side? I see the following possible options:
I suggest we first try hacky ways to optimize the file storage, and only afterwards try to create a more or less stable PR to Immich. Useful links: DB queries
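For the "hacky, manual, server-side" route, a cautious first step is a dry-run script that only builds the re-encode command for inspection rather than running it. The sketch below does that; the paths, quality value, and function name are illustrative assumptions, not anything from Immich's codebase:

```python
import shlex

def recompress_command(src: str, dest: str, quality: int = 85) -> list[str]:
    """Build (but do not run) an ImageMagick re-encode command.

    The quality default and paths are illustrative only.
    """
    if not 1 <= quality <= 100:
        raise ValueError("quality must be in 1..100")
    return ["convert", src, "-quality", str(quality), dest]

# Print the command for manual review before ever touching the library.
cmd = recompress_command("/data/library/IMG_0001.jpg", "/tmp/IMG_0001.jpg")
print(shlex.join(cmd))
```

Writing to a separate destination first (rather than in place) keeps the original recoverable until the result has been checked, which matters given the hash/dedup concerns discussed below.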
I like the idea, @WinnerOK.
This is also my favourite (at least for now).
The main question is, how stable is the DB design? It makes little sense to develop something that will be broken by future releases (this is why #165 is on hold, as far as I understand, right?).
If we do client-side compression and use the hash of the compressed file for dedup, then the client will have to repeat the compression to properly check for duplicates. That seems like a lot of work for the client. The original, high-quality image should still be available on the user's device, so we should check for duplicates based on the original hash. (Keep in mind that hashing is currently server-side, but it could become client-side at some point: #2567.)

If we have to consider the original file hash, then we have to make sure the hash in the DB is used only for deduplication and does not actually get verified afterwards (otherwise we would have to store two hashes in the DB); then we could just store the original hash. @alextran1502, could you tell us whether Immich uses the file hash anywhere except deduplication during upload?

If we dive into the server, we could also somewhat address the issue mentioned above (#1202 (comment)) by adding 2 configurations. This way, people who have limited storage can give up quality but still preserve their favourite moments as-is.
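To make the two-hash idea above concrete, here is a minimal sketch (the class and field names are hypothetical, not Immich's actual schema) of an index that dedups on the hash of the original upload while separately recording the hash of whatever ends up stored after compression:

```python
from dataclasses import dataclass

@dataclass
class AssetRecord:
    # Hash of the file as it left the device; used only for dedup.
    original_md5: str
    # Hash of what is actually stored server-side (differs after compression).
    stored_md5: str

class AssetIndex:
    """Sketch of dedup keyed on the original hash, assuming server-side compression."""

    def __init__(self) -> None:
        self._by_original: dict[str, AssetRecord] = {}

    def add(self, original_md5: str, stored_md5: str) -> bool:
        """Record a new asset; return False if the original was already uploaded."""
        if original_md5 in self._by_original:
            return False
        self._by_original[original_md5] = AssetRecord(original_md5, stored_md5)
        return True
```

The point of the split is exactly the question posed above: dedup can keep working against `original_md5` even though the stored bytes no longer match it, but only if nothing else ever verifies the stored file against that hash.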
I see someone started something about it here: #1242. Also, someone made something that already works with Immich: https://gist.github.com/JamesCullum/6604e504318dd326a507108f59ca7dcd
I'm interested in an on/off switch as well. My family and I don't particularly see the point in anything larger than 8 MP or 12 MP anyway.
Unfortunately, according to the project's developer, this is not a feature that will be added in the near future, or at all. In my reply above, I showed how it is still possible to apply compression to details, but it seems that this affects the entire library and cannot be selected per user.
It's unfortunate, as it could save a tremendous amount of space for those of us who don't need original quality on a remote server, especially if the photo is cached locally. I guess I'll have to look into a solution for that. Do you know of any that have been created by other users?
My issue #8907 was recently closed as a duplicate of this unfortunately (also) closed issue. I think this makes it pretty clear that the team won't work on this. It would be a great option to be able to toggle per user, even if only for video. Images occupy a minuscule amount of space, relatively speaking, but many of my users are uploading giant 4K videos that don't really benefit from their inflated bitrates, and my machine is already doing the work of converting them to proxies for viewing. I understand it's against the spirit of Immich as the developers see it, but it would go a very long way toward making this a much more versatile solution.
@alextran1502 Finally, could the incoming workflows feature fit this need, since we would be able to apply a pipeline over the imported photos?
Potentially; I think it will fit with custom plugins.
I think this feature is very viable for certain use cases and workflows, but it may not be a good fit for the core of what Immich is. This is said as someone who really wants this feature! Immich has grown immensely in popularity and functionality (and for good reason!), and with that come more and more use cases and workflows.

Personally, I would love this feature, since I'm currently trying to use Immich as a photo/video sharing platform for close family members. I have 80,000 photos and videos in my Lightroom catalog and need a good way to share those with the family, but I am not expecting Immich to take over all that content as the main storage/organization system. Rather, it's a glorified web gallery so my family can access our family photos without being "locked" into my LR catalog, and I don't have to eat up all my cloud storage and rely on Adobe. Thus, I don't want or need Immich to hold the original files, backups, etc., just "web copies" that family can easily access. Some of my original video files are massive as well, as they are shot in 4K 60 on a Canon mirrorless, so a 15-second clip can be around 0.5 GB.

I would love to see a workflow that allows for this kind of thing. I think workflows would open up a huge amount of possibility for all kinds of cases without bogging down or diluting core development, or making the whole product a spaghetti mess trying to accommodate everyone's specific use cases.
I'd second that! In addition, my phone camera saves lightly compressed JPEG photos (~8 MB each) that could easily be compressed to reduce their size 10x. So for me, having an option to reduce file size before sending it to the server would be a great benefit.
If I understand this correctly, you could just link your originals to Immich via an external library. That way, it is not the main storage but still does the video/image transcoding. Basically exactly what you want it to do, really.
That could work for some workflows, but mine is to publish from Lightroom to Immich using a plugin. The main reason is that I have thousands of videos in my collection, don't want all of them in Immich, and cannot use the current file structure to filter them out on disk. I have smart collections in Lightroom that automatically filter down to just specific videos (say, 2 stars and above, with specific people or other keywords) and publish only those to Immich. Lightroom can technically do conversions before sending to Immich (instead of sending the full-size original), but it can only do a max of 1080p (I want 1440p) and only non-HDR H.264 (I want HDR video in Immich). So I'm forced to send the full-size video file, which can sometimes be massive.
Feature detail
The vast majority of home users won't notice if compression is applied to their images.

I have tested this with the ImageMagick tool on Linux by varying the quality setting; personally, I only start to notice it when I lower it to 35%. I think it would be a good idea to add an option to compress images when uploading, as this would allow storing many, many more in the same space for a negligible amount of server usage.

According to tests I have done, a 5.8 MB image can be reduced to 350 KB (35% quality).
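Plugging those reported numbers in confirms the scale of the savings:

```python
# Figures from the test above: 5.8 MB original, 350 KB at quality 35.
original_kb = 5.8 * 1024   # 5.8 MB expressed in KB
compressed_kb = 350

ratio = original_kb / compressed_kb          # how many times smaller
savings = 1 - compressed_kb / original_kb    # fraction of space saved

print(f"{ratio:.1f}x smaller, {savings:.1%} saved")
# prints: 17.0x smaller, 94.1% saved
```

So even a single photo at this setting frees roughly 94% of its original footprint, which is where the "many, many more in the same space" claim comes from.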
Platform
Server