
Internal server error for animated GIFs #110

Open
gianpaolom opened this issue Feb 10, 2022 · 8 comments
Labels: bug (Something isn't working)

@gianpaolom

I have installed the module and it works properly so far. However, I have recently noticed that I get an internal server error for animated GIFs.
Any clue about this?

@ofhouse
Member

ofhouse commented Feb 10, 2022

Ah, I see we currently have no tests for animated GIFs (only for animated WebP).
Need to take a look, thanks for the hint!

ofhouse added the bug label on Feb 10, 2022
@ofhouse
Member

ofhouse commented Feb 10, 2022

Tested the internal middleware against an animated GIF and it seems to work fine: milliHQ/pixel#9

Are you able to share the image in question?

@ofhouse
Member

ofhouse commented Feb 11, 2022

Thanks, will take a look over the weekend!

@ofhouse
Member

ofhouse commented Feb 13, 2022

Unfortunately it seems the image hits the maximum payload size for AWS Lambda of 6 MB.
Since the image is ~8 MB, it cannot be sent back from Lambda as a response.

The limit cannot be increased on the AWS side, so the only option here would be to replace the image with a smaller one (<6 MB).
I don't know for sure whether we could implement a workaround here so that the original image is served when the limit is hit.

@gianpaolom
Author

@ofhouse Thank you for looking into the issue!
I will have the developers set a constraint on the upload size for GIFs.

Re the workaround: if you ever happen to figure something out, maybe just reference this thread so I get a heads-up!

Thank you again for your work! :)

@gianpaolom
Author

Hello @ofhouse! I just had a thought, but I'm not sure if it could apply, as I am not aware of how the image resizing is handled.
However, to try to work around that payload limit, couldn't you read the image from S3 as a stream, pipe it into sharp, and then write it back to S3 as a stream? Something like the sketch below.
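A rough sketch of what I mean (assuming the AWS SDK v3 and a sharp version with GIF output support; the bucket and key names are just placeholders):

```ts
// Stream the source image from S3 through sharp and back to S3, so the
// image bytes never travel through a Lambda response payload.
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';
import sharp from 'sharp';
import type { Readable } from 'stream';

const s3 = new S3Client({});

async function resizeAnimatedGif(bucket: string, sourceKey: string, targetKey: string) {
  // Read the original object as a Node.js stream
  const { Body } = await s3.send(
    new GetObjectCommand({ Bucket: bucket, Key: sourceKey })
  );

  // `animated: true` makes sharp keep all GIF frames instead of just the first
  const transformer = sharp({ animated: true }).resize({ width: 640 }).gif();

  // `Upload` accepts a stream of unknown length and uploads it in parts,
  // so the full image never has to fit into a single response payload
  const upload = new Upload({
    client: s3,
    params: {
      Bucket: bucket,
      Key: targetKey,
      Body: (Body as Readable).pipe(transformer),
      ContentType: 'image/gif',
    },
  });

  await upload.done();
}
```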

@ofhouse
Member

ofhouse commented Apr 2, 2022

Hi @gianpaolom,
yes, the basic idea is to use a direct connection from CloudFront to S3 to avoid the limits of Lambda & API Gateway.

The best solution currently would be to introduce a new S3 bucket as a short-term cache (e.g. save the processed images there for 30 days, then expire the objects to save costs; see the lifecycle sketch below).
To process the images I would recommend adding an S3 Object Lambda to the S3 bucket.
So every response from S3 through CloudFront triggers an invocation.
If the image already exists, do not modify the object.
When the image does not exist, start the processing and save the result back to S3.
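For the expiry, a standard S3 lifecycle rule should do. A sketch, assuming the AWS SDK v3 (the rule ID is just a placeholder):

```ts
import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

// Expire cached images 30 days after creation to keep storage costs down
async function configureCacheExpiry(bucket: string) {
  await s3.send(
    new PutBucketLifecycleConfigurationCommand({
      Bucket: bucket,
      LifecycleConfiguration: {
        Rules: [
          {
            ID: 'expire-processed-images',
            Status: 'Enabled',
            Filter: { Prefix: '' }, // apply to every object in the bucket
            Expiration: { Days: 30 },
          },
        ],
      },
    })
  );
}
```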

As stated in the linked blog post:

> Finally, I use the new WriteGetObjectResponse API to send the result of the transformation back to S3 Object Lambda. In this way, the transformed object can be much larger than the maximum size of the response returned by a Lambda function.

So this should also work for payload sizes that exceed the Lambda limit.
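A minimal sketch of what such an Object Lambda handler could look like (assuming Node 18+ for the global fetch, the AWS SDK v3, and sharp; the resize parameters are illustrative, not what we would actually ship):

```ts
import { S3Client, WriteGetObjectResponseCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

const s3 = new S3Client({});

export async function handler(event: any) {
  const { outputRoute, outputToken, inputS3Url } = event.getObjectContext;

  // S3 Object Lambda hands us a presigned URL to fetch the original object
  const response = await fetch(inputS3Url);
  const original = Buffer.from(await response.arrayBuffer());

  // Resize while keeping all GIF frames
  const transformed = await sharp(original, { animated: true })
    .resize({ width: 640 })
    .gif()
    .toBuffer();

  // WriteGetObjectResponse streams the result directly back to the requester,
  // so the 6 MB Lambda response payload limit does not apply here
  await s3.send(
    new WriteGetObjectResponseCommand({
      RequestRoute: outputRoute,
      RequestToken: outputToken,
      Body: transformed,
      ContentType: 'image/gif',
    })
  );

  return { statusCode: 200 };
}
```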
