Delete S3 media individually instead of in batches #69
Conversation
I think we want to merge this to upstream-v4.2.8. Also, we should check with SRE that that branch is still set to autodeploy to staging.
# since GCP XML API doesn't support batch delete
logger.debug { "Deleting #{keys.size} objects" }
keys.each do |key|
  bucket.object(key).delete
end
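For contrast, here is a rough sketch of the batch approach this PR replaces. It assumes the aws-sdk-s3 `Bucket#delete_objects` API, which caps each request at 1,000 keys (hence the slicing the reviewer mentions). The `FakeBucket` class is a hypothetical stand-in so the slicing logic can run without real S3 credentials:

```ruby
require "set"

# Hypothetical stand-in for Aws::S3::Bucket, used only to demonstrate
# the slicing logic without touching real S3.
class FakeBucket
  attr_reader :requests

  def initialize(keys)
    @keys = Set.new(keys)
    @requests = 0
  end

  # Mimics Aws::S3::Bucket#delete_objects: one API request per call,
  # each carrying up to 1,000 object identifiers.
  def delete_objects(delete:)
    @requests += 1
    delete[:objects].each { |obj| @keys.delete(obj[:key]) }
  end

  def empty?
    @keys.empty?
  end
end

def batch_delete(bucket, keys)
  # S3's DeleteObjects call accepts at most 1,000 keys per request,
  # so the key list has to be sliced before sending.
  keys.each_slice(1000) do |slice|
    bucket.delete_objects(delete: { objects: slice.map { |k| { key: k } } })
  end
end

keys = (1..2500).map { |i| "media/#{i}.jpg" }
bucket = FakeBucket.new(keys)
batch_delete(bucket, keys)
puts bucket.requests # 3 requests for 2,500 keys
puts bucket.empty?   # true
```

The per-key version in the diff above avoids the slicing entirely, which is why it ends up shorter.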
interesting that your non-batch version is fewer lines and less complex than the original batch version that needs to be sliced anyway! 🚀
haha! the batching makes a difference on the S3 side - putting the keys into slices of 1000 means each batch of 1000 media gets deleted in a single request, rather than O(n) requests with the non-batch way.
but it all gets deleted eventually so it shouldn't be an issue!
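The request-count trade-off described above can be made concrete with a small sketch (the `request_counts` helper and the 1,000-key batch size are illustrative assumptions, the latter matching S3's DeleteObjects limit):

```ruby
# Per-object deletes issue one request per key; batched DeleteObjects
# issues one request per slice of up to 1,000 keys.
def request_counts(n_keys, batch_size: 1000)
  {
    per_object: n_keys,                            # one request per key
    batched: (n_keys.to_f / batch_size).ceil       # one request per slice
  }
end

counts = request_counts(2500)
counts[:per_object] # 2500 requests
counts[:batched]    # 3 requests
```

Either way the same objects end up deleted; the difference is only in how many round trips it takes.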
No description provided.