Remote storage and cache - Huge amount of (useless ?) cache #35820
Hi @Nuranto. Thank you for your report.
Additional information: We tried to disable the L2 cache and use our dedicated-to-cache Redis instance instead. Here is what I can see after the
64G of cache, for "only" 50K products and 229 869 entries in
I guess that's not a good thing for performance, especially since in our case we use Minio as the S3 remote engine, which is in the same cluster, so a call to Redis is probably not significantly faster than requesting Minio directly.
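To put numbers on that footprint, here is a minimal sketch for counting the cache keys and estimating their memory usage directly in Redis. It assumes the phpredis extension, a Redis ≥ 4.0 server (for `MEMORY USAGE`), and the `zc:k:` key prefix that Cm_Cache_Backend_Redis uses; host, port and database are placeholders:

```php
<?php
// Sketch: count cache keys and estimate their memory footprint in Redis.
// Assumes phpredis, Redis >= 4.0 (MEMORY USAGE) and the "zc:k:" prefix
// used by Cm_Cache_Backend_Redis. Connection details are placeholders.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->select(0);
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY);

$cursor = null;
$count = 0;
$bytes = 0;
while ($keys = $redis->scan($cursor, 'zc:k:*', 1000)) {
    foreach ($keys as $key) {
        $count++;
        $bytes += (int) $redis->rawCommand('MEMORY', 'USAGE', $key);
    }
}
printf("%d cache keys, ~%.2f GB\n", $count, $bytes / (1024 ** 3));
```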
Hi @engcom-Hotel. Thank you for working on this issue.
Hello @Nuranto, Thanks for the report and collaboration! We have tried to reproduce the issue in Magento 2.4-develop, but it is not reproducible for us. We have verified it in the
We have followed the below steps:
Please let us know if we have missed anything. Thanks
Hello @engcom-Hotel, Have you enabled and configured remote storage (S3) during your test?
Hello @Nuranto, Yes, we have tried it with S3 enabled, but we were unable to reproduce the issue. Please refer to the below screenshot:
The below path doesn't have any changes:
Please let us know if we have missed anything. Thanks
Hello @engcom-Hotel, Have you run the consumer? I think you can also run the resize command in synchronous mode; it should give the same result.
Hello @Nuranto, Yes, we tried it with both sync and async commands, but the issue was not reproducible for us. Can you please point us to the path where this cache is saved for you? Thanks
Well, in my case I can see all those cache files with:
Each of these files looks like this:
You can find the cache logic in
However, I don't understand why you cannot reproduce this. The
Hello @Nuranto, I have tried to debug this in detail and for me, the cache path is `var/cache`.
I have also observed that in order to save the cache, it is using the below method from colinmollenhour/Cm_Cache_Backend_File:
https://github.com/colinmollenhour/Cm_Cache_Backend_File/blob/034bf73adfdc5b02057ae3ef2a2255b381f46944/File.php#L178-L205
And the `$hash` variable is empty in this method. This function is called while debugging the below command:
`XDEBUG_CONFIG=idekey=PHPSTORM bin/magento remote-storage:sync`
Need your input on these findings. Thanks
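For readers following along, here is a simplified, illustrative sketch of how file cache backends of this kind map a cache id to an on-disk file. It is not the actual Cm_Cache_Backend_File code, only a stand-in to show that every cache entry becomes its own file, and that the hashed-directory part of the path can legitimately be empty when `hashed_directory_level` is 0:

```php
<?php
// Illustrative stand-in (NOT the real Cm_Cache_Backend_File implementation):
// file cache backends typically turn each cache id into one file on disk,
// optionally spread across subdirectories derived from a hash of the id.
function cacheFilePath(string $cacheDir, string $id, int $hashedDirectoryLevel = 0): string
{
    $path = rtrim($cacheDir, '/') . '/';
    if ($hashedDirectoryLevel > 0) {
        $hash = hash('adler32', $id); // any cheap hash works for bucketing
        for ($i = 0; $i < $hashedDirectoryLevel; $i++) {
            $path .= 'mage--' . substr($hash, $i, 1) . '/';
        }
    }
    // One cache entry == one file, so per-image entries cost one inode each.
    return $path . 'mage---' . $id;
}

echo cacheFilePath('var/cache', 'FLYSYSTEM_IMAGE_METADATA_EXAMPLE', 0), PHP_EOL; // flat layout
echo cacheFilePath('var/cache', 'FLYSYSTEM_IMAGE_METADATA_EXAMPLE', 2), PHP_EOL; // two hashed levels
```

Whether or not hashed directory levels are used, the file count still grows linearly with the number of cached entries, so the directory layout only changes where the files land, not how many inodes they consume.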
Hello @engcom-Hotel Yes, in my case, as specified in my first message, we are using L2 caching, with a basic configuration taken from the docs, with this line:
But I don't think the cache location matters for this issue. For the
But if you're looking there, it means you managed to reproduce the caching behaviour of one cache file per image?
To me, the whole caching of remote files' metadata is an error that should be either removed, refactored, or made optional (I guess it can be useful with few images, or when the distant S3 server is slow).
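For reference, a minimal sketch of the kind of two-level (L2) cache configuration the Magento docs describe for `app/etc/env.php`. The exact line the reporter refers to above is not preserved, and all hosts, ports and paths below are placeholders:

```php
<?php
// app/etc/env.php (excerpt) -- documented L2 cache shape; values are placeholders.
return [
    // ...
    'cache' => [
        'frontend' => [
            'default' => [
                'backend' => '\\Magento\\Framework\\Cache\\Backend\\RemoteSynchronizedCache',
                'backend_options' => [
                    'remote_backend' => '\\Magento\\Framework\\Cache\\Backend\\Redis',
                    'remote_backend_options' => [
                        'server' => '127.0.0.1',
                        'port' => '6379',
                        'database' => '0',
                    ],
                    // Local (L1) side of the two-level cache -- a file backend.
                    'local_backend' => 'Cm_Cache_Backend_File',
                    'local_backend_options' => [
                        'cache_dir' => '/dev/shm/',
                    ],
                ],
            ],
        ],
    ],
];
```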
Hello @Nuranto, Thanks for the reply! Yes, I managed to reproduce the behavior of one cache file per image by running the below command:
But as I told you, this seems to be due to colinmollenhour's Cm_Cache_Backend_File
Please suggest. Thanks
Hello @engcom-Hotel, I don't understand why you think the issue is in Cm_Cache_Backend_File.
But I agree with your fix proposal: we should not persist this cache.
Hello @Nuranto, Thanks for your reply! As per the process, we are moving this ticket forward for a fix and hence confirming the issue. Thanks
✅ Jira issue https://jira.corp.adobe.com/browse/AC-6907 is successfully created for this GitHub issue.
✅ Confirmed by @engcom-Hotel. Thank you for verifying the issue.
Summary (*)
We have L2 caching enabled, but I guess you can reproduce the issue by just setting the cache backend to Filesystem.
When an S3 remote system is configured, a file cache entry is created for each single image (with tags `flysystem` and `mage`). The result is that we crashed one of our servers by launching `bin/magento catalog:images:resize --async` and `bin/magento queue:consumers:start media.storage.catalog.image.resize`, because we reached the server's inode limit.
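For anyone checking whether they are heading toward the same wall, here is a quick sketch that counts the files, and therefore roughly the inodes, the cache is consuming. The `var/cache` path is an assumption; adjust it if your local cache_dir points elsewhere (e.g. /dev/shm):

```php
<?php
// Sketch: count files under var/cache to see how many inodes the cache consumes.
// The path is an assumption; change it if cache_dir points elsewhere.
$dir = 'var/cache';
$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
);
$count = 0;
foreach ($iterator as $item) {
    if ($item->isFile()) {
        $count++;
    }
}
printf("%d cache files under %s\n", $count, $dir);
```

Comparing that number against the partition's `df -i` output shows how close the server is to its inode limit.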
Proposed solution
I'm not sure this cache is relevant at all; it should probably be removed.
If I'm wrong and the cache is useful for performance, then we should disable that caching during bulk operations like `bin/magento catalog:images:resize`.
Additional Information
We have tried to debug this in detail and for me, the cache path is `var/cache`.
I have also observed that in order to save the cache, it is using the below method from colinmollenhour/Cm_Cache_Backend_File:
https://github.com/colinmollenhour/Cm_Cache_Backend_File/blob/034bf73adfdc5b02057ae3ef2a2255b381f46944/File.php#L178-L205
And the `$hash` variable is empty in this method. This function is called while debugging the below command:
`XDEBUG_CONFIG=idekey=PHPSTORM bin/magento remote-storage:sync`