Magento 2.4.1 - HUGE Cache Sizes grows quickly #32118
Comments
Hi @amitvkhajanchi. Thank you for your report.
Please make sure that the issue is reproducible on a vanilla Magento instance following the Steps to reproduce. To deploy a vanilla Magento instance on our environment, please add a comment to the issue:
For more details, please review the Magento Contributor Assistant documentation. Please add a comment to assign the issue:
- 🕙 You can find the schedule on the Magento Community Calendar page.
- 📞 The triage of issues happens in the queue order. If you want to speed up the delivery of your contribution, please join the Community Contributions Triage session to discuss the appropriate ticket.
- 🎥 You can find the recording of the previous Community Contributions Triage on the Magento YouTube Channel.
- ✏️ Feel free to post questions/proposals/feedback related to the Community Contributions Triage process to the corresponding Slack Channel.
Since your block caches are very big, I believe this might be a duplicate of #29964, which was fixed in Magento 2.4.2. You can try temporarily disabling the Magento_Csp module and see if that solves it. If it does, you can be certain it's the same bug, and upgrading to 2.4.2 should fix it.
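If you want to test that, the usual way is `bin/magento module:disable Magento_Csp` followed by a cache flush; under the hood this only flips the module's flag in app/etc/config.php. A minimal excerpt of what that change looks like (the real file lists every installed module):

```php
<?php
// app/etc/config.php (excerpt) -- the state recorded by
// `bin/magento module:disable Magento_Csp`; shown for illustration only
return [
    'modules' => [
        // ... all other modules ...
        'Magento_Csp' => 0, // 0 = disabled, 1 = enabled
    ],
];
```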
Thanks for the quick response. I have disabled the module. I am using Amazon EFS for the pub/media folder and realized that I had encryption turned on for it. I created another, unencrypted volume and mounted it to pub/media. This helped my problem somewhat: I was filling 100GB of space in 30 minutes, and that changed to 60GB in 3-4 hours, which was still crazy. I will check in the morning (as it's night in my time zone) to see how the disk fills up. So far it seems to have slowed down, but I will report back in the morning on whether disabling Magento_Csp improved it.
As I remember, this is caused by heavy plugin/design loading.
Thanks for all the feedback. I can confirm that disabling the Magento_Csp module resolved the issue on the Magento 2.4.1 version I have. My cache only grew to 600MB and is stable now, compared to multiple GB on previous days. This is in line with my experience on Magento 2.3.3. Thanks for the help and prompt feedback, I really appreciate it!
Still, this is an issue on M2.4.2!
We have Magento_CSP disabled from the start (never enabled).
Hmm! Interesting case.
@DavorOptiweb We have investigated this further, and it appears that what is consuming the memory in Redis is basically the 'page_cache': it stores the page source code, which consumes most of the memory. Once we disabled it / kept it off Redis, usage came down from 40GB to just 200-300MB. The whole concept seems to work against its own purpose.
@DavorOptiweb Please share the cache declaration in your env.php file.
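For readers who have not seen that section, a typical env.php cache declaration for Redis, along the lines of the standard setup documentation, looks roughly like the sketch below; the host, port and database numbers are placeholders, not this site's actual values. The zc:k:* and zc:ti:* keys discussed later in this thread are what the Cm_Cache_Backend_Redis backend writes for cache entries and tag indexes.

```php
<?php
// app/etc/env.php (excerpt) -- placeholder values, for illustration only
return [
    // ...
    'cache' => [
        'frontend' => [
            'default' => [
                'backend' => 'Cm_Cache_Backend_Redis',
                'backend_options' => [
                    'server' => '127.0.0.1', // placeholder host
                    'port' => '6379',
                    'database' => '0',       // blocks, config, layout, ...
                ],
            ],
            'page_cache' => [
                'backend' => 'Cm_Cache_Backend_Redis',
                'backend_options' => [
                    'server' => '127.0.0.1',
                    'port' => '6379',
                    'database' => '1',       // full page cache
                    'compress_data' => '0',
                ],
            ],
        ],
    ],
    // ...
];
```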
The only difference in my Redis setup is the session storage. How many products do you have? How many CMS blocks?
@DavorOptiweb Please try the following:
I think this one should look like the below.
@mrtuvn I missed that, just corrected it.
@DavorOptiweb Could you execute the command "redis-cli --bigkeys"? I can tell you now that it cannot be session keys.
```
# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest hash found so far 'zc:k:46c_AB140E4DCDEBB91C4004178C10A7A5D3BD7C179B' with 4 fields

-------- summary -------

Sampled 80175 keys in the keyspace!
Biggest set found 'zc:ti:46c_MAGE' has 69201 members
0 strings with 0 bytes (00.00% of keys, avg size 0.00)
```

Session keys are in a separate instance of Redis (using only 12M).
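For reference, splitting sessions off from the cache like that is usually declared in env.php along these lines (placeholder values; a dedicated instance would simply use a different host and port):

```php
<?php
// app/etc/env.php (excerpt) -- placeholder values, for illustration only
return [
    // ...
    'session' => [
        'save' => 'redis',
        'redis' => [
            'host' => '127.0.0.1',  // or a dedicated Redis instance
            'port' => '6379',
            'database' => '2',      // kept apart from the cache databases
            // further tuning keys (timeout, compression, ...) omitted
        ],
    ],
    // ...
];
```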
Having the same issue after upgrading from 2.3.3 to 2.4.2-p1. The cache-overfilling issue happens both in Redis and in file storage (if you don't have the Redis cache enabled). It would fill up my hard drive to the brim, and it only happens while the Catalog Search index is running. When I do redis-cli monitor, the output shows this:
.............. ^^^ this rapidly increases and keeps storing data. I don't think it's related to the cache type, as I see it both on the Redis and file storage caches. My env.php looks like this:
Here is the output of redis-cli --bigkeys
^^^^ this is still in progress while the Catalog Search index is running. The RAM load at this point was at 40GB. It went up by 20GB in the span of about 40 minutes.
It doesn't look like it's only related to Redis. It happened both on the Redis and file storage caches for me.
Looks like you still have to disable the CSP plugin for the issue to disappear. This is very Magento :) It looks like it's no longer storing a crazy amount of data into zc:k:c8a_BLOCK_81EE233A1B32FD871F6A4DD3CC91A5BFD107B28B_215322_FINAL_PRICE_LIST_CATEGORY_PAGE
IMHO the Magento_Csp module does not seem to be the root cause of the cache growth; maybe it's something else.
@pmonosolo we have automated deploys and they have passed many times since the upgrade, so this is not an issue.
I can't understand what is going on with the Magento 2 framework. I have a trace from _initProduct, and somehow it calls into the config more than 100 times; as a result, Magento spends around 500ms just reading config values from cache and decoding them. How can we make a website fast with such a design in the core?
Not sure it's related to your problem, but I saw one pull request related to fixing config reads; I will mention you if I find it. Magento 2 has a big core, so there is a lot we can improve, but it is also easy to make mistakes by doing it the wrong way.
Try upgrading the colinmollenhour Redis packages in your test instance.
I don't think it is related (I think that one was about fixing a long array of cache keys or something). @hostep @IbrahimS2 any idea why this is not being cached? Basically this method can be called many times, from many places or within a loop, etc., for the same config value (like whether a module is enabled, a store setting, ...). Developers shouldn't have to care about caching the value themselves, so it should cache the value inside the object.
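As an illustration of "cache the value inside the object", here is a minimal sketch of a hypothetical request-scoped wrapper around ScopeConfigInterface; the class is made up for this example and is not existing core code:

```php
<?php
declare(strict_types=1);

use Magento\Framework\App\Config\ScopeConfigInterface;
use Magento\Store\Model\ScopeInterface;

/**
 * Hypothetical helper: memoizes config lookups for the lifetime of the
 * request, so repeated calls for the same path/store do not hit the cache
 * backend and decode the stored config again each time.
 */
class MemoizedConfig
{
    /** @var ScopeConfigInterface */
    private $scopeConfig;

    /** @var array<string, mixed> */
    private $memo = [];

    public function __construct(ScopeConfigInterface $scopeConfig)
    {
        $this->scopeConfig = $scopeConfig;
    }

    /**
     * @param string $path e.g. 'catalog/frontend/grid_per_page'
     * @param string|null $storeCode
     * @return mixed
     */
    public function getValue(string $path, ?string $storeCode = null)
    {
        $key = $path . '|' . ($storeCode ?? 'default');
        if (!array_key_exists($key, $this->memo)) {
            $this->memo[$key] = $this->scopeConfig->getValue(
                $path,
                ScopeInterface::SCOPE_STORE,
                $storeCode
            );
        }
        return $this->memo[$key];
    }
}
```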
cc: @fooman may know this
@mrtuvn sorry, I have got nothing to add to this discussion.
Hello @amitvkhajanchi, have you tried to reproduce this issue on Magento 2.4-develop? Is it still reproducible for you? Thanks
No, I haven't tried to reproduce the issue on Magento 2.4-develop. I am now on 2.4.2 myself and running production with no issues -- I have Magento_Csp disabled (which fixed it for me). In a month or so I will try to port my app to 2.4.3, but I don't think I should run into issues.
Hello @amitvkhajanchi, thanks for the clarification! As per your comment, it seems that your issue has been resolved. Can we close this now? Thanks
Dear @amitvkhajanchi, we have noticed that this issue has not been updated for a period of 14 days. Hence we assume that this issue is fixed now, so we are closing it. Please raise a fresh ticket or reopen this ticket if you need more assistance. Regards
This still happens in Magento 2.4.2. The cache grows to 20GB in 2-3 hours.
Update: the issue seems to come from PayPal\Braintree\Plugin\ProductDetailsBlockPlugin::aroundGetProductDetailsHtml. This plugin lazily inserts marketing HTML, which is regenerated for every product load at the code level. We had to disable the PayPal plugin to get back to the original state.
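To make that mechanism concrete, here is a generic, hypothetical around plugin on the product list block's getProductDetailsHtml, in the spirit of the plugin named above but not the actual Braintree code; anything appended here becomes part of the cached block HTML for every product, so per-product or per-request markup inflates block_html cache entries quickly:

```php
<?php
declare(strict_types=1);

use Magento\Catalog\Block\Product\ListProduct;
use Magento\Catalog\Model\Product;

/**
 * Hypothetical illustration only: an around plugin that injects extra
 * marketing markup for each product rendered on a listing page. The result
 * is stored in the block_html cache, so every variation adds another
 * (potentially large) cache entry.
 */
class ProductDetailsBlockPlugin
{
    public function aroundGetProductDetailsHtml(
        ListProduct $subject,
        callable $proceed,
        Product $product
    ) {
        $html = $proceed($product);

        // Illustrative per-product insert; a real integration might render
        // a child block or third-party widget markup here instead.
        $html .= '<div class="extra-marketing" data-product-id="'
            . (int)$product->getId() . '"></div>';

        return $html;
    }
}
```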
This issue is still present on Magento 2.4.6-p2.
We're also having this issue on 2.4.6-p3 with the modules in question here (PayPal_Braintree, Magento_Csp) already disabled. Normal file cache storage, no Redis, etc. Has anybody found a root cause yet?
Seeing this same issue with 2.4.6-p4, where the Redis cache grows very large: all block_html with many FINAL_PRICE_LIST tags. It does not look like this is fixed yet...
I'm facing the same issue on Magento Open Source 2.4.7-p3.
@hostep @engcom-Hotel should we re-open this one?
@onlinebizsoft: if the code in the latest version of that paypal/braintree module is still causing issues, I'd recommend opening a new issue with detailed steps describing the problem, rather than re-opening this old ticket.
Magento CE 2.4.1
Ubuntu 20.04 LTS
PHP 7.4.3
MySQL 8.0.22 (Amazon RDS)
Elasticsearch 7.6.2
Redis 5.2.1
Varnish 6.2.1
Nginx 1.8
PS: I have over 5,000 products across 2 websites (EU/US). The EU site has 5 store views (EN, FR, IT, DE, ES) and the US site has 1 store view (US), for a total of 6 store views.
During testing, the site was performing well on my staging server. When I finally deployed it to production this week, after setting the mode to production and seeing real traffic, my cache started growing at an alarming rate. Right now it will fill up 100GB in 2 hours. That never happened with Magento 2.3.3 on my old box.
My old box had 30GB, was running fine in production mode with Varnish, and its cache stayed much smaller.
When I run the command du -h /var/cache on my old box, it totals 741MB.
When I run du -h /var/cache on my current box, it grows from 0 to 10GB in 1 minute, and within 2 hours it is at 100GB.
Attached are results of both du command to see file size.
du-output-2.4.1-mage--d.txt
du-output-2.3.3-mage--d.txt