Code change to wp_stream_get_sites causing very large memory usage on clusters with higher number of sites #1270
Thanks for reporting the issue!
I guess you're referring to #1258, which introduced the change, right? @dd32, are you seeing the same issue with high memory usage now that it queries for all sites in the network?

It appears the relevant code is in stream/classes/class-network.php (lines 433 to 441 at 305583d) and the Mercator connector.

I'm wondering why it is actually calling … Do you know if …? A simple fix would be to add a cache wrapper around the generated blog ID to blog name array.
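For what it's worth, here is a minimal sketch of that cache wrapper, assuming the expensive step is building the blog ID to blog name array via get_blog_details() for each site. The function name, cache key, group, and expiry are all hypothetical:

```php
<?php
// Hypothetical cache wrapper around the blog ID => blog name array.
// `stream_blog_labels` / `stream` are made-up cache key and group names.
function stream_get_blog_labels() {
	$labels = wp_cache_get( 'stream_blog_labels', 'stream' );

	if ( false === $labels ) {
		$labels = array();

		foreach ( wp_stream_get_sites() as $site ) {
			$details = get_blog_details( $site->blog_id );

			$labels[ $site->blog_id ] = $details ? $details->blogname : (string) $site->blog_id;
		}

		// Cache for an hour so the per-site lookups run at most once per interval.
		wp_cache_set( 'stream_blog_labels', $labels, 'stream', HOUR_IN_SECONDS );
	}

	return $labels;
}
```

With a persistent object cache backend, this would reduce the per-site option loading to one pass per cache interval instead of one pass per request.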
Thank you very much for the reply! I should add that we do in fact use the Mercator plugin as well, which is probably an important detail.
Bug Report
Expected Behavior
class-connector-blogs->get_context_labels() gathers data from blogs on the multisite using a reasonable amount of memory
Actual Behavior
wp_stream_get_sites() used to call get_sites() with no arguments, which defaults to returning just 100 sites. That behavior changed in the newest release. When get_context_labels() now gets blog details for every site, it loads the autoloaded options for every site on the cluster. wp_is_large_network() is checked, but it returns false for networks with fewer than 10,000 sites.

We have a multisite with about 2,500 sites, and we are seeing memory usage of 500-700 MB. As you know, the default memory limit for PHP in WordPress is 256 MB. We have raised the limit to 1 GB, but even so we sometimes see heartbeats time out: on post edit there is a 30-second limit before a heartbeat is considered timed out, and even with caching, 500 MB takes a fair amount of time to load.

Has this functionality been tested on large clusters? And is it necessary? It only processed the first 100 sites for a long time, so perhaps it is just an extra feature. If it is, could another filter be added to disable it, so that wp_is_large_network() doesn't have to be relied on?
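Until such a filter exists, WordPress core's existing `wp_is_large_network` filter could serve as a stopgap, since Stream reportedly checks wp_is_large_network() before doing the per-site work. A minimal sketch; the 2,000-site threshold is an assumption chosen for illustration, not a recommended value:

```php
<?php
// Stopgap sketch using core's real `wp_is_large_network` filter: lower the
// threshold so a ~2,500-site network counts as "large", and code paths that
// check wp_is_large_network() skip their per-site work.
add_filter(
	'wp_is_large_network',
	function ( $is_large, $using, $count ) {
		if ( 'sites' === $using ) {
			return $count > 2000; // Assumed cutoff for illustration only.
		}
		return $is_large;
	},
	10,
	3
);
```

The trade-off is that this redefines "large network" for everything else that consults wp_is_large_network(), which is why a dedicated Stream filter would be the cleaner fix.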