bug: prometheus metrics cause high cpu usage #7211
Comments
Do you mean the method
I noticed that the Prometheus-related code has changed. My APISIX version is 2.14.1.
We have already done it in the Enterprise version and received good feedback.
Can this plan also solve the earlier problem #5755?
We do not have such a plan right now.
Same question here. Is there anything we can do? Please don't close it.
I noticed that among the Prometheus metrics, the latency metrics account for a very large amount of data. These metrics are not very important to me, so I removed them in exporter.lua and mounted the modified file via a ConfigMap. Memory usage is now greatly reduced, and so is the CPU usage during collection. Looking forward to better solutions.
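One way to apply such a patch in Kubernetes is to overlay the stock exporter file with a ConfigMap, as described above. A minimal sketch, assuming the default APISIX image layout (the file path, ConfigMap name, and container name are illustrative and may differ across versions):

```yaml
# ConfigMap holding the patched exporter (latency histogram removed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: apisix-patched-exporter
data:
  exporter.lua: |
    -- patched copy of apisix/plugins/prometheus/exporter.lua
    -- with the latency histogram registration removed
---
# In the APISIX Deployment, mount the file over the stock one.
# subPath mounts only this file, leaving the rest of the plugin
# directory from the image intact.
spec:
  template:
    spec:
      containers:
        - name: apisix
          volumeMounts:
            - name: patched-exporter
              mountPath: /usr/local/apisix/apisix/plugins/prometheus/exporter.lua
              subPath: exporter.lua
      volumes:
        - name: patched-exporter
          configMap:
            name: apisix-patched-exporter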
Or we could add some configuration options to the Prometheus plugin so that users can select which metrics to collect.
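No such option exists in the plugin at this point; a hypothetical shape for it in `conf/config.yaml` might look like the following (the `metrics` allowlist key and the metric names are invented for illustration only):

```yaml
plugin_attr:
  prometheus:
    # hypothetical allowlist: export only these metric families,
    # skipping the high-cardinality latency histogram
    metrics:
      - http_status
      - bandwidth
```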
Similar to #4273?
@crazyMonkey1995
Let's discuss it at #7353. Closing this so that all discussion happens in one place.
Current Behavior
At present we have the prometheus plugin enabled. Since we have many routes (4000+), the Lua shared dict for prometheus is sized at 512 MB (quite large). As the amount of data grows, prometheus.export_metrics() drives the CPU usage of one worker process very high, and requests handled by that worker are severely affected.
Maybe there is some way to separate the worker process of the internal server (such as admin and prometheus) from the actual business server?
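APISIX can already serve the metrics endpoint on a dedicated address instead of the business listeners, which at least keeps scrape traffic off the public ports; whether metric collection itself runs in a separate (privileged) process depends on the APISIX version, so this is a mitigation rather than full isolation. A sketch in `conf/config.yaml` (the port is arbitrary):

```yaml
plugin_attr:
  prometheus:
    # expose /apisix/prometheus/metrics on its own listener
    # instead of the business ports
    export_addr:
      ip: 127.0.0.1
      port: 9091
```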
Expected Behavior
Error Logs
Steps to Reproduce
Environment
- APISIX version (run `apisix version`): 2.14.1
- OS (run `uname -a`):
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
- Server info (run `curl http://127.0.0.1:9090/v1/server_info`):
- LuaRocks version (run `luarocks --version`):