
feat: The Prometheus plugin can now allow users to configure the DEFAULT_BUCKETS #9653

Closed
wklken opened this issue Jun 13, 2023 · 3 comments · Fixed by #9673

wklken commented Jun 13, 2023

Description

-- Default set of latency buckets, 1ms to 60s:
local DEFAULT_BUCKETS = {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000, 30000, 60000}

There are 3 latency types: apisix/request/upstream.
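
For context, here is a simplified sketch of how these buckets feed the latency histogram, based on the nginx-lua-prometheus histogram() API; the exact metric name and labels in the real exporter may differ:

-- Simplified sketch; metric names and labels may differ from the real exporter.
local prometheus = require("prometheus").init("prometheus-metrics")

local DEFAULT_BUCKETS = {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000,
                         2000, 5000, 10000, 30000, 60000}

-- One histogram with a "type" label (apisix/request/upstream); every bucket
-- becomes a separate time series per label combination.
local latency = prometheus:histogram("http_latency",
    "HTTP request latency in milliseconds per service in APISIX",
    {"type", "route"}, DEFAULT_BUCKETS)

-- one observation per latency type per request, e.g.:
-- latency:observe(request_time_ms, {"request", route_id})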

If we have 10,000 routes, the maximum number of metrics can be (15+1) * 3 * 10000 = 480,000 series, which makes for a huge response. (We deploy one route to multiple services, so in practice there will be far more series than that.)

In production we may not need such fine-grained latency metrics and could instead set the buckets to {50, 100, 200, 500, 1000, 5000}, which gives only (6+1) * 3 * 10000 = 210,000 series.
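
To make the arithmetic explicit: each histogram contributes one time series per bucket plus one extra series (the "+1" in the counts above), per latency type, per route. A hypothetical helper, not part of APISIX, that reproduces the numbers above:

-- Hypothetical back-of-the-envelope helper (not part of APISIX).
local function series_count(num_buckets, num_types, num_routes)
    -- the "+ 1" matches the extra series per histogram counted above
    return (num_buckets + 1) * num_types * num_routes
end

print(series_count(15, 3, 10000)) -- 480000 with the 15 default buckets
print(series_count(6, 3, 10000))  -- 210000 with {50, 100, 200, 500, 1000, 5000}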

Or, if we do not need those default metrics at all, we can set the buckets to {100000} to make the response small enough (we register our own metrics).

Is it possible to set the buckets via plugin_attr? Or could we add some toggle for those default metrics? For example:

plugin_attr:
  prometheus:
    default_buckets:
      - 50
      - 100
      - 200
      - 500
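
A minimal sketch of how the exporter could honor such a setting, assuming it falls back to the hard-coded defaults when the attribute is absent (the actual change in #9673 may look different):

-- Sketch only; the real implementation may be structured differently.
local plugin = require("apisix.plugin")

local DEFAULT_BUCKETS = {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000,
                         2000, 5000, 10000, 30000, 60000}

local function get_buckets()
    local attr = plugin.plugin_attr("prometheus")
    if attr and attr.default_buckets then
        return attr.default_buckets
    end
    return DEFAULT_BUCKETS
end

-- get_buckets() would then be passed as the bucket list when the latency
-- histogram is created.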
@shreemaan-abhishek
Contributor

@kingluo what do you think?

@kingluo
Contributor

kingluo commented Jun 14, 2023

Yes, I think it's better to make this configurable.

@jiangfucheng
Member

Hi, I want to try to implement this feature, could you assign it to me?
