Add kubernetes apiserver metricset #7059
Conversation
```go
}.Build()
)

func MetricSetBuilder(mapping *MetricsMapping) func(base mb.BaseMetricSet) (mb.MetricSet, error) {
```
exported function MetricSetBuilder should have comment or be unexported
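For reference, the linter just wants a doc comment starting with the function name; a minimal sketch (wording is illustrative, not taken from the PR):

```go
// MetricSetBuilder returns a factory that builds a Prometheus-based MetricSet
// for the given metrics mapping. (Doc comment wording is illustrative.)
func MetricSetBuilder(mapping *MetricsMapping) func(base mb.BaseMetricSet) (mb.MetricSet, error) {
	// ... existing implementation unchanged ...
}
```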
```go
prometheus.TestCases{
	{
		MetricsFile:  "./_meta/test/metrics",
		ExpectedFile: "./_meta/test/metrics.expected",
```
This reminds me a lot of what @simitt was doing here with the approvals: https://github.com/elastic/apm-server/blob/master/tests/approvals.go I wanted to look into it to also do some comparison on json docs for Elasticsearch. Perhaps we can sync up on this.
That's the approvals script that you can run from the command line: https://github.com/elastic/apm-server/blob/master/tests/scripts/approvals.go
Would be great to put this in its own small library for more generic usage.
Agreed, we can discuss this and come up with a standardized way to deal with expected output in these tests. Also pinging @jsoriano here, as he is working on some ideas to improve our testing.
The ideas I'm working on are more focused on improving and scaling integration tests, but abstractions like these are more than welcome of course :)
Force-pushed from b6ef84d to a64eb74
jenkins, retest this please
Pretty cool abstraction to collect prometheus metrics, surely lots of things can be monitored this way.
```go
events, err := f.Fetch()
assert.NoError(t, err)

if update {
```
What about using a flag for `update`, as we do with `go test -data`?
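For illustration, a minimal sketch of that flag pattern in a golden-file test. File layout and names are assumptions, not code from this PR; the point is that a flag defined in test-only code regenerates the expected file on demand:

```go
package apiserver_test

import (
	"flag"
	"io/ioutil"
	"testing"
)

// Defined in a _test.go file, so the flag only exists in the test binary.
var update = flag.Bool("update", false, "update the .expected golden files")

func checkGolden(t *testing.T, path string, got []byte) {
	if *update {
		// Regenerate the golden file from the current output.
		if err := ioutil.WriteFile(path, got, 0644); err != nil {
			t.Fatal(err)
		}
		return
	}
	want, err := ioutil.ReadFile(path)
	if err != nil {
		t.Fatal(err)
	}
	if string(got) != string(want) {
		t.Errorf("output differs from %s, run the test with -update to regenerate it", path)
	}
}
```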
```go
if err := mb.Registry.AddMetricSet("kubernetes", "apiserver",
	prometheus.MetricSetBuilder(mapping), prometheus.HostParser); err != nil {
	panic(err)
}
```
Use `mb.Registry.MustAddMetricSet()` here?
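For illustration, the suggestion would collapse the registration into something like this (a sketch only; the exact `MustAddMetricSet` signature and whether the host parser is passed via `mb.WithHostParser` depends on the registry version in use):

```go
func init() {
	// MustAddMetricSet panics on registration errors itself, so the
	// explicit error check and panic(err) above are no longer needed.
	mb.Registry.MustAddMetricSet("kubernetes", "apiserver",
		prometheus.MetricSetBuilder(mapping),
		mb.WithHostParser(prometheus.HostParser),
	)
}
```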
@jsoriano I've incorporated your suggestions, thanks!
jenkins, retest this please
The main thing I'm worried about is that a testing flag shows up in the production binary. Haven't tested if this is the case.
Seems with `api-server` we add one more thing to the kubernetes module which seems to be another service?
```go
	DefaultPath: defaultPath,
}.Build()

expectedFlag = flag.Bool("update_expected", false, "Update prometheus expected files")
```
Is this only used for testing? If yes, should it really be registered here?
You are right, if it is defined here it will appear in the production binary. Testing code should be in a submodule to avoid that.
As we use cobra, common flags are not really "leaked" to the binary. I'm ok with moving this though.
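A sketch of the layout being discussed, assuming the flag moves into the testing-only helper package (the `ptest` package added later in this PR); since that package is only imported from `*_test.go` files, the flag is compiled into test binaries but not into the beat itself. The path and helper name are illustrative:

```go
// helper/prometheus/ptest/ptest.go (testing-only helper, illustrative path)
package ptest

import "flag"

// Registered here rather than in the prometheus helper, so the flag never
// reaches the production binary: ptest is only imported from test files.
var expectedFlag = flag.Bool("update_expected", false, "Update prometheus expected files")

// UpdateExpected reports whether tests should rewrite their expected files.
// (Helper name is illustrative.)
func UpdateExpected() bool { return *expectedFlag }
```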
```go
	mapping *MetricsMapping
}

func (m *prometheusMetricSet) Fetch() ([]common.MapStr, error) {
```
Can happen in a later PR, but it would be great to also use the ReporterV2 interface here.
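For reference, a hedged sketch of what a ReporterV2-based Fetch could look like here; the `m.prometheus` field and the `GetProcessedMetrics` call are assumptions about the helper's internals, only the reporting pattern is the point:

```go
// Fetch pushes events through the ReporterV2 interface instead of returning
// a []common.MapStr, which also allows reporting per-fetch errors.
func (m *prometheusMetricSet) Fetch(r mb.ReporterV2) {
	events, err := m.prometheus.GetProcessedMetrics(m.mapping) // assumed helper call
	if err != nil {
		r.Error(err)
		return
	}
	for _, event := range events {
		r.Event(mb.Event{MetricSetFields: event})
	}
}
```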
```go
// TestMetricSet goes over the given TestCases and ensures that source Prometheus metrics gets converted into the expected
// events when passed by the given metricset.
// If -update_expected flag is passed, the expected JSON file will be updated with the result
func TestMetricSet(t *testing.T, module, metricset string, cases TestCases) {
```
I'm thinking we should separate the testing parts and the module parts.
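For context, this is roughly what a metricset test looks like when using the helper (a sketch; the import path for the testing helper is an assumption, based on the `ptest` package added later in this PR):

```go
package apiserver_test

import (
	"testing"

	// Import path is illustrative; in this PR the helper lives next to the
	// prometheus helper as the "ptest" package.
	"github.com/elastic/beats/metricbeat/helper/prometheus/ptest"
)

func TestMetricSet(t *testing.T) {
	ptest.TestMetricSet(t, "kubernetes", "apiserver",
		ptest.TestCases{
			{
				MetricsFile:  "./_meta/test/metrics",
				ExpectedFile: "./_meta/test/metrics.expected",
			},
		})
}
```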
@@ -0,0 +1,27 @@
```go
package apiserver
```
That is probably the most compact metricset we have. I like it.
I converted the helper to use
Force-pushed from 3594513 to 52b2c0b
jenkins, test this again
@exekias Will need a rebase and a
@@ -0,0 +1,106 @@
```go
package ptest
```
I wonder if we should always call these packages `testing`, as we do in some other places, or if this is always an issue because there is a `testing` package.
Note: Just for discussion, no change in the PR needed.
I had the same thought, but as this is always used along with the Go `testing` package, a clash is guaranteed.
This already happens with the `mb/testing` package: we end up renaming it on import to `mbtest`, with the annoyance of breaking automatic imports.
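For anyone following along, that rename is the one-liner at the top of most metricset tests:

```go
import mbtest "github.com/elastic/beats/metricbeat/mb/testing"
```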
There is another example in the latest changes on the RabbitMQ module, all modules end up being renamed, but I think this is not so bad (https://github.com/elastic/beats/pull/7106/files#diff-7d67ada6eac5ae4109019e586c93d3a1R8).
I'm slightly in favour of having the same name for all test modules, though not sure if `testing` is the best option as it collides with the standard library.
For the module testing packages I think all must have the same name or structure, as we need to somehow modify the imports script to skip them. In general, what I worry about with directories in a module is that they can always conflict with metricset names.
This component serves the Kubernetes API, which is used both internally by other components and externally to query or modify object state (for instance by using `kubectl`).
Summaries are not aggregatable, and while we don't have support to display histograms, I still prefer to keep them over summaries.
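A small sketch of why that matters, with made-up numbers: histogram buckets are plain counters that can be summed across API server instances, while summary quantiles are computed per instance and cannot be combined afterwards:

```go
package main

import "fmt"

func main() {
	// Cumulative request-latency bucket counts from two apiserver instances
	// (upper bound in microseconds -> count; numbers are made up).
	instanceA := map[float64]uint64{125000: 90, 250000: 98, 500000: 100}
	instanceB := map[float64]uint64{125000: 40, 250000: 70, 500000: 80}

	// Histogram buckets are counters, so a cluster-wide view is a plain sum.
	total := map[float64]uint64{}
	for le, c := range instanceA {
		total[le] += c
	}
	for le, c := range instanceB {
		total[le] += c
	}
	fmt.Println(total) // map[125000:130 250000:168 500000:180]

	// A summary instead exposes precomputed quantiles (e.g. each instance's
	// p99); averaging two p99 values does not yield the cluster-wide p99,
	// which is why summaries don't aggregate.
}
```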
jenkins, retest this please
jenkins, retest this please
This new metricset gathers metrics from Kubernetes API servers, giving visibility into the number of requests and their latency. This change adds support for the `summary` and `histogram` metric types in our Prometheus helper; histogram is used to retrieve latency info. I also simplified the process of creating a new metricset based on this helper, to the point that you only need [this](https://github.com/exekias/beats/blob/5f4568208ac273d9b759343b22f783184951a477/metricbeat/module/kubernetes/apiserver/apiserver.go) to create it and [this](https://github.com/exekias/beats/blob/5f4568208ac273d9b759343b22f783184951a477/metricbeat/module/kubernetes/apiserver/apiserver_test.go) to test it.
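As a rough illustration of how small that first file is, here is a sketch of a metricset built on the helper. The registration call matches the one reviewed above; the metric names, field names, and the `Metric`/`KeyLabel` mapping constructors are illustrative assumptions, not the actual mapping from the PR:

```go
package apiserver

import (
	"github.com/elastic/beats/metricbeat/helper/prometheus"
	"github.com/elastic/beats/metricbeat/mb"
)

func init() {
	// The whole metricset is a mapping from Prometheus metrics and labels
	// to event fields, plus the registration below.
	mapping := &prometheus.MetricsMapping{
		Metrics: map[string]prometheus.MetricMap{
			// Metric and field names are illustrative.
			"apiserver_request_count":     prometheus.Metric("request.count"),
			"apiserver_request_latencies": prometheus.Metric("request.latency"),
		},
		Labels: map[string]prometheus.LabelMap{
			"verb": prometheus.KeyLabel("request.verb"),
			"code": prometheus.KeyLabel("request.code"),
		},
	}

	if err := mb.Registry.AddMetricSet("kubernetes", "apiserver",
		prometheus.MetricSetBuilder(mapping), prometheus.HostParser); err != nil {
		panic(err)
	}
}
```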