
Add kubernetes apiserver metricset #7059

Merged (15 commits, May 17, 2018)
Conversation

@exekias (Contributor) commented May 9, 2018

This new metricset gathers metrics from Kubernetes API servers, giving visibility on number of requests and latency.

This change adds support for the `summary` and `histogram` metric types to our Prometheus helper; the histogram type is used to retrieve latency info.

I also simplified the process of creating a new metricset based on this helper, to the point you only need [this](https://github.com/exekias/beats/blob/5f4568208ac273d9b759343b22f783184951a477/metricbeat/module/kubernetes/apiserver/apiserver.go) to create it and [this](https://github.com/exekias/beats/blob/5f4568208ac273d9b759343b22f783184951a477/metricbeat/module/kubernetes/apiserver/apiserver_test.go) to test it.


@exekias added the enhancement, in progress (Pull request is currently in progress.), review, and containers (Related to containers use case) labels on May 9, 2018
```go
}.Build()
)

func MetricSetBuilder(mapping *MetricsMapping) func(base mb.BaseMetricSet) (mb.MetricSet, error) {
```

exported function MetricSetBuilder should have comment or be unexported


```go
prometheus.TestCases{
	{
		MetricsFile:  "./_meta/test/metrics",
		ExpectedFile: "./_meta/test/metrics.expected",
```

Member:

This reminds me a lot of what @simitt was doing here with the approvals: https://github.com/elastic/apm-server/blob/master/tests/approvals.go. I wanted to look into it to also do some comparison on JSON docs for Elasticsearch. Perhaps we can sync up on this.

Contributor:

That's the approvals script that you can run from the command line: https://github.com/elastic/apm-server/blob/master/tests/scripts/approvals.go

Would be great to put this in its own small library for more generic usage.

Contributor Author:

Agreed, we can discuss this and come up with a standardized way to deal with expected output in these tests. Also pinging @jsoriano here, as he is working on some ideas to improve our testing.

Member:

The ideas I'm working on are more focused on improving and scaling integration tests, but abstractions like these are more than welcome, of course :)

@exekias force-pushed the k8s-apiserver branch 2 times, most recently from b6ef84d to a64eb74 on May 14, 2018
@exekias removed the in progress (Pull request is currently in progress.) label on May 14, 2018
@exekias (Contributor Author) commented May 14, 2018

jenkins, retest this please

@jsoriano (Member) left a comment:

Pretty cool abstraction to collect Prometheus metrics; surely lots of things can be monitored this way.

```go
events, err := f.Fetch()
assert.NoError(t, err)

if update {
```

Member:

What about using a flag for update, as we do with `go test -data`?


```go
if err := mb.Registry.AddMetricSet("kubernetes", "apiserver",
	prometheus.MetricSetBuilder(mapping), prometheus.HostParser); err != nil {
	panic(err)
}
```

@jsoriano (Member) commented May 15, 2018:

Use mb.Registry.MustAddMetricSet() here?

@exekias (Contributor Author) commented May 15, 2018

@jsoriano I've incorporated your suggestions, thanks!

@exekias (Contributor Author) commented May 15, 2018

jenkins, retest this please

@ruflin (Member) left a comment:

The main thing I'm worried about is that a testing flag shows up in the production binary. I haven't tested if this is the case.

It seems that with api-server we add one more thing to the kubernetes module, which seems to be another service?

```go
	DefaultPath: defaultPath,
}.Build()

expectedFlag = flag.Bool("update_expected", false, "Update prometheus expected files")
```

Member:

Is this only used for testing? If yes, should it really be registered here?

Member:

You are right; if it is defined here it will appear in the production binary. Testing code should be in a submodule to avoid that.

Contributor Author:

As we use cobra, common flags are not really "leaked" to the binary. I'm OK with moving this, though.

```go
	mapping *MetricsMapping
}

func (m *prometheusMetricSet) Fetch() ([]common.MapStr, error) {
```

Member:

This can happen in a later PR, but it would be great to also use the ReporterV2 interface here.

```go
// TestMetricSet goes over the given TestCases and ensures that source Prometheus metrics gets converted into the expected
// events when passed by the given metricset.
// If -update_expected flag is passed, the expected JSON file will be updated with the result
func TestMetricSet(t *testing.T, module, metricset string, cases TestCases) {
```

Member:

I'm thinking we should separate the testing parts and the module parts.

```
@@ -0,0 +1,27 @@
package apiserver
```

Member:

That is probably the most compact metricset we have. I like it.

@exekias (Contributor Author) commented May 15, 2018

I converted the helper to use ReporterV2

@exekias force-pushed the k8s-apiserver branch 2 times, most recently from 3594513 to 52b2c0b on May 15, 2018
@exekias (Contributor Author) commented May 15, 2018

jenkins, test this again

@ruflin (Member) commented May 16, 2018

@exekias This will need a rebase and a `make update`.

```
@@ -0,0 +1,106 @@
package ptest
```

Member:

I wonder if we should always call these packages testing, as we do in some other places, or if that is always an issue because there is a standard testing package.

Note: just for discussion, no change needed in the PR.

Contributor Author:

I had the same thought, but as this is always used along with the Go testing package, a clash is guaranteed.

This already happens with the mb/testing package; we end up renaming it on import to mbtest, with the annoyance of breaking automatic imports.

Member:

There is another example in the latest changes on the RabbitMQ module; all modules end up being renamed, but I think this is not so bad (https://github.com/elastic/beats/pull/7106/files#diff-7d67ada6eac5ae4109019e586c93d3a1R8).

I'm slightly in favour of having the same name for all test modules, though I'm not sure testing is the best option as it collides with the standard library.

Member:

For the module testing packages I think all must have the same name or structure, as we need to somehow modify the imports script to skip them. In general, what I worry about with directories in modules is that they can always conflict with metricset names.

Carlos Pérez-Aradros Herce added 5 commits on May 16, 2018:
This component serves the Kubernetes API, used both internally by other
components and externally to query/modify objects state (for instance by
using kubectl)
@exekias (Contributor Author) commented May 16, 2018

jenkins, retest this please

@exekias (Contributor Author) commented May 17, 2018

jenkins, retest this please

@ruflin merged commit b1f1b5f into elastic:master on May 17, 2018
stevea78 pushed a commit to stevea78/beats that referenced this pull request May 20, 2018
This new metricset gathers metrics from Kubernetes API servers, giving visibility on number of requests and latency.

This change adds support for `summary` and `histogram` metric type to our Prometheus helper, histogram is used to retrieve latency info.

I also simplified the process of creating a new metricset based on this helper, to the point you only need [this](https://github.com/exekias/beats/blob/5f4568208ac273d9b759343b22f783184951a477/metricbeat/module/kubernetes/apiserver/apiserver.go) to create it and [this](https://github.com/exekias/beats/blob/5f4568208ac273d9b759343b22f783184951a477/metricbeat/module/kubernetes/apiserver/apiserver_test.go) to test it.
Labels: containers (Related to containers use case), enhancement, review
5 participants