Are there any cardinality concerns with always emitting CSV metrics?
I think the metric uses a good, qualified name, which is always unique for a given CSV.
Agreed, that doesn't look crazy to me, but poking around that metrics package it looks like we're emitting other metrics besides csv_succeeded. I'm trying to wrap my head around how all those CSV-related metrics, emitted on a per-step basis, could lead to problems affecting the core monitoring stack.
Ah, I found #1099 - @awgreene any idea on whether we can safely emit metrics on a per-step basis now?
Another approach to fixing this bug could be to emit the metric for all CSVs during pod startup and then update it only when a change happens.
@timflannagan I don't think there's any cardinality concern here. `csv_succeeded` is a Prometheus gauge, and within `EmitCSVMetrics` we always first delete the old metric for the CSV being synced, then emit a new metric and set the gauge value to 1 or 0 (succeeded / did not succeed). Even if we were not deleting the old metric, IIRC metric points for a unique set of label values are only emitted once, i.e. they're always unique data points in the set of emitted metrics, which is what @josefkarasek clarified in the first comment.

However, I'm not convinced this actually solves the problem. @josefkarasek, the original problem was that we were only edge-triggering this metric, i.e. whenever the controller syncs a ClusterServiceVersion (`syncClusterServiceVersion` holds the logic for when that happens), and that happens only when there's a change in the CSV object on the cluster. But we need some way to level-drive this metric too, which is what the first part of your last comment describes.
I'm assuming that all CSVs are queued up during pod start and reconciled, so my assumption was that this approach is edge- and level-driven at the same time. From what you're saying, it sounds like that assumption doesn't hold.
Although CSVs are queued up during pod start, that is still edge-triggered; the trigger here is the queuing of the CSV. True level-driven behavior is when you query the state of the cluster and attempt to reconcile it with the desired state. With edge triggers there's always a chance we'll miss an event, so querying for existing CSVs and emitting metrics for them on pod restart is the most foolproof way to solve this problem.