[kube-prometheus-stack] Allow specifying Additional Labels per Alert #3014
Comments
@zeritti Can you help me on this? I have the knowledge to change the values file and rules templates manually, but it seems that all rules template files are generated from this script, and I'm horrible at Python. I've been trying to change the script; so far I reached this point:

```python
def add_custom_labels(rules, indent=4):
    """Add if wrapper for additional rules labels"""
    rules_group = re.findall(r'(?<=name: ).*', rules)
    alerts_names = re.findall(r'(?<=- alert: ).*', rules)
    separator = " " * indent + "- alert:.*"
    alerts_positions = re.finditer(separator, rules)
    alert = -1
    for alert_position in alerts_positions:
        rule_condition = f'{{- if {condition_map[rules_group[0]]}.{alerts_names[alert]}.additionalRuleLabels }}\n{{ toYaml {condition_map[rules_group[0]]}.{alerts_names[alert]}.additionalRuleLabels | indent 8 }}\n{{- end }}'
        rule_condition_len = len(rule_condition) + 1
        # add rule_condition at the end of the alert block
        if alert >= 0:
            index = alert_position.start() + rule_condition_len * alert - 1
            rules = rules[:index] + "\n" + rule_condition + rules[index:]
        alert += 1
    # add rule_condition at the end of the last alert
    if alert >= 0:
        index = len(rules) - 1
        rules = rules[:index] + "\n" + rule_condition + rules[index:]
    return rules
```

But this always generates broken templates. Any Python expert help here would be great.
Hello Community :D
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
remove stale
You may want to look at #1231 (comment), which runs a query on the namespace hosting the service. I didn't realize you could do that either, but it solved a similar problem for me.
Is your feature request related to a problem?
Issue:
Currently we are only able to add additional labels to the default rules using the `defaultRules.additionalRuleLabels` value. This value is applied to every single alert under helm-charts/charts/kube-prometheus-stack/templates/prometheus/rules-1.14/.
In our setup we heavily rely on an `instance` label that identifies the target in the alert receivers like Slack, PagerDuty, etc. Since each alert addresses different objects/targets, and we want to set the `instance` label to identify those targets, we can't do that by adding a common `defaultRules.additionalRuleLabels` value, since the labels will of course change depending on the alerts' expressions. For example,
`instance: '{{ $labels.namespace }}/{{ $labels.deployment }}'`
will correctly identify the alerts that have the "namespace" and "deployment" labels, but alerts regarding cronjobs or even statefulsets would break and/or appear empty.
Describe the solution you'd like.
We would need the possibility to edit the `additionalRuleLabels` value per alert, so that we can have more fine-grained control over which labels we add to which alerts. Something similar to the following values.yaml structure:
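The original values.yaml snippet did not survive in this copy; one hypothetical shape for such a per-alert override (the alert names and nesting below are illustrative, not the chart's actual schema) could look like:

```yaml
defaultRules:
  additionalRuleLabels:
    team: platform            # still applied to every alert, as today
  rules:
    # hypothetical per-alert overrides
    KubePodCrashLooping:
      additionalRuleLabels:
        instance: '{{ $labels.namespace }}/{{ $labels.pod }}'
    KubeDeploymentReplicasMismatch:
      additionalRuleLabels:
        instance: '{{ $labels.namespace }}/{{ $labels.deployment }}'
```

Each alert would then carry an `instance` label built only from the labels its own expression actually produces.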
Describe alternatives you've considered.
Alternatively, we could just disable all default rules and manage them ourselves, but this would defeat the purpose of using kube-prometheus-stack: automatically getting community-approved patches, recommendations and updates.
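For reference, the alternative described above maps to values along these lines (group names vary by chart version, so treat this as a sketch rather than the exact schema):

```yaml
defaultRules:
  create: true
  rules:
    kubernetesApps: false   # opt out of one bundled rule group,
                            # then ship and maintain your own rules for it
```

Setting `defaultRules.create: false` instead would drop all bundled rules, which is exactly the maintenance burden the request is trying to avoid.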
Additional context.
We are currently migrating from a "legacy" Prometheus/Alertmanager/Grafana stack to kube-prometheus-stack, and this change to the overall helm templates would really make a difference in that migration.