
fix(deps): update helm release k8s-monitoring to v1.6.19 #985

Merged · 2 commits · Jan 14, 2025

Conversation

renovate[bot] (Contributor) commented Jan 14, 2025

This PR contains the following updates:

Package          Update   Change
k8s-monitoring   patch    1.6.18 -> 1.6.19
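For reference, the same patch bump can be applied manually with Helm. This is a sketch, not the repository's actual workflow: the repo alias `grafana` and release name `my-release` are assumptions, and the chart is assumed to come from the upstream Grafana Helm repository.

```shell
# Add the upstream repo (alias "grafana" is an assumption) and refresh the index
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Upgrade an existing release (name "my-release" is hypothetical) to the
# pinned patch version, keeping the values currently set on the release
helm upgrade my-release grafana/k8s-monitoring \
  --version 1.6.19 \
  --reuse-values
```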

Release Notes

grafana/helm-charts (k8s-monitoring)

v1.6.19

Compare Source

A Helm chart for gathering, scraping, and forwarding Kubernetes telemetry data to a Grafana Stack.

Source commit: grafana/k8s-monitoring-helm@039c0d7

Tag on source: https://github.com/grafana/k8s-monitoring-helm/releases/tag/v1.6.19


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box
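The Configuration section above (no schedule, automerge disabled, rebase when conflicted) corresponds to Renovate settings along these lines. A minimal sketch of a `renovate.json` that would yield that behavior; the preset and package rule are illustrative assumptions, not this repository's actual config:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "automerge": false,
  "rebaseWhen": "conflicted",
  "packageRules": [
    {
      "matchDatasources": ["helm"],
      "matchPackageNames": ["k8s-monitoring"],
      "matchUpdateTypes": ["patch"]
    }
  ]
}
```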

This PR was generated by Mend Renovate. View the repository job log.


Changes Default Values


Changes Rendered Chart
diff -U 4 -r out/target/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml out/pr/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml
--- out/target/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:58:16.913912188 +0000
+++ out/pr/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:57:37.617531233 +0000
@@ -670,9 +670,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
         action = "keep"
       }
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
@@ -960,5 +960,5 @@
     }
   k8s-monitoring-build-info-metric.prom: |
     # HELP grafana_kubernetes_monitoring_build_info A metric to report the version of the Kubernetes Monitoring Helm chart as well as a summary of enabled features
     # TYPE grafana_kubernetes_monitoring_build_info gauge
-    grafana_kubernetes_monitoring_build_info{version="1.6.18", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
+    grafana_kubernetes_monitoring_build_info{version="1.6.19", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
diff -U 4 -r out/target/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml out/pr/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml
--- out/target/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:58:16.921912263 +0000
+++ out/pr/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:57:37.624531299 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
@@ -679,9 +679,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
         action = "keep"
       }
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
@@ -1182,9 +1182,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
diff -U 4 -r out/target/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml out/pr/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml
--- out/target/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:58:16.921912263 +0000
+++ out/pr/k8s-monitoring/values-demo-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:57:37.625531308 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
     "helm.sh/hook-weight": "-1"
@@ -70,9 +70,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -81,9 +81,9 @@
   nodeSelector:
         kubernetes.io/os: linux
   containers:
     - name: config-analysis
-      image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+      image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
       command: [/etc/bin/config-analysis.sh]
       env:
         - name: ALLOY_HOST
           value: release-name-alloy.default.svc:12345
@@ -97,9 +97,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -113,16 +113,16 @@
       namespace: default
       labels:
         app.kubernetes.io/managed-by: "Helm"
         app.kubernetes.io/instance: "release-name"
-        helm.sh/chart: "k8s-monitoring-1.6.18"
+        helm.sh/chart: "k8s-monitoring-1.6.19"
     spec:
       restartPolicy: Never
       nodeSelector:
         kubernetes.io/os: linux
       containers:
         - name: query-test
-          image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+          image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
           command: ["bash", "-c", "/etc/bin/query-test.sh /etc/test/testQueries.json"]
           volumeMounts:
             - name: test-files
               mountPath: /etc/test
diff -U 4 -r out/target/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml out/pr/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml
--- out/target/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:58:16.418907498 +0000
+++ out/pr/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:57:37.096526306 +0000
@@ -670,9 +670,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created"
         action = "keep"
       }
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
@@ -936,5 +936,5 @@
     }
   k8s-monitoring-build-info-metric.prom: |
     # HELP grafana_kubernetes_monitoring_build_info A metric to report the version of the Kubernetes Monitoring Helm chart as well as a summary of enabled features
     # TYPE grafana_kubernetes_monitoring_build_info gauge
-    grafana_kubernetes_monitoring_build_info{version="1.6.18", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
+    grafana_kubernetes_monitoring_build_info{version="1.6.19", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
diff -U 4 -r out/target/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml out/pr/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml
--- out/target/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:58:16.424907555 +0000
+++ out/pr/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:57:37.102526362 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
@@ -679,9 +679,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created"
         action = "keep"
       }
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
@@ -1158,9 +1158,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
diff -U 4 -r out/target/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml out/pr/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml
--- out/target/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:58:16.424907555 +0000
+++ out/pr/k8s-monitoring/values-k3d.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:57:37.102526362 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
     "helm.sh/hook-weight": "-1"
@@ -70,9 +70,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -81,9 +81,9 @@
   nodeSelector:
         kubernetes.io/os: linux
   containers:
     - name: config-analysis
-      image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+      image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
       command: [/etc/bin/config-analysis.sh]
       env:
         - name: ALLOY_HOST
           value: release-name-alloy.default.svc:12345
@@ -97,9 +97,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -113,16 +113,16 @@
       namespace: default
       labels:
         app.kubernetes.io/managed-by: "Helm"
         app.kubernetes.io/instance: "release-name"
-        helm.sh/chart: "k8s-monitoring-1.6.18"
+        helm.sh/chart: "k8s-monitoring-1.6.19"
     spec:
       restartPolicy: Never
       nodeSelector:
         kubernetes.io/os: linux
       containers:
         - name: query-test
-          image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+          image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
           command: ["bash", "-c", "/etc/bin/query-test.sh /etc/test/testQueries.json"]
           volumeMounts:
             - name: test-files
               mountPath: /etc/test
diff -U 4 -r out/target/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml out/pr/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml
--- out/target/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:58:17.922921748 +0000
+++ out/pr/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:57:38.610540853 +0000
@@ -670,9 +670,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
         action = "keep"
       }
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
@@ -960,5 +960,5 @@
     }
   k8s-monitoring-build-info-metric.prom: |
     # HELP grafana_kubernetes_monitoring_build_info A metric to report the version of the Kubernetes Monitoring Helm chart as well as a summary of enabled features
     # TYPE grafana_kubernetes_monitoring_build_info gauge
-    grafana_kubernetes_monitoring_build_info{version="1.6.18", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
+    grafana_kubernetes_monitoring_build_info{version="1.6.19", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
diff -U 4 -r out/target/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml out/pr/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml
--- out/target/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:58:17.929921814 +0000
+++ out/pr/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:57:38.616540911 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
@@ -679,9 +679,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
         action = "keep"
       }
       forward_to = [prometheus.relabel.metrics_service.receiver]
     }
@@ -1182,9 +1182,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
diff -U 4 -r out/target/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml out/pr/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml
--- out/target/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:58:17.929921814 +0000
+++ out/pr/k8s-monitoring/values-metalstack.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:57:38.616540911 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
     "helm.sh/hook-weight": "-1"
@@ -70,9 +70,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -81,9 +81,9 @@
   nodeSelector:
         kubernetes.io/os: linux
   containers:
     - name: config-analysis
-      image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+      image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
       command: [/etc/bin/config-analysis.sh]
       env:
         - name: ALLOY_HOST
           value: release-name-alloy.default.svc:12345
@@ -97,9 +97,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -113,16 +113,16 @@
       namespace: default
       labels:
         app.kubernetes.io/managed-by: "Helm"
         app.kubernetes.io/instance: "release-name"
-        helm.sh/chart: "k8s-monitoring-1.6.18"
+        helm.sh/chart: "k8s-monitoring-1.6.19"
     spec:
       restartPolicy: Never
       nodeSelector:
         kubernetes.io/os: linux
       containers:
         - name: query-test
-          image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+          image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
           command: ["bash", "-c", "/etc/bin/query-test.sh /etc/test/testQueries.json"]
           volumeMounts:
             - name: test-files
               mountPath: /etc/test
diff -U 4 -r out/target/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml out/pr/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml
--- out/target/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:58:17.422917010 +0000
+++ out/pr/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/alloy-config.yaml	2025-01-14 00:57:38.114535994 +0000
@@ -700,9 +700,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
         action = "keep"
       }
       rule {
           source_labels = ["namespace"]
@@ -1016,5 +1016,5 @@
     }
   k8s-monitoring-build-info-metric.prom: |
     # HELP grafana_kubernetes_monitoring_build_info A metric to report the version of the Kubernetes Monitoring Helm chart as well as a summary of enabled features
     # TYPE grafana_kubernetes_monitoring_build_info gauge
-    grafana_kubernetes_monitoring_build_info{version="1.6.18", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
+    grafana_kubernetes_monitoring_build_info{version="1.6.19", namespace="default", metrics="enabled,alloy,autoDiscover,kube-state-metrics,node-exporter,kubelet,kubeletResource,cadvisor,apiserver,cost,extraConfig", logs="enabled,events,pod_logs", traces="disabled", deployments="kube-state-metrics,prometheus-node-exporter,prometheus-operator-crds"} 1
diff -U 4 -r out/target/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml out/pr/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml
--- out/target/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:58:17.430917086 +0000
+++ out/pr/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/hooks/validate-configuration.yaml	2025-01-14 00:57:38.122536073 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
@@ -709,9 +709,9 @@
     prometheus.relabel "kube_state_metrics" {
       max_cache_size = 100000
       rule {
         source_labels = ["__name__"]
-        regex = "up|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
+        regex = "up|kube_configmap_metadata_resource_version|kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_condition|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolume_status_phase|kube_persistentvolumeclaim_access_mode|kube_persistentvolumeclaim_info|kube_persistentvolumeclaim_labels|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_persistentvolumeclaim_status_phase|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_spec_volumes_persistentvolumeclaims_info|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_secret_metadata_resource_version|kube_statefulset.*|kube_namespace_created|kube_namespace_labels|kube_pod_container_status_running|kube_pod_container_status_ready|kube_pod_container_status_waiting|kube_pod_container_status_terminated|kube_service_info|kube_endpoint_info|kube_ingress_info|kube_deployment_labels|kube_statefulset_labels|kube_daemonset_labels|kube_persistentvolumeclaim_info|kube_hpa_labels|kube_configmap_info|kube_secret_info|kube_networkpolicy_labels|kube_node_info|kube_pod_status_qos_class|kube_pod_container_status_last_terminated_exitcode"
         action = "keep"
       }
       rule {
           source_labels = ["namespace"]
@@ -1238,9 +1238,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": pre-install,pre-upgrade
     "helm.sh/hook-weight": "-5"
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
diff -U 4 -r out/target/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml out/pr/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml
--- out/target/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:58:17.430917086 +0000
+++ out/pr/k8s-monitoring/values-uibklab.yaml/sx-k8s-monitoring/charts/k8s-monitoring/templates/tests/test.yaml	2025-01-14 00:57:38.122536073 +0000
@@ -8,9 +8,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
     "helm.sh/hook-weight": "-1"
@@ -70,9 +70,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -81,9 +81,9 @@
   nodeSelector:
         kubernetes.io/os: linux
   containers:
     - name: config-analysis
-      image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+      image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
       command: [/etc/bin/config-analysis.sh]
       env:
         - name: ALLOY_HOST
           value: release-name-alloy.default.svc:12345
@@ -97,9 +97,9 @@
   labels:
     app.kubernetes.io/managed-by: "Helm"
     app.kubernetes.io/instance: "release-name"
     app.kubernetes.io/version: 2.10.0
-    helm.sh/chart: "k8s-monitoring-1.6.18"
+    helm.sh/chart: "k8s-monitoring-1.6.19"
   annotations:
     "helm.sh/hook": test
     "helm.sh/hook-delete-policy": before-hook-creation
     "helm.sh/hook-weight": "0"
@@ -113,16 +113,16 @@
       namespace: default
       labels:
         app.kubernetes.io/managed-by: "Helm"
         app.kubernetes.io/instance: "release-name"
-        helm.sh/chart: "k8s-monitoring-1.6.18"
+        helm.sh/chart: "k8s-monitoring-1.6.19"
     spec:
       restartPolicy: Never
       nodeSelector:
         kubernetes.io/os: linux
       containers:
         - name: query-test
-          image: ghcr.io/grafana/k8s-monitoring-test:1.6.18
+          image: ghcr.io/grafana/k8s-monitoring-test:1.6.19
           command: ["bash", "-c", "/etc/bin/query-test.sh /etc/test/testQueries.json"]
           volumeMounts:
             - name: test-files
               mountPath: /etc/test

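The substantive change in this diff, beyond the chart version bump, is that the `prometheus.relabel "kube_state_metrics"` keep rule now also admits `kube_configmap_metadata_resource_version` and `kube_secret_metadata_resource_version`. A minimal sketch (not the chart's actual code) of how such a keep rule behaves: Alloy/Prometheus anchor the relabel regex, so a series survives only when its `__name__` matches the pattern in full. The alternation below is a deliberately truncated stand-in for the chart's much longer regex.

```python
import re

# Truncated stand-in for the chart's keep regex; the real pattern lists
# dozens of alternatives. Relabel regexes are fully anchored, hence
# fullmatch() below.
KEEP_REGEX = re.compile(
    r"up"
    r"|kube_configmap_metadata_resource_version"  # newly kept in v1.6.19
    r"|kube_secret_metadata_resource_version"     # newly kept in v1.6.19
    r"|kube_daemonset.*"
)

def keep(metric_name: str) -> bool:
    """Return True if a series with this __name__ would survive the keep rule."""
    return KEEP_REGEX.fullmatch(metric_name) is not None

print(keep("kube_daemonset_status_number_ready"))      # True: matches kube_daemonset.*
print(keep("kube_secret_metadata_resource_version"))   # True: added by this release
print(keep("kube_endpoint_address_available"))         # False: not in this truncated list
```

Series rejected by the rule are dropped before `forward_to`, so widening the regex is how this release starts shipping the two new ConfigMap/Secret resource-version metrics to the metrics service.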

renovate bot commented Jan 14, 2025

Edited/Blocked Notification

Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.

You can manually request rebase by checking the rebase/retry box above.

⚠️ Warning: custom changes will be lost.

@jkleinlercher jkleinlercher merged commit aa50c9d into main Jan 14, 2025
1 check passed