Describe the bug
The deployment runs fine, but the prometheus-operator pod fails almost immediately after it starts.
Kubernetes Version:
v1.17.17+IKS (taken from the operator log below)
Which chart:
prometheus-community/kube-prometheus-stack
Which version of the chart:
kube-prometheus-stack-14.0.0
What happened:
The helm install went fine and all pods started. Shortly after startup, however, the prometheus-operator pod failed with a very short log:
```
ts=2021-03-08T19:08:17.263567905Z caller=main.go:99 msg="Staring insecure server on :8080"
level=info ts=2021-03-08T19:08:17.282302674Z caller=operator.go:452 component=prometheusoperator msg="connection established" cluster-version=v1.17.17+IKS
level=info ts=2021-03-08T19:08:17.282386097Z caller=operator.go:294 component=thanosoperator msg="connection established" cluster-version=v1.17.17+IKS
level=info ts=2021-03-08T19:08:17.282473789Z caller=operator.go:214 component=alertmanageroperator msg="connection established" cluster-version=v1.17.17+IKS
ts=2021-03-08T19:08:17.338940504Z caller=main.go:305 msg="Unhandled error received. Exiting..." err="getting CRD: Alertmanager: customresourcedefinitions.apiextensions.k8s.io \"alertmanagers.monitoring.coreos.com\" is forbidden: User \"system:serviceaccount:monitoring:prometheus-kube-prometheus-operator\" cannot get resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope"
```
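For what it's worth, the forbidden error can be confirmed directly with kubectl, using the service account and namespace from the error message:

```shell
# Ask the API server whether the operator's service account may read CRDs at
# cluster scope; this mirrors the exact call that fails in the log above.
kubectl auth can-i get customresourcedefinitions.apiextensions.k8s.io \
  --as=system:serviceaccount:monitoring:prometheus-kube-prometheus-operator
# In the broken state this prints "no".

# List the cluster role bindings that reference the service account, if any.
kubectl get clusterrolebinding -o wide | grep prometheus-kube-prometheus-operator
```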
What you expected to happen:
All pods should keep running after the deployment.
How to reproduce it (as minimally and precisely as possible):
Do a similar deployment, i.e. one with the admission webhooks and TLS disabled. The root cause, as far as I could investigate it, is described at the end of the ticket.
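For illustration, a minimal install along those lines could look like the sketch below. The release name is a guess based on the service account name in the log, and the two overrides are assumptions (the ticket does not list its changed values); `prometheusOperator.admissionWebhooks.enabled` and `prometheusOperator.tls.enabled` are the chart values for disabling the webhooks and operator TLS.

```shell
# Hypothetical reproduction: install chart version 14.0.0 with the admission
# webhooks and operator TLS disabled. Release name "prometheus" is assumed.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
  --version 14.0.0 \
  --namespace monitoring --create-namespace \
  --set prometheusOperator.admissionWebhooks.enabled=false \
  --set prometheusOperator.tls.enabled=false
```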
Changed values of values.yaml (only put values which differ from the defaults):
Anything else we need to know:
The prometheus-operator pod runs with the service account prometheus-kube-prometheus-operator. There is neither a ClusterRoleBinding nor a RoleBinding that gives that service account access to custom resource definitions.
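For completeness, a minimal grant that would cover the failing call might look like the sketch below. This is not the chart's own RBAC (its bundled ClusterRole normally includes an equivalent rule); the resource names here are hypothetical and only illustrate what the broken install is missing.

```shell
# Apply a hypothetical ClusterRole/ClusterRoleBinding letting the operator's
# service account read CRDs at cluster scope. Names are illustrative only.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-operator-crd-read   # hypothetical name
rules:
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator-crd-read   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator-crd-read
subjects:
  - kind: ServiceAccount
    name: prometheus-kube-prometheus-operator
    namespace: monitoring
EOF
```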