prometheus-operator 0.33: ruleNamespaceSelector is not working #2890
Comments
serviceMonitorNamespaceSelector is also not working
Does the
Any update/workaround for this?
This doesn't look like a bug or anything; it looks like plain misconfiguration. I assume almost all users just want the "all namespaces" selector, which is expressed by setting ruleNamespaceSelector: {} instead of specifying a label.
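(For concreteness, an "all namespaces" selector on the raw Prometheus custom resource looks roughly like the fragment below; this is an illustrative sketch, not a manifest taken from this thread.)

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: monitoring
spec:
  # An empty namespace selector matches PrometheusRules in every namespace;
  # leaving it unset restricts discovery to the Prometheus object's own namespace.
  ruleNamespaceSelector: {}
  # An empty rule selector matches every PrometheusRule regardless of labels.
  ruleSelector: {}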
@brancz Explicitly having that in my config doesn't seem to get all the namespaces either, I am still only seeing rules from my

---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: prometheus-operator
  namespace: monitoring
  annotations:
    fluxcd.io/ignore: "false"
    fluxcd.io/automated: "false"
spec:
  releaseName: prometheus-operator
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: prometheus-operator
    version: 8.7.0
  values:
    ...
    prometheus:
      prometheusSpec:
        ruleNamespaceSelector: {}
        serviceMonitorNamespaceSelector: {}
        retention: 30d
        enableAdminAPI: true
        serviceMonitorSelectorNilUsesHelmValues: false
    ...

Also these docs need to be updated, there is no
You’re right, the docs do need updating. cc @pgier, can you look into why the selector may not be working as expected?
It's an easy issue to replicate; it seems to have been broken for a while now. Let me know if there are more details you need me to provide. A link to my GitOps project and this specific Helm deployment is here: https://github.com/onedr0p/k3s-gitops/blob/master/deployments/monitoring/prometheus-operator/prometheus-operator.yaml
@onedr0p Could you please share what version of prometheus-operator you are running? I cannot reproduce it on the master branch. Update: from the Helm chart it seems the operator version is 0.35.0; I cannot reproduce it with that either.
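(For anyone reproducing this, one way to check the running operator version is to read the image tag off the operator Deployment; the Deployment name and namespace below are assumptions based on the release used in this thread.)

kubectl -n monitoring get deploy prometheus-operator-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'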
@yeya24 My entire Kubernetes cluster is at that link, https://github.com/onedr0p/k3s-gitops, and my helm values for the chart are here:
I am running Kubernetes v1.17.2, could that be a factor? Something isn't lining up... I turned up the log level and will post if I see anything interesting.
Could you please share your Prometheus CR yaml file? I think there might be something off with the chart, but I am not sure.
kubectl get clusterrole/prometheus -n monitoring -o yaml

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.ceph.rook.io/aggregate-to-prometheus: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    fluxcd.io/sync-checksum: 9ea4adcb3ddfa12d3d51f7dafabcaa2e9f1d00fc
    kubectl.kubernetes.io/last-applied-configuration: |
      {"aggregationRule":{"clusterRoleSelectors":[{"matchLabels":{"rbac.ceph.rook.io/aggregate-to-prometheus":"true"}}]},"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRole","metadata":{"annotations":{"fluxcd.io/sync-checksum":"9ea4adcb3ddfa12d3d51f7dafabcaa2e9f1d00fc"},"labels":{"fluxcd.io/sync-gc-mark":"sha256.ZmuhpG59XQ5T2S7-fis4EhbV5xqthNppFp5TjlBJ5sE"},"name":"prometheus"},"rules":[]}
  creationTimestamp: "2020-01-24T14:45:55Z"
  labels:
    fluxcd.io/sync-gc-mark: sha256.ZmuhpG59XQ5T2S7-fis4EhbV5xqthNppFp5TjlBJ5sE
  name: prometheus
  resourceVersion: "7576678"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/prometheus
  uid: 5ae891d6-e5bf-4d0a-baee-8aa9bafff25f
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get

Sorry for the bad yaml formatting, I'm on mobile.
Sorry for not being clear. Could you please share the YAML of your Prometheus custom resource, i.e. the Prometheus object itself?
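(For anyone following along, a command like the one below should dump that object; the monitoring namespace is an assumption based on the chart values shown earlier in this thread.)

kubectl get prometheus -n monitoring -o yaml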
apiVersion: v1
items:
- apiVersion: monitoring.coreos.com/v1
  kind: Prometheus
  metadata:
    annotations:
      helm.fluxcd.io/antecedent: monitoring:helmrelease/prometheus-operator
    creationTimestamp: "2020-01-24T14:36:18Z"
    generation: 6
    labels:
      app: prometheus-operator-prometheus
      chart: prometheus-operator-8.7.0
      heritage: Helm
      release: prometheus-operator
    name: prometheus-operator-prometheus
    namespace: monitoring
    resourceVersion: "7022408"
    selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus-operator-prometheus
    uid: 8cee14f4-7b9a-4ea9-90de-e8e350eccce1
  spec:
    additionalScrapeConfigs:
      key: additional-scrape-configs.yaml
      name: prometheus-operator-prometheus-scrape-confg
    alerting:
      alertmanagers:
      - apiVersion: v2
        name: prometheus-operator-alertmanager
        namespace: monitoring
        pathPrefix: /
        port: web
    baseImage: quay.io/prometheus/prometheus
    enableAdminAPI: true
    externalUrl: http://redacted
    listenLocal: false
    logFormat: logfmt
    logLevel: info
    paused: false
    podMonitorNamespaceSelector: {}
    podMonitorSelector:
      matchLabels:
        release: prometheus-operator
    portName: web
    replicas: 1
    retention: 30d
    routePrefix: /
    ruleNamespaceSelector: {}
    ruleSelector:
      matchLabels:
        app: prometheus-operator
        release: prometheus-operator
    securityContext:
      fsGroup: 2000
      runAsNonRoot: true
      runAsUser: 1000
    serviceAccountName: prometheus-operator-prometheus
    serviceMonitorNamespaceSelector: {}
    serviceMonitorSelector:
      matchLabels:
        release: prometheus-operator
    storage:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi
          storageClassName: rook-ceph-block
    version: v2.15.2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Thanks, @onedr0p! The manifest looks good, so the remaining possibility is that the rule labels cannot be matched.
Could you please set your prometheus-operator log level to debug and check whether you can find logs like
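(For the Helm chart used in this thread, operator debug logging can typically be switched on with values along these lines; the exact key is an assumption based on the chart's prometheusOperator value block rather than something quoted here.)

prometheusOperator:
  logLevel: debug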
Will do. In the meantime, does it make sense to set the following?
and
Will that force it to include all rules and pods?
Logs from the past hour with debug turned on (grabbed with Loki):
Yes, I think they have the same logic. Could you please share the logs of your prometheus-operator instance, not Prometheus? It is OK to just use kubectl logs and check whether the output contains
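(For illustration, a command along these lines would pull the operator's logs; the Deployment name is an assumption based on the release name used in this thread.)

kubectl logs -n monitoring deploy/prometheus-operator-operator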
I think I found what you want:
Full logs here: https://gist.github.com/onedr0p/bd695f14ff91bef802c4f290ddb58c97
It looks like all namespaces are being monitored! derp. For example, I have https://github.com/onedr0p/k3s-gitops/blob/master/deployments/rook-ceph/monitoring/prometheus-ceph-v14-rules.yaml. Would updating the first few lines to this make it pick up the rules?

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: rook-prometheus
    role: alert-rules
    app: prometheus-operator
    release: prometheus-operator
...
Yes, I think so. You could try it and see if it works.
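(A quick way to double-check which labels each PrometheusRule carries is standard kubectl:)

kubectl get prometheusrules --all-namespaces --show-labels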
And just like that it started picking up the rules. Anyone else in this issue: make sure you have your labels set correctly on your PrometheusRule objects. Thanks @yeya24 for all the help! I think this issue can be closed, as it really was a misconfiguration issue :)
More information for people who want to ignore labels and just have monitoring and rules for all namespaces/objects: it's important to set the following chart values:

prometheus:
  prometheusSpec:
    ruleSelector: {}
    ruleNamespaceSelector: {}
    ruleSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelector: {}
    podMonitorNamespaceSelector: {}
    podMonitorSelectorNilUsesHelmValues: false
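(After applying values like these, the rendered Prometheus object should show empty selectors. A spot-check along these lines, with the namespace and resource name assumed from this thread's release, would confirm it.)

kubectl get prometheus prometheus-operator-prometheus -n monitoring \
  -o jsonpath='{.spec.ruleSelector} {.spec.ruleNamespaceSelector}'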
This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.
Looks like things work as expected. :) Closing.
What happened?
prometheus-operator 0.33: ruleNamespaceSelector is not working
Did you expect to see something different?
I expected that when I create a PrometheusRule in another namespace, the prometheus-operator can find it.
How to reproduce it (as minimally and precisely as possible):
I create a PrometheusRule in the kube-system namespace, but there is nothing in Prometheus's UI.
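(To make the reproduction concrete, a minimal PrometheusRule along these lines, placed in kube-system, is the kind of object being described; the rule name, labels, and alert are illustrative assumptions, and the labels would still need to match the Prometheus ruleSelector discussed earlier in the thread.)

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert-rules   # illustrative name
  namespace: kube-system
  labels:
    app: prometheus-operator      # assumed to match the ruleSelector used in this thread
    release: prometheus-operator
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlwaysFiring
      expr: vector(1)
      labels:
        severity: info
      annotations:
        summary: Example alert used to verify rule discovery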
Environment
Prometheus Operator version:
prometheus-operator 0.33
Kubernetes version information:
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl version: 1.10
Kubernetes cluster kind:
kubeadm
Manifests:
Anything else we need to know?: