
prometheus-operator 0.33, ruleNamespaceSelector is not working #2890

Closed
gjpei2 opened this issue Nov 29, 2019 · 26 comments

Comments

@gjpei2

gjpei2 commented Nov 29, 2019

What happened?
prometheus-operator 0.33: ruleNamespaceSelector is not working.
Did you expect to see something different?
When I create a PrometheusRule in another namespace, I expect the Prometheus Operator to find it.
How to reproduce it (as minimally and precisely as possible):
I created a PrometheusRule in the kube-system namespace, but nothing shows up in Prometheus's UI.
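
For example, a minimal PrometheusRule like the following (a sketch; the rule name and expression are illustrative, not taken from the original report) placed in kube-system never shows up in the UI:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts           # illustrative name
  namespace: kube-system
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlwaysFiring # illustrative alert that always fires
      expr: vector(1)
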
Environment

  • Prometheus Operator version:
    prometheus-operator 0.33

  • Kubernetes version information:
    Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
    kubectl version
    1.10

  • Kubernetes cluster kind:

    kubeadm

  • Manifests:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: janus
  name: janus
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager
      namespace: monitoring
      port: alert
  retention: 300d
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: local-storage
        resources:
          requests:
            storage: 90Gi
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    prometheus: deployed
  podMonitorSelector: {}
  replicas: 2
  ruleSelector: {}
  ruleNamespaceSelector: 
    matchLabels:
      namespace: kube-system
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  additionalScrapeConfigs:
    name: additional-configs
    key: prometheus-additional.yaml
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0

  • Prometheus Operator Logs:
level=info ts=2019-11-29T00:10:46.984367341Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:13:46.973647144Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:16:46.44537077Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:19:47.140513326Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:22:49.756379862Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:25:47.030447614Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:28:46.751344447Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:31:46.643626445Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:34:47.158656178Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:37:47.109882047Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:40:47.162427809Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:43:47.351519659Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:46:46.842565308Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:49:46.834624258Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:52:47.015767322Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:55:46.633987853Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T00:58:46.806089565Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:01:46.462754207Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:04:46.425956854Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:07:46.874226277Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:10:46.91100777Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:13:46.496781191Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:15:38.431800191Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:15:38.456870759Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus
level=info ts=2019-11-29T01:15:38.468594256Z caller=operator.go:1056 component=prometheusoperator msg="sync prometheus" key=monitoring/janus

Anything else we need to know?:

@gjpei2
Author

gjpei2 commented Nov 29, 2019

serviceMonitorNamespaceSelector is also not working

@vsliouniaev
Contributor

ruleNamespaceSelector: 
    matchLabels:
      namespace: kube-system

Does the kube-system namespace have this label?
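
For the selector above to match anything, the kube-system namespace object itself would need to carry a namespace: kube-system label, which it typically does not have by default. A minimal sketch of what that labeling would look like (assuming you want to keep the label-based selector rather than switching to an empty one):

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    namespace: kube-system   # assumed label; only needed so the matchLabels selector above can match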

@obrienrobert

Any update/workaround for this?

@brancz
Contributor

brancz commented Jan 9, 2020

This doesn’t look like a bug or anything; it looks like plain misconfiguration. I assume almost all users just want the match-all selector, which means setting ruleNamespaceSelector: {} instead of specifying a label.
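
In the Prometheus custom resource, that looks like the following (a minimal sketch showing only the relevant fields):

spec:
  # An empty (non-nil) namespace selector matches every namespace...
  ruleNamespaceSelector: {}
  # ...and an empty rule selector matches every PrometheusRule in them,
  # regardless of labels.
  ruleSelector: {}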

@onedr0p

onedr0p commented Feb 4, 2020

@brancz ruleNamespaceSelector: {} is the default and doesn't work for gathering rules in all namespaces. I doubt this is a misconfiguration issue.

Explicitly having that in my config doesn't seem to pick up all the namespaces either; I am still only seeing rules from my monitoring namespace. I am using v8.7.0 of the Helm chart.

---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: prometheus-operator
  namespace: monitoring
  annotations:
    fluxcd.io/ignore: "false"
    fluxcd.io/automated: "false"
spec:
  releaseName: prometheus-operator
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: prometheus-operator
    version: 8.7.0
  values:
...
    prometheus:
      prometheusSpec:
        ruleNamespaceSelector: {}
        serviceMonitorNamespaceSelector: {}
        retention: 30d
        enableAdminAPI: true
        serviceMonitorSelectorNilUsesHelmValues: false
...

Also, these docs need to be updated: there is no any: true available for ruleNamespaceSelector or serviceMonitorNamespaceSelector.

Parameter: prometheus.prometheusSpec.ruleNamespaceSelector
Description: Namespaces to be selected for PrometheusRules discovery. If nil, select own namespace. See namespaceSelector for usage.
Default: {}

@brancz
Contributor

brancz commented Feb 5, 2020

You’re right, the docs do need updating.

cc @pgier, can you look into why the selector may not be working as expected?

@onedr0p

onedr0p commented Feb 6, 2020

It's an easy issue to replicate. It seems to have been broken for a while now... Let me know if there are more details you need me to provide. A link to my GitOps project and this specific Helm deployment is here: https://github.com/onedr0p/k3s-gitops/blob/master/deployments/monitoring/prometheus-operator/prometheus-operator.yaml

@yeya24
Contributor

yeya24 commented Feb 6, 2020

@onedr0p Could you please share what version of prometheus-operator you are running? I cannot reproduce it on the master branch.

Update: from the Helm chart, it seems the operator version is 0.35.0; I cannot reproduce this either.

@onedr0p

onedr0p commented Feb 6, 2020

I am running Kubernetes v1.17.2, could that be a factor?

Something isn't lining up... I turned up the log level and will post if I see anything interesting.

@yeya24
Contributor

yeya24 commented Feb 6, 2020

I am running Kubernetes v1.17.2, could that be a factor?

Something isn't lining up...

Could you please share your Prometheus CR YAML file? I think there might be something wrong with the chart, but I am not sure.

@onedr0p

onedr0p commented Feb 6, 2020

kubectl get clusterrole/prometheus -n monitoring -o yaml

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.ceph.rook.io/aggregate-to-prometheus: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    fluxcd.io/sync-checksum: 9ea4adcb3ddfa12d3d51f7dafabcaa2e9f1d00fc
    kubectl.kubernetes.io/last-applied-configuration: |
      {"aggregationRule":{"clusterRoleSelectors":[{"matchLabels":{"rbac.ceph.rook.io/aggregate-to-prometheus":"true"}}]},"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRole","metadata":{"annotations":{"fluxcd.io/sync-checksum":"9ea4adcb3ddfa12d3d51f7dafabcaa2e9f1d00fc"},"labels":{"fluxcd.io/sync-gc-mark":"sha256.ZmuhpG59XQ5T2S7-fis4EhbV5xqthNppFp5TjlBJ5sE"},"name":"prometheus"},"rules":[]}
  creationTimestamp: "2020-01-24T14:45:55Z"
  labels:
    fluxcd.io/sync-gc-mark: sha256.ZmuhpG59XQ5T2S7-fis4EhbV5xqthNppFp5TjlBJ5sE
  name: prometheus
  resourceVersion: "7576678"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/prometheus
  uid: 5ae891d6-e5bf-4d0a-baee-8aa9bafff25f
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get

Sorry for the bad YAML formatting, I'm on mobile.

@yeya24
Contributor

yeya24 commented Feb 6, 2020

Sorry for not being clear. Could you please share the YAML of your Prometheus custom resource, like kubectl get prometheus -n monitoring -o yaml?

@onedr0p

onedr0p commented Feb 6, 2020

@yeya24

apiVersion: v1
items:
- apiVersion: monitoring.coreos.com/v1
  kind: Prometheus
  metadata:
    annotations:
      helm.fluxcd.io/antecedent: monitoring:helmrelease/prometheus-operator
    creationTimestamp: "2020-01-24T14:36:18Z"
    generation: 6
    labels:
      app: prometheus-operator-prometheus
      chart: prometheus-operator-8.7.0
      heritage: Helm
      release: prometheus-operator
    name: prometheus-operator-prometheus
    namespace: monitoring
    resourceVersion: "7022408"
    selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/prometheus-operator-prometheus
    uid: 8cee14f4-7b9a-4ea9-90de-e8e350eccce1
  spec:
    additionalScrapeConfigs:
      key: additional-scrape-configs.yaml
      name: prometheus-operator-prometheus-scrape-confg
    alerting:
      alertmanagers:
      - apiVersion: v2
        name: prometheus-operator-alertmanager
        namespace: monitoring
        pathPrefix: /
        port: web
    baseImage: quay.io/prometheus/prometheus
    enableAdminAPI: true
    externalUrl: http://redacted
    listenLocal: false
    logFormat: logfmt
    logLevel: info
    paused: false
    podMonitorNamespaceSelector: {}
    podMonitorSelector:
      matchLabels:
        release: prometheus-operator
    portName: web
    replicas: 1
    retention: 30d
    routePrefix: /
    ruleNamespaceSelector: {}
    ruleSelector:
      matchLabels:
        app: prometheus-operator
        release: prometheus-operator
    securityContext:
      fsGroup: 2000
      runAsNonRoot: true
      runAsUser: 1000
    serviceAccountName: prometheus-operator-prometheus
    serviceMonitorNamespaceSelector: {}
    serviceMonitorSelector:
      matchLabels:
        release: prometheus-operator
    storage:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi
          storageClassName: rook-ceph-block
    version: v2.15.2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@yeya24
Contributor

yeya24 commented Feb 6, 2020

Thanks, @onedr0p! The manifest is good, so the last remaining possibility is that the rule labels cannot be matched:

    ruleSelector:
      matchLabels:
        app: prometheus-operator
        release: prometheus-operator

Could you please set your prometheus-operator log level to debug and check whether you can find logs like level=debug ts=2020-02-06T13:46:01.360165Z caller=rules.go:200 component=prometheusoperator msg="selected Rules" rules=kube-system-prometheus-example-alerts.yaml namespace=default prometheus=self?
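
For reference, a PrometheusRule is only picked up by that selector if its metadata carries both labels, for example (a sketch; the name and namespace are illustrative):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules          # illustrative name
  namespace: some-namespace    # any namespace works, since ruleNamespaceSelector is {}
  labels:
    app: prometheus-operator   # both labels must match the ruleSelector matchLabels
    release: prometheus-operator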

@onedr0p

onedr0p commented Feb 6, 2020

Will do. In the meantime, does it make sense to set the following?

ruleSelector: {}

and

podMonitorSelector: {}

Will that force it to include all rules and pods?

@onedr0p

onedr0p commented Feb 6, 2020

Logs from the past hour with debug turned on:

Grabbed with Loki {prometheus="prometheus-operator-prometheus", app="prometheus"}

2020-02-06 09:10:07	
level=debug ts=2020-02-06T14:10:07.382Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?resourceVersion=7733826&timeout=8m54s&timeoutSeconds=534&watch=true 200 OK in 3 milliseconds"
2020-02-06 09:10:07	
level=debug ts=2020-02-06T14:10:07.379Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:263: Watch close - *v1.Pod total 1 items received"
2020-02-06 09:09:48	
level=debug ts=2020-02-06T14:09:48.384Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?resourceVersion=7733828&timeout=9m38s&timeoutSeconds=578&watch=true 200 OK in 2 milliseconds"
2020-02-06 09:09:48	
level=debug ts=2020-02-06T14:09:48.381Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:261: Watch close - *v1.Endpoints total 2 items received"
2020-02-06 09:09:43	
level=debug ts=2020-02-06T14:09:43.380Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?resourceVersion=6970069&timeout=7m56s&timeoutSeconds=476&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:09:43	
level=debug ts=2020-02-06T14:09:43.378Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:262: Watch close - *v1.Service total 0 items received"
2020-02-06 09:09:40	
level=debug ts=2020-02-06T14:09:40.384Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?resourceVersion=6970069&timeout=6m57s&timeoutSeconds=417&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:09:40	
level=debug ts=2020-02-06T14:09:40.382Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:262: Watch close - *v1.Service total 0 items received"
2020-02-06 09:09:37	
level=debug ts=2020-02-06T14:09:37.383Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/pods?resourceVersion=7733753&timeout=5m7s&timeoutSeconds=307&watch=true 200 OK in 2 milliseconds"
2020-02-06 09:09:37	
level=debug ts=2020-02-06T14:09:37.380Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:263: Watch close - *v1.Pod total 0 items received"
2020-02-06 09:09:18	
level=debug ts=2020-02-06T14:09:18.383Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/pods?resourceVersion=7733753&timeout=6m28s&timeoutSeconds=388&watch=true 200 OK in 0 milliseconds"
2020-02-06 09:09:18	
level=debug ts=2020-02-06T14:09:18.382Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:263: Watch close - *v1.Pod total 0 items received"
2020-02-06 09:09:17	
level=debug ts=2020-02-06T14:09:17.381Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/endpoints?resourceVersion=7735613&timeout=8m45s&timeoutSeconds=525&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:09:17	
level=debug ts=2020-02-06T14:09:17.379Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:261: Watch close - *v1.Endpoints total 181 items received"
2020-02-06 09:08:42	
level=debug ts=2020-02-06T14:08:42.379Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?resourceVersion=7733828&timeout=8m12s&timeoutSeconds=492&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:08:42	
level=debug ts=2020-02-06T14:08:42.377Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="/app/discovery/kubernetes/kubernetes.go:261: Watch close - *v1.Endpoints total 2 items received"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.471Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches populated"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.469Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches populated"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.469Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches populated"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.468Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches populated"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.418000964Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.417Z caller=main.go:762 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.382Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/pods?resourceVersion=7733753&timeout=6m5s&timeoutSeconds=365&watch=true 200 OK in 0 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.382Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?resourceVersion=7733753&timeout=8m23s&timeoutSeconds=503&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.381Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?resourceVersion=6970069&timeout=6m27s&timeoutSeconds=387&watch=true 200 OK in 2 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.380Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?resourceVersion=7733821&timeout=6m35s&timeoutSeconds=395&watch=true 200 OK in 5 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.380Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/services?resourceVersion=6970069&timeout=7m20s&timeoutSeconds=440&watch=true 200 OK in 2 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.380Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/pods?resourceVersion=7733753&timeout=6m24s&timeoutSeconds=384&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.379Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/pods?limit=500&resourceVersion=0 200 OK in 9 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.379Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?resourceVersion=6970069&timeout=6m30s&timeoutSeconds=390&watch=true 200 OK in 7 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.378Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?limit=500&resourceVersion=0 200 OK in 6 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.378Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?resourceVersion=7733753&timeout=6m54s&timeoutSeconds=414&watch=true 200 OK in 4 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.378Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/endpoints?resourceVersion=7733821&timeout=6m4s&timeoutSeconds=364&watch=true 200 OK in 4 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.377Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/services?resourceVersion=6970069&timeout=9m4s&timeoutSeconds=544&watch=true 200 OK in 5 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.377Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/services?limit=500&resourceVersion=0 200 OK in 7 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.377Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?limit=500&resourceVersion=0 200 OK in 5 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.377Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?resourceVersion=7733821&timeout=5m29s&timeoutSeconds=329&watch=true 200 OK in 6 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.375Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0 200 OK in 4 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.374Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?limit=500&resourceVersion=0 200 OK in 2 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.374Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/endpoints?resourceVersion=7733821&timeout=7m34s&timeoutSeconds=454&watch=true 200 OK in 2 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.373Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/endpoints?limit=500&resourceVersion=0 200 OK in 3 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/services?limit=500&resourceVersion=0 200 OK in 2 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="stop requested"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?limit=500&resourceVersion=0 200 OK in 2 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.371Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?limit=500&resourceVersion=0 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.370Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?limit=500&resourceVersion=0 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=manager.go:224 component="discovery manager notify" msg="Starting provider" provider=*kubernetes.SDConfig/0 subs=[config-0]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=manager.go:242 component="discovery manager scrape" msg="discoverer channel closed" provider=string/5
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.369Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.368Z caller=kubernetes.go:190 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:242 component="discovery manager scrape" msg="discoverer channel closed" provider=string/2
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:242 component="discovery manager scrape" msg="discoverer channel closed" provider=string/1
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=string/5 subs=[netdata-scrape]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=*kubernetes.SDConfig/4 subs="[monitoring/prometheus-operator-kube-proxy/0 monitoring/prometheus-operator-kubelet/0 monitoring/prometheus-operator-coredns/0 monitoring/prometheus-operator-kubelet/1]"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=*kubernetes.SDConfig/3 subs=[monitoring/prometheus-operator-apiserver/0]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=string/2 subs=[pihole-exporter]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=string/1 subs=[sonarr-exporter]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.368Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=*kubernetes.SDConfig/0 subs="[monitoring/prometheus-operator-alertmanager/0 monitoring/prometheus-operator-node-exporter/0 monitoring/prometheus-operator-kube-state-metrics/0 monitoring/prometheus-operator-prometheus/0 monitoring/prometheus-operator-grafana/0 monitoring/prometheus-operator-operator/0]"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.367Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.366Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.365Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="stop requested"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.365Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="stop requested"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.365Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.365Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="stop requested"
2020-02-06 09:03:13	
ts=2020-02-06T14:03:13.365Z caller=dedupe.go:111 component=remote level=debug msg="remote write config has not changed, no need to restart QueueManagers"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?resourceVersion=7733821&timeout=5m46s&timeoutSeconds=346&watch=true 200 OK in 0 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/services?resourceVersion=6970069&timeout=5m19s&timeoutSeconds=319&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/pods?limit=500&resourceVersion=0 200 OK in 16 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?resourceVersion=6970069&timeout=7m11s&timeoutSeconds=431&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/endpoints?resourceVersion=7733821&timeout=8m19s&timeoutSeconds=499&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?limit=500&resourceVersion=0 200 OK in 15 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.364Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/services?resourceVersion=6970069&timeout=7m7s&timeoutSeconds=427&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.363Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/endpoints?resourceVersion=7733821&timeout=8m26s&timeoutSeconds=506&watch=true 200 OK in 0 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.363Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?limit=500&resourceVersion=0 200 OK in 15 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.363Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?resourceVersion=6970069&timeout=9m42s&timeoutSeconds=582&watch=true 200 OK in 1 milliseconds"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.363Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.363Z caller=main.go:617 msg="Server is ready to receive web requests."
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.363Z caller=main.go:762 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.363Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/services?limit=500&resourceVersion=0 200 OK in 15 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.362Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?resourceVersion=7733821&timeout=8m1s&timeoutSeconds=481&watch=true 200 OK in 0 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.362Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0 200 OK in 14 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.362Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/endpoints?limit=500&resourceVersion=0 200 OK in 14 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.362Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0 200 OK in 14 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.362Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/pods?limit=500&resourceVersion=0 200 OK in 14 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.361Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/default/services?limit=500&resourceVersion=0 200 OK in 13 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.361Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?limit=500&resourceVersion=0 200 OK in 13 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.360Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/endpoints?limit=500&resourceVersion=0 200 OK in 13 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.360Z caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET https://10.43.0.1:443/api/v1/namespaces/monitoring/services?limit=500&resourceVersion=0 200 OK in 12 milliseconds"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.348Z caller=manager.go:224 component="discovery manager notify" msg="Starting provider" provider=*kubernetes.SDConfig/0 subs=[config-0]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:242 component="discovery manager scrape" msg="discoverer channel closed" provider=string/1
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:263"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:262"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting reflector *v1.Endpoints (10m0s) from /app/discovery/kubernetes/kubernetes.go:261"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.347Z caller=kubernetes.go:190 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:242 component="discovery manager scrape" msg="discoverer channel closed" provider=string/4
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:242 component="discovery manager scrape" msg="discoverer channel closed" provider=string/2
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=*kubernetes.SDConfig/5 subs=[monitoring/prometheus-operator-apiserver/0]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=string/4 subs=[sonarr-exporter]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=*kubernetes.SDConfig/3 subs="[monitoring/prometheus-operator-kube-proxy/0 monitoring/prometheus-operator-kubelet/0 monitoring/prometheus-operator-kubelet/1 monitoring/prometheus-operator-coredns/0]"
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=string/2 subs=[pihole-exporter]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=string/1 subs=[netdata-scrape]
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.347Z caller=manager.go:224 component="discovery manager scrape" msg="Starting provider" provider=*kubernetes.SDConfig/0 subs="[monitoring/prometheus-operator-kube-state-metrics/0 monitoring/prometheus-operator-prometheus/0 monitoring/prometheus-operator-alertmanager/0 monitoring/prometheus-operator-node-exporter/0 monitoring/prometheus-operator-grafana/0 monitoring/prometheus-operator-operator/0]"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.346Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.346Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.345Z caller=kubernetes.go:190 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.342Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
2020-02-06 09:03:13	
level=debug ts=2020-02-06T14:03:13.342Z caller=main.go:665 msg="TSDB options" MinBlockDuration=2h MaxBlockDuration=3d MaxBytes=0B NoLockfile=true RetentionDuration=30d WALSegmentSize=0B AllowOverlappingBlocks=false WALCompression=false
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.342Z caller=main.go:664 msg="TSDB started"
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.342Z caller=main.go:663 fs_type=EXT4_SUPER_MAGIC
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.239Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=471 maxSegment=471
2020-02-06 09:03:13	
level=info ts=2020-02-06T14:03:13.239Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=470 maxSegment=471
2020-02-06 09:03:12	
level=info ts=2020-02-06T14:03:12.677Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=469 maxSegment=471
2020-02-06 09:03:09	
level=info ts=2020-02-06T14:03:09.502Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=468 maxSegment=471
2020-02-06 09:03:08	
level=info ts=2020-02-06T14:03:08.459Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=467 maxSegment=471
2020-02-06 09:03:05	
level=info ts=2020-02-06T14:03:05.908Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=466 maxSegment=471
2020-02-06 09:03:04	
level=info ts=2020-02-06T14:03:04.208Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=465 maxSegment=471
2020-02-06 09:03:03	
level=info ts=2020-02-06T14:03:03.647Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=464 maxSegment=471
2020-02-06 09:03:01	
level=info ts=2020-02-06T14:03:01.685Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=463 maxSegment=471
2020-02-06 09:02:59	
level=info ts=2020-02-06T14:02:59.923Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=462 maxSegment=471
2020-02-06 09:02:59	
level=info ts=2020-02-06T14:02:59.062Z caller=head.go:608 component=tsdb msg="WAL checkpoint loaded"
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.421Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.360879975Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml
2020-02-06 09:02:58	
ts=2020-02-06T14:02:58.36080492Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.0'."
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.035Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580983200000 maxt=1580990400000 ulid=01E0D9BY1BFAHEDCK7DQ966VWD
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.034Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580976000000 maxt=1580983200000 ulid=01E0D2G6SJZH9V339QFC1RAHH5
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.032Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580947200000 maxt=1580968800000 ulid=01E0CVMK5JPTG2AKNB7NPD8QQS
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.030Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580968800000 maxt=1580976000000 ulid=01E0CVMFHHVDYD4D9H4XFCVHWD
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.019Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580925600000 maxt=1580947200000 ulid=01E0C71DH2C2FZE3WS9DYKVE5A
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.016Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580860800000 maxt=1580925600000 ulid=01E0BJEBEP61SZ70ZSF0H2V9GA
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.014Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580666400000 maxt=1580860800000 ulid=01E09MN102K38TBVG0WV5AT4CW
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.012Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580472000000 maxt=1580666400000 ulid=01E03V855VP3ZTP3HJ2PT4MEVG
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.010Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580277600000 maxt=1580472000000 ulid=01DZY1VHPVZS1TTYRW208DHDCT
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.008Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1580083200000 maxt=1580277600000 ulid=01DZR8EXS481WRKAABCCBSXACB
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.006Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1579888800000 maxt=1580083200000 ulid=01DZJF29QS9CPBXXDX5CJCW2DH
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.004Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1579876615847 maxt=1579888800000 ulid=01DZCWHABAN2NA4RNC6PX099Z4
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.001Z caller=web.go:506 component=web msg="Start listening for connections" address=0.0.0.0:9090
2020-02-06 09:02:58	
level=info ts=2020-02-06T14:02:58.001Z caller=main.go:648 msg="Starting TSDB ..."
2020-02-06 09:02:57	
level=info ts=2020-02-06T14:02:57.982Z caller=main.go:334 vm_limits="(soft=unlimited, hard=unlimited)"
2020-02-06 09:02:57	
level=info ts=2020-02-06T14:02:57.982Z caller=main.go:333 fd_limits="(soft=1048576, hard=1048576)"
2020-02-06 09:02:57	
level=info ts=2020-02-06T14:02:57.982Z caller=main.go:332 host_details="(Linux 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 prometheus-prometheus-operator-prometheus-0 (none))"
2020-02-06 09:02:57	
level=info ts=2020-02-06T14:02:57.982Z caller=main.go:331 build_context="(go=go1.13.5, user=root@688433cf4ff7, date=20200106-14:50:51)"
2020-02-06 09:02:57	
level=info ts=2020-02-06T14:02:57.982Z caller=main.go:330 msg="Starting Prometheus" version="(version=2.15.2, branch=HEAD, revision=d9613e5c466c6e9de548c4dae1b9aabf9aaf7c57)"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.362Z caller=main.go:730 msg="See you next time!"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.362Z caller=main.go:718 msg="Notifier manager stopped"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.361Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.338Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.337Z caller=main.go:547 msg="Scrape manager stopped"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.337Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.336Z caller=main.go:527 msg="Notify discovery manager stopped"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.336Z caller=main.go:513 msg="Scrape discovery manager stopped"
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.336Z caller=main.go:553 msg="Stopping scrape manager..."
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.336Z caller=main.go:531 msg="Stopping notify discovery manager..."
2020-02-06 09:02:35	
level=info ts=2020-02-06T14:02:35.336Z caller=main.go:517 msg="Stopping scrape discovery manager..."
2020-02-06 09:02:35	
level=warn ts=2020-02-06T14:02:35.336Z caller=main.go:494 msg="Received SIGTERM, exiting gracefully..."
2020-02-06 09:02:29	
level=warn ts=2020-02-06T14:02:29.419Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7728110 (7730131)"
2020-02-06 08:56:45	
level=warn ts=2020-02-06T13:56:45.483Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7727167 (7728357)"
2020-02-06 08:56:17	
level=warn ts=2020-02-06T13:56:17.381Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7727913 (7728196)"
2020-02-06 08:44:48	
level=warn ts=2020-02-06T13:44:48.406Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7724288 (7724799)"
2020-02-06 08:44:07	
level=warn ts=2020-02-06T13:44:07.367Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7723318 (7724593)"
2020-02-06 08:41:37	
level=warn ts=2020-02-06T13:41:37.468Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7722834 (7723856)"
2020-02-06 08:31:51	
level=warn ts=2020-02-06T13:31:51.391Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7720768 (7720965)"
2020-02-06 08:28:36	
level=warn ts=2020-02-06T13:28:36.349Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7718864 (7720008)"
2020-02-06 08:26:58	
level=warn ts=2020-02-06T13:26:58.451Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7718277 (7719538)"
2020-02-06 08:19:58	
level=warn ts=2020-02-06T13:19:58.375Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7716495 (7717452)"
2020-02-06 08:13:31	
level=warn ts=2020-02-06T13:13:31.333Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7715132 (7715551)"
2020-02-06 08:11:33	
level=warn ts=2020-02-06T13:11:33.437Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints ended with: too old resource version: 7714424 (7714967)"

@yeya24
Contributor

yeya24 commented Feb 6, 2020

Will do. In the meantime, does it make sense to set the following?

ruleSelector: {}

and

podMonitorSelector: {}

Will that force it to include all rules and pods?

Yes, I think they have the same logic.

Could you please share the logs of your prometheus-operator instance, not Prometheus? It is OK to just use kubectl logs to check whether they contain selected Rules messages.

@onedr0p

onedr0p commented Feb 6, 2020

I think I found what you want:

level=debug ts=2020-02-06T14:02:25.773224064Z caller=rules.go:200 component=prometheusoperator msg="selected Rules" rules=monitoring-prometheus-operator-kubernetes-system-kubelet.yaml,monitoring-prometheus-operator-prometheus.yaml,monitoring-prometheus-operator-kubernetes-absent.yaml,monitoring-prometheus-operator-kubernetes-system.yaml,monitoring-prometheus-operator-general.rules.yaml,monitoring-prometheus-operator-node-network.yaml,monitoring-prometheus-operator-node-time.yaml,monitoring-prometheus-operator-k8s.rules.yaml,monitoring-prometheus-operator-kube-apiserver-error.yaml,monitoring-prometheus-operator-alertmanager.rules.yaml,monitoring-prometheus-operator-kubernetes-resources.yaml,monitoring-prometheus-operator-prometheus-operator.yaml,monitoring-prometheus-operator-node-exporter.rules.yaml,monitoring-prometheus-operator-kube-apiserver.rules.yaml,monitoring-prometheus-operator-kubernetes-apps.yaml,monitoring-prometheus-operator-kubernetes-storage.yaml,monitoring-prometheus-operator-kube-prometheus-node-recording.rules.yaml,monitoring-prometheus-operator-node.rules.yaml,monitoring-prometheus-operator-kubernetes-system-apiserver.yaml,monitoring-prometheus-operator-node-exporter.yaml namespace=monitoring prometheus=prometheus-operator-prometheus

@onedr0p

onedr0p commented Feb 6, 2020

It looks like all namespaces are being monitored! derp

For example I have https://github.com/onedr0p/k3s-gitops/blob/master/deployments/rook-ceph/monitoring/prometheus-ceph-v14-rules.yaml

Would updating the first few lines to this make it pick up the rules?

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: rook-prometheus
    role: alert-rules
    app: prometheus-operator
    release: prometheus-operator
...

@yeya24
Contributor

yeya24 commented Feb 6, 2020

It looks like all namespaces are being monitored! derp

For example I have https://github.com/onedr0p/k3s-gitops/blob/master/deployments/rook-ceph/monitoring/prometheus-ceph-v14-rules.yaml

Would updating the first few lines to this make it pick up the rules?

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: rook-prometheus
    role: alert-rules
    app: prometheus-operator
    release: prometheus-operator
...

Yes, I think so. You could try it and see if it works.

@onedr0p

onedr0p commented Feb 6, 2020

And just like that, it started picking up the rules. Anyone else in this issue: make sure you have your labels set correctly on your PrometheusRule objects.

Thanks @yeya24 for all the help! I think this issue can be closed, as it really was a misconfiguration issue :)

@onedr0p

onedr0p commented Feb 6, 2020

More information for people who want to ignore labels and just have monitoring and rules for all namespaces/objects: it's important to set the following:

    prometheus:
      prometheusSpec:
        ruleSelector: {}
        ruleNamespaceSelector: {}
        ruleSelectorNilUsesHelmValues: false
        serviceMonitorSelector: {}
        serviceMonitorNamespaceSelector: {}
        serviceMonitorSelectorNilUsesHelmValues: false
        podMonitorSelector: {}
        podMonitorNamespaceSelector: {}
        podMonitorSelectorNilUsesHelmValues: false

@stale

stale bot commented Apr 7, 2020

This issue has been automatically marked as stale because it has not had any activity in last 60d. Thank you for your contributions.

@stale stale bot added the stale label Apr 7, 2020
@brancz
Contributor

brancz commented Apr 7, 2020

Looks like things work as expected. :)

Closing.

@brancz brancz closed this as completed Apr 7, 2020