This repository has been archived by the owner on May 16, 2023. It is now read-only.

Metricbeat deployment doesn't start after upgrade using custom metricbeatConfig #623

Closed
jmlrt opened this issue May 15, 2020 · 3 comments · Fixed by #624
Labels: bug, metricbeat

Comments

@jmlrt
Member

jmlrt commented May 15, 2020

Chart version: 7.7.0

Kubernetes version: 1.16.6

Kubernetes provider: Docker for Mac

Helm Version: 2.16.7

Output of helm get release:
REVISION: 1
RELEASED: Fri May 15 19:52:54 2020
CHART: metricbeat-7.6.2
USER-SUPPLIED VALUES:
metricbeatConfig:
  kube-state-metrics-metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      host: "${NODE_NAME}"
      hosts: ["${NODE_NAME}:10255"]
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    - module: kubernetes
      enabled: true
      metricsets:
        - event
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5
        by_memory: 5
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'

COMPUTED VALUES:
affinity: {}
clusterRoleRules:
- apiGroups:
  - extensions
  - apps
  - ""
  resources:
  - namespaces
  - pods
  - events
  - deployments
  - nodes
  - replicasets
  verbs:
  - get
  - list
  - watch
envFrom: []
extraContainers: ""
extraEnvs: []
extraInitContainers: ""
extraVolumeMounts: []
extraVolumes: []
fullnameOverride: ""
hostPathRoot: /var/lib
image: docker.elastic.co/beats/metricbeat
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.6.2
kube-state-metrics:
  affinity: {}
  collectors:
    certificatesigningrequests: true
    configmaps: true
    cronjobs: true
    daemonsets: true
    deployments: true
    endpoints: true
    horizontalpodautoscalers: true
    ingresses: true
    jobs: true
    limitranges: true
    namespaces: true
    nodes: true
    persistentvolumeclaims: true
    persistentvolumes: true
    poddisruptionbudgets: true
    pods: true
    replicasets: true
    replicationcontrollers: true
    resourcequotas: true
    secrets: true
    services: true
    statefulsets: true
    storageclasses: true
    verticalpodautoscalers: false
  customLabels: {}
  global: {}
  hostNetwork: false
  image:
    pullPolicy: IfNotPresent
    repository: quay.io/coreos/kube-state-metrics
    tag: v1.8.0
  nodeSelector: {}
  podAnnotations: {}
  podSecurityPolicy:
    annotations: {}
    enabled: false
  prometheus:
    monitor:
      additionalLabels: {}
      enabled: false
      honorLabels: false
      namespace: ""
  prometheusScrape: true
  rbac:
    create: true
  replicas: 1
  securityContext:
    enabled: true
    fsGroup: 65534
    runAsUser: 65534
  service:
    annotations: {}
    loadBalancerIP: ""
    nodePort: 0
    port: 8080
    type: ClusterIP
  serviceAccount:
    create: true
    imagePullSecrets: []
  tolerations: []
labels: {}
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      #!/usr/bin/env bash -e
      curl --fail 127.0.0.1:5066
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
managedServiceAccount: true
metricbeatConfig:
  kube-state-metrics-metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      host: "${NODE_NAME}"
      hosts: ["${NODE_NAME}:10255"]
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    - module: kubernetes
      enabled: true
      metricsets:
        - event
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5
        by_memory: 5
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext:
  privileged: false
  runAsUser: 0
priorityClassName: ""
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      #!/usr/bin/env bash -e
      metricbeat test output
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 200Mi
  requests:
    cpu: 100m
    memory: 100Mi
secretMounts: []
serviceAccount: ""
terminationGracePeriod: 30
tolerations: []
updateStrategy: RollingUpdate

HOOKS:
MANIFEST:

---
# Source: metricbeat/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-metricbeat-config
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-7.6.2"
    heritage: "Tiller"
    release: "metricbeat"
data:
  kube-state-metrics-metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
    
  metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      host: "${NODE_NAME}"
      hosts: ["${NODE_NAME}:10255"]
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    - module: kubernetes
      enabled: true
      metricsets:
        - event
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5
        by_memory: 5
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
---
# Source: metricbeat/charts/kube-state-metrics/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: kube-state-metrics
    helm.sh/chart: kube-state-metrics-2.4.1
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/instance: metricbeat
  name: metricbeat-kube-state-metrics
imagePullSecrets:
  []
---
# Source: metricbeat/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat-metricbeat
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-7.6.2"
    heritage: "Tiller"
    release: "metricbeat"
---
# Source: metricbeat/charts/kube-state-metrics/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: kube-state-metrics
    helm.sh/chart: kube-state-metrics-2.4.1
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/instance: metricbeat
  name: metricbeat-kube-state-metrics
rules:

- apiGroups: ["certificates.k8s.io"]
  resources:
  - certificatesigningrequests
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["list", "watch"]

- apiGroups: ["batch"]
  resources:
  - cronjobs
  verbs: ["list", "watch"]

- apiGroups: ["extensions", "apps"]
  resources:
  - daemonsets
  verbs: ["list", "watch"]

- apiGroups: ["extensions", "apps"]
  resources:
  - deployments
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - endpoints
  verbs: ["list", "watch"]

- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]

- apiGroups: ["extensions", "networking.k8s.io"]
  resources:
  - ingresses
  verbs: ["list", "watch"]

- apiGroups: ["batch"]
  resources:
  - jobs
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - limitranges
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - namespaces
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - nodes
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - persistentvolumeclaims
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - persistentvolumes
  verbs: ["list", "watch"]

- apiGroups: ["policy"]
  resources:
    - poddisruptionbudgets
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - pods
  verbs: ["list", "watch"]

- apiGroups: ["extensions", "apps"]
  resources:
  - replicasets
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - replicationcontrollers
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - resourcequotas
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - secrets
  verbs: ["list", "watch"]

- apiGroups: [""]
  resources:
  - services
  verbs: ["list", "watch"]

- apiGroups: ["apps"]
  resources:
  - statefulsets
  verbs: ["list", "watch"]

- apiGroups: ["storage.k8s.io"]
  resources:
    - storageclasses
  verbs: ["list", "watch"]
---
# Source: metricbeat/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: metricbeat-metricbeat-cluster-role
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-7.6.2"
    heritage: "Tiller"
    release: "metricbeat"
rules: 
  - apiGroups:
    - extensions
    - apps
    - ""
    resources:
    - namespaces
    - pods
    - events
    - deployments
    - nodes
    - replicasets
    verbs:
    - get
    - list
    - watch
---
# Source: metricbeat/charts/kube-state-metrics/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: kube-state-metrics
    helm.sh/chart: kube-state-metrics-2.4.1
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/instance: metricbeat
  name: metricbeat-kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metricbeat-kube-state-metrics
subjects:
- kind: ServiceAccount
  name: metricbeat-kube-state-metrics
  namespace: default
---
# Source: metricbeat/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metricbeat-metricbeat-cluster-role-binding
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-7.6.2"
    heritage: "Tiller"
    release: "metricbeat"
roleRef:
  kind: ClusterRole
  name: metricbeat-metricbeat-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: metricbeat-metricbeat
  namespace: default
---
# Source: metricbeat/charts/kube-state-metrics/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: metricbeat-kube-state-metrics
  labels:
    app.kubernetes.io/name: kube-state-metrics
    helm.sh/chart: "kube-state-metrics-2.4.1"
    app.kubernetes.io/instance: "metricbeat"
    app.kubernetes.io/managed-by: "Tiller"
  annotations:
    prometheus.io/scrape: 'true'
spec:
  type: "ClusterIP"
  ports:
  - name: "http"
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/instance: metricbeat
---
# Source: metricbeat/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat-metricbeat
  labels:
    app: "metricbeat-metricbeat"
    chart: "metricbeat-7.6.2"
    heritage: "Tiller"
    release: "metricbeat"
spec:
  selector:
    matchLabels:
      app: "metricbeat-metricbeat"
      release: "metricbeat"
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        
        configChecksum: 8cd9efa3fc205a49c9522b0960c0b4c4d4a2c1760e77c1778f7c1ff83781ff8
      name: "metricbeat-metricbeat"
      labels:
        app: "metricbeat-metricbeat"
        chart: "metricbeat-7.6.2"
        heritage: "Tiller"
        release: "metricbeat"
    spec:
      serviceAccountName: metricbeat-metricbeat
      terminationGracePeriodSeconds: 30
      volumes:
      - name: metricbeat-config
        configMap:
          defaultMode: 0600
          name: metricbeat-metricbeat-config
      - name: data
        hostPath:
          path: /var/lib/metricbeat-metricbeat-default-data
          type: DirectoryOrCreate
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varrundockersock
        hostPath:
          path: /var/run/docker.sock
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      containers:
      - name: "metricbeat"
        image: "docker.elastic.co/beats/metricbeat:7.6.2"
        imagePullPolicy: "IfNotPresent"
        args:
        - "-e"
        - "-E"
        - "http.enabled=true"
        - "--system.hostfs=/hostfs"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              curl --fail 127.0.0.1:5066
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
          
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              metricbeat test output
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
          
        resources:
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
          
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          privileged: false
          runAsUser: 0
          
        volumeMounts:
        - name: metricbeat-config
          mountPath: /usr/share/metricbeat/kube-state-metrics-metricbeat.yml
          readOnly: true
          subPath: kube-state-metrics-metricbeat.yml
        - name: metricbeat-config
          mountPath: /usr/share/metricbeat/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: data
          mountPath: /usr/share/metricbeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        # Necessary when using autodiscovery; avoid mounting it otherwise
        # See: https://www.elastic.co/guide/en/beats/metricbeat/master/configuration-autodiscover.html
        - name: varrundockersock
          mountPath: /var/run/docker.sock
          readOnly: true
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
---
# Source: metricbeat/charts/kube-state-metrics/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat-kube-state-metrics
  labels:
    app.kubernetes.io/name: kube-state-metrics
    helm.sh/chart: "kube-state-metrics-2.4.1"
    app.kubernetes.io/instance: "metricbeat"
    app.kubernetes.io/managed-by: "Tiller"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/instance: "metricbeat"
    spec:
      hostNetwork: false
      serviceAccountName: metricbeat-kube-state-metrics
      securityContext:
        fsGroup: 65534
        runAsUser: 65534
      containers:
      - name: kube-state-metrics
        args:
        - --collectors=certificatesigningrequests
        - --collectors=configmaps
        - --collectors=cronjobs
        - --collectors=daemonsets
        - --collectors=deployments
        - --collectors=endpoints
        - --collectors=horizontalpodautoscalers
        - --collectors=ingresses
        - --collectors=jobs
        - --collectors=limitranges
        - --collectors=namespaces
        - --collectors=nodes
        - --collectors=persistentvolumeclaims
        - --collectors=persistentvolumes
        - --collectors=poddisruptionbudgets
        - --collectors=pods
        - --collectors=replicasets
        - --collectors=replicationcontrollers
        - --collectors=resourcequotas
        - --collectors=secrets
        - --collectors=services
        - --collectors=statefulsets
        - --collectors=storageclasses
        imagePullPolicy: IfNotPresent
        image: "quay.io/coreos/kube-state-metrics:v1.8.0"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
---
# Source: metricbeat/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 'metricbeat-metricbeat-metrics'
  labels:
    app: 'metricbeat-metricbeat-metrics'
    chart: 'metricbeat-7.6.2'
    heritage: 'Tiller'
    release: 'metricbeat'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 'metricbeat-metricbeat-metrics'
      chart: 'metricbeat-7.6.2'
      heritage: 'Tiller'
      release: 'metricbeat'
  template:
    metadata:
      annotations:
        
        configChecksum: 8cd9efa3fc205a49c9522b0960c0b4c4d4a2c1760e77c1778f7c1ff83781ff8
      labels:
        app: 'metricbeat-metricbeat-metrics'
        chart: 'metricbeat-7.6.2'
        heritage: 'Tiller'
        release: 'metricbeat'
    spec:
      serviceAccountName: metricbeat-metricbeat
      terminationGracePeriodSeconds: 30
      volumes:
      - name: metricbeat-config
        configMap:
          defaultMode: 0600
          name: metricbeat-metricbeat-config
      containers:
      - name: "metricbeat"
        image: "docker.elastic.co/beats/metricbeat:7.6.2"
        imagePullPolicy: "IfNotPresent"
        args:
          - "-c"
          - "/usr/share/metricbeat/kube-state-metrics-metricbeat.yml"
          - "-e"
          - "-E"
          - "http.enabled=true"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              curl --fail 127.0.0.1:5066
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
          
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              metricbeat test output
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
          
        resources:
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
          
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBE_STATE_METRICS_HOSTS
          value: "$(METRICBEAT_KUBE_STATE_METRICS_SERVICE_HOST):$(METRICBEAT_KUBE_STATE_METRICS_SERVICE_PORT_HTTP)"
        securityContext:
          privileged: false
          runAsUser: 0
          
        volumeMounts:
        - name: metricbeat-config
          mountPath: /usr/share/metricbeat/kube-state-metrics-metricbeat.yml
          readOnly: true
          subPath: kube-state-metrics-metricbeat.yml
        - name: metricbeat-config
          mountPath: /usr/share/metricbeat/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml

Describe the bug: When upgrading Metricbeat while using custom metricbeatConfig values for metricbeat.yml and kube-state-metrics-metricbeat.yml, the Metricbeat deployment doesn't start.

Steps to reproduce:

  1. Create the following values.yaml:
metricbeatConfig:
  metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      metricsets:
        - container
        - node
        - pod
        - system
        - volume
      period: 10s
      host: "${NODE_NAME}"
      hosts: ["${NODE_NAME}:10255"]
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    - module: kubernetes
      enabled: true
      metricsets:
        - event
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5
        by_memory: 5
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  kube-state-metrics-metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
  2. helm install --name metricbeat elastic/metricbeat --version 7.6.2 --values values.yaml
  3. helm upgrade metricbeat metricbeat --force --values values.yaml
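
A quick way to see what step 3 produced is to compare the env blocks of the rendered workloads. This is only a diagnostic sketch, assuming the release name metricbeat used above; the Deployment name is taken from the manifest shown earlier:

# Show which rendered workloads define NODE_NAME
helm get manifest metricbeat | grep -B 2 -A 4 'name: NODE_NAME'

# List the env var names of the metrics Deployment
kubectl get deployment metricbeat-metricbeat-metrics \
  -o jsonpath='{.spec.template.spec.containers[0].env[*].name}'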

Expected behavior: Upgrade is successful

Provide logs and/or server output (if relevant):

$ kubectl get pod -l chart=metricbeat-7.7.0
NAME                                             READY   STATUS    RESTARTS   AGE
metricbeat-metricbeat-gd64b                      1/1     Running   0          2m57s
metricbeat-metricbeat-metrics-658974f5c7-cgtwz   0/1     Error     5          3m5s

$ kubectl logs metricbeat-metricbeat-metrics-658974f5c7-cgtwz
...
2020-05-15T18:01:02.617Z        INFO    instance/beat.go:411    metricbeat stopped.
2020-05-15T18:01:02.617Z        ERROR   instance/beat.go:932    Exiting: missing field accessing 'metricbeat.modules.0.hosts.0' (source:'metricbeat.yml')
Exiting: missing field accessing 'metricbeat.modules.0.hosts.0' (source:'metricbeat.yml')
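
The failing field, metricbeat.modules.0.hosts.0, is the hosts: ["${NODE_NAME}:10255"] entry in metricbeat.yml. In the rendered manifest above, NODE_NAME is only defined for the DaemonSet, not for the metricbeat-metricbeat-metrics Deployment, so the variable cannot be expanded there. For comparison, an excerpt of the two env blocks copied from the manifest above:

# DaemonSet (metricbeat-metricbeat): NODE_NAME is present
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName

# Deployment (metricbeat-metricbeat-metrics): NODE_NAME is missing
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: KUBE_STATE_METRICS_HOSTS
  value: "$(METRICBEAT_KUBE_STATE_METRICS_SERVICE_HOST):$(METRICBEAT_KUBE_STATE_METRICS_SERVICE_PORT_HTTP)"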

Any additional context:

@pachalk

pachalk commented Feb 19, 2021

I am getting the same issue with the metricbeat 7.10.2 chart version (repo: https://helm.elastic.co) while installing with Helm 3 in a Kubernetes cluster:
Error: Exiting: missing field accessing 'metricbeat.modules.0.hosts.0' (source:'metricbeat.yml')

Does anyone know a solution for this?

@pachalk

pachalk commented Feb 22, 2021

I figured it out: the NODE_NAME environment variable was missing:

- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName

@evanstucker-hates-2fa

More specifically, for the tired/confused among us, to solve this problem you need to add this to your values.yaml:

extraEnvs:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

Should this be a default value in the chart? Are many people having issues with this?
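
If it helps anyone else, here is a sketch of applying the workaround and checking the result. It assumes the metricbeat release name and the labels from the 7.6.2 manifest above; adjust for your chart version:

# Re-deploy with the extraEnvs addition in values.yaml
helm upgrade metricbeat elastic/metricbeat --values values.yaml

# The metrics Deployment pod should now reach Running
kubectl get pods -l app=metricbeat-metricbeat-metrics

# And its logs should no longer show the missing-field error
kubectl logs deployment/metricbeat-metricbeat-metrics | tail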
