[autodiscover] Error creating runner from config: Can only start an input when all related states are finished #11834

Closed
sushsampath opened this issue Apr 16, 2019 · 91 comments
Labels: bug, containers, Filebeat, Team:Integrations, Team:Platforms

@sushsampath

sushsampath commented Apr 16, 2019

Hi,
I am using filebeat version 6.6.2 with autodiscover and the kubernetes provider type. After upgrading from 6.2.4 to 6.6.2, I am seeing this error for multiple docker containers.

ERROR [autodiscover] cfgfile/list.go:96 Error creating runner from config: Can only start an input when all related states are finished: {Id:3841919-66305 Finished:false Fileinfo:0xc42070c750 Source:/var/lib/docker/containers/a5330346622f0f10b4d85bac140b4bf69f3ead398a69ac0a66c1e3b742210393/a5330346622f0f10b4d85bac140b4bf69f3ead398a69ac0a66c1e3b742210393-json.log Offset:2860573 Timestamp:2019-04-15 19:28:25.567596091 +0000 UTC m=+557430.342740825 TTL:-1ns Type:docker Meta:map[] FileStateOS:3841919-66305}

And I see two entries in the registry file:
{"source":"/var/lib/docker/containers/a1824700c0568c120cd3b939c85ab75df696602f9741a215c74e3ce6b497e111/a1824700c0568c120cd3b939c85ab75df696602f9741a215c74e3ce6b497e111-json.log","offset":8655848,"timestamp":"2019-04-16T10:33:16.507862449Z","ttl":-1,"type":"docker","meta":null,"FileStateOS":{"inode":3841895,"device":66305}} {"source":"/var/lib/docker/containers/a1824700c0568c120cd3b939c85ab75df696602f9741a215c74e3ce6b497e111/a1824700c0568c120cd3b939c85ab75df696602f9741a215c74e3ce6b497e111-json.log","offset":3423960,"timestamp":"2019-04-16T10:37:01.366386839Z","ttl":-1,"type":"docker","meta":null,"FileStateOS":{"inode":3841901,"device":66305}}]

I don't see any solution other than setting the Finished flag to true or editing the registry file. Is there a permanent solution? Thanks in advance.

@yianL

yianL commented Apr 16, 2019

+1
I'm using the autodiscover feature in 6.2.4 and saw the same error as well. The perceived behavior was that filebeat would stop harvesting and forwarding logs from a container a few minutes after it was created. When I dug deeper, it seemed like it threw the Error creating runner from config error and stopped harvesting logs.

Similar issue reported in discuss:

@tiagoReichert

+1
Same issue here on docker.elastic.co/beats/filebeat:6.7.1 with the following config file:

logging.level: info

filebeat.autodiscover:
  providers:
    - type: docker
      cleanup_timeout: 5m
      templates:
        - condition:
            not.contains:
              docker.container.image: rancher
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              json.keys_under_root: true
              json.ignore_decoding_error: true
              json.add_error_key: false
              json.message_key: log
              multiline.pattern: '((^.+Exception: .+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+))|((^\{)|(^\s+.*)|(^\}))'
              multiline.match: after
processors:
- add_cloud_metadata: ~
- add_docker_metadata: ~
output.logstash:
  hosts: ["logstash"]
  ttl: 1m

@kaiyan-sheng kaiyan-sheng added the Filebeat Filebeat label Apr 23, 2019
@yianL

yianL commented Apr 26, 2019

Looked into this a bit more, and I'm guessing it has something to do with how events are emitted from kubernetes and how the kubernetes provider in beats handles them. In kubernetes, you usually get multiple (3 or more) UPDATE events from the time the pod is created until it becomes ready. Sometimes you even get multiple updates within a second. On the filebeat side, a single update event is translated into a STOP and a START, which will first try to stop the config and immediately create and apply a new one (https://github.com/elastic/beats/blob/6.7/libbeat/autodiscover/providers/kubernetes/kubernetes.go#L117-L118), and this is where I think things could go wrong. If the processing of events is asynchronous, then it is likely to run into race conditions, leaving 2 conflicting states of the same file in the registry.

Either debouncing the event stream or implementing a real update event instead of simulating one with stop-start should help.

@nickgronow

nickgronow commented May 9, 2019

I am having this same issue with my pod logs; filebeat is running as a daemonset. Running version 6.7.0.

@artushin

Also running into this with 6.7.0. Frequent logs with:

ERROR	kubernetes/watcher.go:258	kubernetes: Watching API error EOF

@exekias exekias added Team:Integrations Label for the Integrations team bug labels May 31, 2019
@exekias exekias self-assigned this May 31, 2019
@jcsorvasi

It seems like we're hitting this problem as well in our kubernetes cluster. Logs seem to go missing.
We'd love to help out and aid in debugging and have some time to spare to work on it too. Let me know how I can help @exekias!

@artushin

Btw, we're running 7.1.1 and the issue is still present. Restarting seems to solve the problem, so we hacked in a solution where filebeat's liveness probe monitors its own logs for the Error creating runner from config: Can only start an input when all related states are finished error string and restarts the pod.
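Roughly, that probe can be sketched like this (hypothetical; it assumes Filebeat is also configured to write its log to a file inside the container with logging.to_files: true, and the path and timings are illustrative):

# Sketch only: fail the probe once the error string shows up in Filebeat's own log file,
# so that kubelet restarts the pod.
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - '! grep -q "Can only start an input when all related states are finished" /usr/share/filebeat/logs/filebeat*'
  initialDelaySeconds: 60
  periodSeconds: 60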

@mritd

mritd commented Jun 14, 2019

+1
same issue on 6.4.2

@exekias
Contributor

exekias commented Jun 17, 2019

Thank you everyone for your feedback!

I was able to reproduce this and am currently trying to get it fixed.

@pawelprazak

I can see it happening in 7.0.1

@nevmerzhitsky

Still exists in 7.2.

@exekias exekias added the containers Related to containers use case label Jul 10, 2019
@marqc
Contributor

marqc commented Jul 16, 2019

@exekias I spent some time digging into this issue and there are multiple causes leading to this "problem". I see this error message every time a pod is stopped (not removed; e.g. when running a cronjob). I run filebeat from the master branch.

Firstly, for a good understanding, here is what this error message means and what its consequences are:
When it appears, it means that autodiscover attempted to create a new input, but the file was not yet marked as finished in the registry (probably some other input is still reading it). Autodiscover then retries creating the input every 10 seconds. So if you keep getting the error every 10s, you probably have something misconfigured. Otherwise you should be fine.

To get rid of the error message I see a few possibilities:

Option A:

Make the kubernetes provider aware of all events it has sent to the autodiscover event bus and skip sending events on "kubernetes pod update" when nothing important changes. The error can still appear in the logs, but it should be less frequent.

Option B:

Make reloading an Input an atomic, synchronized operation, which would require:

  • changing libbeat/cfgfile/list to perform runner.Stop synchronously
  • changing filebeat/harvester/registry to perform harvester.Stop synchronously
  • somehow making sure the Finished status is propagated to the registry (which is also done in an async way via the outlet channel) before filebeat/input/log/input::Stop() returns control to the start-new-Input operation

All these changes may have a significant impact on the performance of normal filebeat operations. I just tried this approach and realized I may have gone too far.

Option C:

Make an API for reconfiguring an Input "on the fly" and send a "reload" event from the kubernetes provider on each pod update event. It should still fall back to the stop/start strategy when reload is not possible (e.g. a changed input type). This will probably affect all existing Input implementations.

Option D:

Change the log level for this from Error to Warn and pretend that everything is fine ;)

@odacremolbap
Contributor

I'm not able to reproduce this one.
Have already tried different loads and filebeat configurations.

I'd appreciate someone here providing some info on what operational pattern I need to follow.

@marqc
Contributor

marqc commented Aug 2, 2019

@odacremolbap You can try generating lots of pod update events: start pods with multiple containers and readiness/liveness checks, and eventually perform some manual actions on pods (e.g. patch condition statuses, as readiness gates do). Or try running some short-lived pods (e.g. a cronjob that prints something to stdout and exits).

I see it quite often in my kube cluster. The example below is for a cronjob working as described above.

2019-08-02T11:00:11.171+0200    INFO    log/input.go:148        Configured paths: [/var/lib/docker/containers/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1/*-json.log]
2019-08-02T11:00:11.172+0200    INFO    log/harvester.go:253    Harvester started for file: /var/lib/docker/containers/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1-json.log
2019-08-02T11:00:21.213+0200    ERROR   [autodiscover]  cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:141702787-64773 Finished:false Fileinfo:0xc000e0f040 Source:/var/lib/docker/containers/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1-json.log Offset:91 Timestamp:2019-08-02 11:00:21.175030244 +0200 CEST m=+147.647589059 TTL:-1ns Type:container Meta:map[] FileStateOS:141702787-64773}
2019-08-02T11:00:21.213+0200    INFO    log/harvester.go:274    Reader was closed: /var/lib/docker/containers/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1-json.log. Closing.
2019-08-02T11:00:24.527+0200    INFO    log/input.go:148        Configured paths: [/var/lib/docker/containers/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1/*-json.log]
2019-08-02T11:00:24.530+0200    INFO    log/harvester.go:253    Harvester started for file: /var/lib/docker/containers/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1/dab0c765a18c5cc44b049be0852500499610ce2c6789b7ddab2e00568e8193f1-json.log
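
For reference, a minimal CronJob along these lines (a hypothetical sketch; all names and the schedule are illustrative) is enough to produce the pattern above:

# Sketch of a short-lived workload: one log line, then the container exits,
# so its json.log file is created and the container stops within seconds.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: short-lived-logger
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: logger
              image: alpine:latest
              command: ["/bin/sh", "-c", "echo short-lived pod log line"]
          restartPolicy: OnFailure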

@odacremolbap
Contributor

thanks @marqc

tried the cronjobs, and patching pods ... no success so far.
will continue trying.

@artushin

artushin commented Aug 2, 2019

@odacremolbap What version of Kubernetes are you running? Seeing the issue here on 1.12.7

@jeremykerr-sp

Seeing the issue in docker.elastic.co/beats/filebeat:7.1.1

@ivan046

ivan046 commented Aug 16, 2019

Still exists in 7.2. :(

@chrisbloemker

I am running into the same issue with filebeat 7.2 & 7.3 running as a standalone container on a swarm host.

@yogeek

yogeek commented Aug 21, 2020

@jsoriano thank you for your help.
I wanted to test your proposal on my real configuration (the configuration I copied above was simplified to avoid useless complexity), which includes multiple conditions like this:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      cleanup_timeout: 120s
      templates:
        ## Infra logs
        - condition:
          - and:
            - has_fields: ['kubernetes.container.id'] 
            - or:
              - contains:
                  kubernetes.namespace: ns1
              - contains:
                  kubernetes.namespace: ns2
              - and:
                - contains:
                    kubernetes.namespace: ns3
                - equals:
                    kubernetes.container.name: c3
              - contains:
                  kubernetes.namespace: ns4
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines

but this does not seem to be a valid config...
Can you please point me towards a valid config with this kind of multiple conditions?

@jsoriano
Member

@yogeek good catch, my configuration used conditions, but it should be condition, I have updated my comment.
In your case, the condition is not a list, so it should be:

        - condition:
            and:

Instead of:

        - condition:
          - and:

When you start having complex conditions it is a signal that you might benefit from using hints-based autodiscover. Among other things, it allows you to define different configurations (or disable them) per namespace in the namespace annotations.
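A minimal hints-based setup could look roughly like this (a sketch; the path assumes the standard container log location):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

Individual pods or namespaces can then override or disable collection through co.elastic.logs/* annotations instead of long condition blocks.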

If you continue having problems with this configuration, please start a new topic in https://discuss.elastic.co/ so we don't mix the conversation with the problem in this issue 🙂

@yogeek

yogeek commented Aug 23, 2020

thank you @jsoriano! Seems to work without error now 👍

@fzyzcjy

fzyzcjy commented Aug 26, 2020

My whole stack is on 7.9.0, using the elastic operator for k8s, and the error messages still exist. But the logs do not seem to be lost. So does this mean we should just ignore this ERROR message?

@jsoriano
Member

My whole stack is on 7.9.0, using the elastic operator for k8s, and the error messages still exist. But the logs do not seem to be lost. So does this mean we should just ignore this ERROR message?

Yes, in principle you can ignore this error. There is an open issue to improve logging in this case and discard unneeded error messages: #20568

@nerddelphi

nerddelphi commented Aug 26, 2020

@jsoriano I have a weird issue related to that error. Randomly, Filebeat stops collecting logs from pods after printing Error creating runner from config..., even though the Filebeat logs say it is starting new Container inputs and new harvesters.

I'm running Filebeat 7.9.0. I upgraded to the latest version, but that behavior has existed since 7.6.1 (the first time I saw it).

My environment:

  • GKE v1.15.12-gke.2 (preemptible nodes)
  • Filebeat running as Daemonsets
  • logging.level: debug
    logging.selectors: ["kubernetes","autodiscover"]

My autodiscover config:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      labels.dedot: true
      annotations.dedot: true
      cleanup_timeout: 0
      scope: node
      templates:
        - condition:
            and:
              - has_fields: ['kubernetes.container.id']
              - equals:
                  kubernetes.labels.elastic_logs/json: "true"
          config:
            - type: container
              stream: stdout
              paths:
                - "/var/lib/docker/containers/${data.kubernetes.container.id}/*.log"
              encoding: utf-8
              scan_frequency: 1s
              publisher_pipeline.disable_host: true
              processors:
                - decode_json_fields:
                    process_array: true
                    max_depth: 10
                    target: ""
                    overwrite_keys: true
                    fields: ["message"]
            - type: container
              stream: stderr
              paths:
                - "/var/lib/docker/containers/${data.kubernetes.container.id}/*.log"
              encoding: utf-8
              scan_frequency: 1s
              publisher_pipeline.disable_host: true
              multiline.pattern: '^[[:space:]]+(\bat\b|\.{3})|^Caused by:'
              multiline.negate: false
              multiline.match: after
              processors:
                - decode_json_fields:
                    process_array: true
                    max_depth: 10
                    target: ""
                    overwrite_keys: true
                    fields: ["message"]
        - condition:
            and:
              - has_fields: ['kubernetes.container.id']
              - equals:
                  kubernetes.namespace: haproxy
          config:
            - module: haproxy
              log:
                input:
                  type: container
                  paths:
                    - "/var/lib/docker/containers/${data.kubernetes.container.id}/*.log"
                  encoding: utf-8
                  scan_frequency: 1s
                  publisher_pipeline.disable_host: true

The debug logs around the error:

2020-08-26T07:05:05.384Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:195        Generated config: {
  "encoding": "utf-8",
  "paths": [
    "/var/lib/docker/containers/628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b/*.log"
  ],
  "processors": [
    {
      "decode_json_fields": {
        "fields": [
          "message"
        ],
        "max_depth": 10,
        "overwrite_keys": true,
        "process_array": true,
        "target": ""
      }
    }
  ],
  "publisher_pipeline": {
    "disable_host": true
  },
  "scan_frequency": "1s",
  "stream": "stdout",
  "type": "container"
}
2020-08-26T07:05:05.384Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:195        Generated config: {
  "encoding": "utf-8",
  "multiline": {
    "match": "after",
    "negate": false,
    "pattern": "^[[:space:]]+(\\bat\\b|\\.{3})|^Caused by:"
  },
  "paths": [
    "/var/lib/docker/containers/628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b/*.log"
  ],
  "processors": [
    {
      "decode_json_fields": {
        "fields": [
          "message"
        ],
        "max_depth": 10,
        "overwrite_keys": true,
        "process_array": true,
        "target": ""
      }
    }
  ],
  "publisher_pipeline": {
    "disable_host": true
  },
  "scan_frequency": "1s",
  "stream": "stderr",
  "type": "container"
}
2020-08-26T07:05:05.384Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:259        Got a meta field in the event
2020-08-26T07:05:05.385Z        INFO    log/input.go:157        Configured paths: [/var/lib/docker/containers/628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b/*.log]
2020-08-26T07:05:05.385Z        DEBUG   [autodiscover]  cfgfile/list.go:63      Starting reload procedure, current runners: 12
2020-08-26T07:05:05.385Z        DEBUG   [autodiscover]  cfgfile/list.go:81      Start list: 2, Stop list: 0
2020-08-26T07:05:05.386Z        ERROR   [autodiscover]  cfgfile/list.go:95      Error creating runner from config: Can only start an input when all related states are finished: {Id: ea745ab688be85a9-native::1308836-2049, Finished: false, Fileinfo: &{628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b-json.log 99 416 {29223485 63734022237 0x608b880} {2049 1308836 1 33184 0 0 0 0 99 4096 8 {1598425436 865214956} {1598425437 29223485} {1598425437 29223485} [0 0 0]}}, Source: /var/lib/docker/containers/628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b/628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b-json.log, Offset: 42864, Timestamp: 2020-08-26 07:05:03.816329242 +0000 UTC m=+187.623544858, TTL: -1ns, Type: container, Meta: map[stream:stdout], FileStateOS: 1308836-2049}
2020-08-26T07:05:05.421Z        INFO    log/input.go:157        Configured paths: [/var/lib/docker/containers/628a5ca3a5e5037056ebabc9a016b13d4a6ebb16c44a6216beff387b65ff5c4b/*.log]
2020-08-26T07:05:05.421Z        DEBUG   [autodiscover]  cfgfile/list.go:100     Starting runner: input [type=container]

@jsoriano
Member

This problem should be solved in 7.9.0, so I am closing this.

Some errors are still being logged when they shouldn't be; we have created the following issues as follow-ups:

@sgreszcz

@jsoriano and @ChrsMark I'm still not seeing filebeat 7.9.3 ship any logs from my k8s clusters. I do see logs coming from my filebeat 7.9.3 docker collectors on other servers. All the filebeats are sending logs to an elastic 7.9.3 server.

I'm using the recommended filebeat configuration above from @ChrsMark. I also deployed the test logging pod. Filebeat seems to be finding the container/pod logs, but I get a strange error: 2020-10-27T13:02:09.145Z DEBUG [autodiscover] template/config.go:156 Configuration template cannot be resolved: field 'data.kubernetes.container.id' not available in event or environment accessing 'paths' (source:'/etc/filebeat.yml')

2020-10-27T13:02:09.145Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:195        Generated config: {
  "paths": [
    "/var/log/containers/*894a21c98d8cee4cd61e4dc2c4a281221ae9e915adead904f236185b2f7f5468.log"
  ],
  "type": "container"
}
2020-10-27T13:02:09.145Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:259        Got a meta field in the event
2020-10-27T13:02:09.145Z        DEBUG   [autodiscover]  template/config.go:156  Configuration template cannot be resolved: field 'data.kubernetes.container.id' not available in event or environment accessing 'paths' (source:'/etc/filebeat.yml')
2020-10-27T13:02:09.146Z        INFO    log/input.go:157        Configured paths: [/var/log/containers/*894a21c98d8cee4cd61e4dc2c4a281221ae9e915adead904f236185b2f7f5468.log]
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  cfgfile/list.go:63      Starting reload procedure, current runners: 0
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  cfgfile/list.go:81      Start list: 1, Stop list: 0
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  template/config.go:156  Configuration template cannot be resolved: field 'data.kubernetes.container.id' not available in event or environment accessing 'paths' (source:'/etc/filebeat.yml')
2020-10-27T13:02:09.146Z        INFO    log/input.go:157        Configured paths: [/var/log/containers/*894a21c98d8cee4cd61e4dc2c4a281221ae9e915adead904f236185b2f7f5468.log]
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  cfgfile/list.go:100     Starting runner: input [type=container]
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:174        Got a start event: map[config:[] host:10.209.72.50 id:20a7e6f1-df54-4c72-84de-d1c67895ba1b kubernetes:{"annotations":{"kubernetes":{"io/psp":"speaker"},"linkerd":{"io/inject":"enabled"},"prometheus":{"io/port":"7472","io/scrape":"true"}},"labels":{"app":"metallb","component":"speaker","controller-revision-hash":"787547c99f","pod-template-generation":"1"},"namespace":"metallb-system","node":{"name":"k8s-bdlk-001"},"pod":{"name":"speaker-q2kjr","uid":"20a7e6f1-df54-4c72-84de-d1c67895ba1b"}} meta:{"kubernetes":{"labels":{"app":"metallb","component":"speaker","controller-revision-hash":"787547c99f","pod-template-generation":"1"},"namespace":"metallb-system","node":{"name":"k8s-bdlk-001"},"pod":{"name":"speaker-q2kjr","uid":"20a7e6f1-df54-4c72-84de-d1c67895ba1b"}}} ports:{"monitoring":7472} provider:1e5d4196-e211-4f96-af59-29cee8fd164b start:true]
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:259        Got a meta field in the event
2020-10-27T13:02:09.146Z        DEBUG   [autodiscover]  autodiscover/autodiscover.go:174        Got a start event: map[config:[0xc0006328a0] host:10.209.72.50 id:20a7e6f1-df54-4c72-84de-d1c67895ba1b.speaker kubernetes:{"annotations":{"kubernetes":{"io/psp":"speaker"},"linkerd":{"io/inject":"enabled"},"prometheus":{"io/port":"7472","io/scrape":"true"}},"container":{"id":"e8a250df60c9334ba2c09917a586b1d3a0655f4799bade7cf0359485270151c1","image":"metallb/speaker:v0.8.2","name":"speaker","runtime":"docker"},"labels":{"app":"metallb","component":"speaker","controller-revision-hash":"787547c99f","pod-template-generation":"1"},"namespace":"metallb-system","node":{"name":"k8s-bdlk-001"},"pod":{"name":"speaker-q2kjr","uid":"20a7e6f1-df54-4c72-84de-d1c67895ba1b"}} meta:{"container":{"id":"e8a250df60c9334ba2c09917a586b1d3a0655f4799bade7cf0359485270151c1","image":{"name":"metallb/speaker:v0.8.2"},"runtime":"docker"},"kubernetes":{"container":{"image":"metallb/speaker:v0.8.2","name":"speaker"},"labels":{"app":"metallb","component":"speaker","controller-revision-hash":"787547c99f","pod-template-generation":"1"},"namespace":"metallb-system","node":{"name":"k8s-bdlk-001"},"pod":{"name":"speaker-q2kjr","uid":"20a7e6f1-df54-4c72-84de-d1c67895ba1b"}}} port:7472 provider:1e5d4196-e211-4f96-af59-29cee8fd164b start:true]

Configuration yaml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
      - add_host_metadata:
      - add_kubernetes_metadata:

    monitoring:
      enabled: true

    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOST}'
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.3
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
          "-d", "autodiscover",
          "-d", "kubernetes",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "hostname"
        - name: KIBANA_HOST
          value: "hostname"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 400Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

@ChrsMark
Member

@sgreszcz I cannot reproduce it locally. Also, you are adding the add_kubernetes_metadata processor, which is not needed since autodiscover already adds metadata by default.

Here is the manifest I'm using:
filebeat-kubernetes.7.9.yaml.txt

Can you try with the above one and share your result?

@sgreszcz

@sgreszcz I cannot reproduce it locally. Also, you are adding the add_kubernetes_metadata processor, which is not needed since autodiscover already adds metadata by default.

Here is the manifest I'm using:
filebeat-kubernetes.7.9.yaml.txt

Can you try with the above one and share your result?

@ChrsMark thank you so much for sharing your manifest! I'm still not sure what exactly the diff is between yours and the one that I had built from the filebeat github example and the examples above in this issue.

It was driving me crazy for a few days, so I really appreciate this, and I can confirm that if you just apply this manifest as-is and only change the elasticsearch hostname, all will work.

@sgreszcz

Weird, the only difference I can see in the new manifest is the addition of a volume and volumeMount (/var/lib/docker/containers), but we are not even referring to it in the filebeat.yml configmap.

paths:
  - /var/log/containers/*${data.kubernetes.container.id}.log

The only config that was removed in the new manifest was this, so maybe these things were breaking the proper k8s log discovery:

hints.enabled: true

processors: 
  - add_host_metadata:
  - add_kubernetes_metadata: 

monitoring:
  enabled: true

@marqc
Contributor

marqc commented Oct 27, 2020

Weird, the only difference I can see in the new manifest is the addition of a volume and volumeMount (/var/lib/docker/containers), but we are not even referring to it in the filebeat.yml configmap.

paths:
  - /var/log/containers/*${data.kubernetes.container.id}.log

If you are using docker as the container engine, then /var/log/containers and /var/log/pods only contain symlinks to the logs stored in /var/lib/docker, so that path has to be mounted into your filebeat container as well.
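
In a daemonset manifest like the one above, that amounts to something along these lines (a sketch; the host path assumes the default Docker data root):

# In the filebeat container spec, mount the real log directory read-only:
        volumeMounts:
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
# And declare the matching hostPath volume in the pod spec:
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers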

@sgreszcz

@laviua

laviua commented Nov 1, 2020

The same issue with docker:
"Error creating runner from config: Can only start an input when all related states are finished"
filebeat 7.9.3

@rika

rika commented Nov 6, 2020

Hello, I was getting the same error on Filebeat 7.9.3, with the following config:

    filebeat:
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}

I thought it was something with Filebeat. When I was testing stuff I changed my config to:

    filebeat:
      inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
    - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

And the error changed to:

2020-11-06T13:46:41.711Z	ERROR	[elasticsearch]	elasticsearch/client.go:224	failed to perform any bulk index operations: 503 Service Unavailable: {"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/2/no master];"}],"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/2/no master];"},"status":503}

So I think the problem was the Elasticsearch resources and not the Filebeat config.

@MrLuje

MrLuje commented Nov 20, 2020

7.9.0 has been released and it should fix this issue. The errors can still appear in logs but autodiscover should end up with a proper state and no logs should be lost.

@jsoriano Using Filebeat 7.9.3, I am still losing logs with the following CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app: test-log
    app.kubernetes.io/name: test-log
    app.kubernetes.io/version: "1.0"
  name: test-log
  namespace: default
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: test-log
            app.kubernetes.io/instance: test-log
            app.kubernetes.io/name: test-log
        spec:
          containers:
          - command:
            - /bin/sh
            - -c
            - |
              echo '{ "Date": "2020-11-19 14:42:23", "Level": "Info", "Message": "Test LOG" }' > dev/stdout;
            image: alpine:latest
            imagePullPolicy: IfNotPresent
            name: test-log
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
  startingDeadlineSeconds: 100
  successfulJobsHistoryLimit: 3
  suspend: false

A workaround for me is to change the container's command to delay the exit:

          - command:
            - /bin/sh
            - -c
            - |
              echo '{ "Date": "2020-11-19 14:42:23", "Level": "Info", "Message": "Test LOG" }' > dev/stdout;
++            sleep 10;


@jsoriano
Member

@MrLuje what is your filebeat configuration? Autodiscover providers have a cleanup_timeout option, which defaults to 60s, to continue reading logs for that long after pods stop.
Do you see anything in the logs?
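For reference, the option goes on the provider itself, e.g. (illustrative value):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      # Keep harvesting a pod's log files for a while after the pod terminates
      # (sketch; the default is 60s).
      cleanup_timeout: 5m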

@MrLuje

MrLuje commented Nov 20, 2020

filebeatConfig:
  filebeat.yml: |
    prospectors:
      # Mounted `filebeat-prospectors` configmap:
      path: $${path.config}/prospectors.d/*.yml
      # Reload prospectors configs as they change:
      reload.enabled: false
    modules:
      path: $${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
    fields:
      tag: ${from}
    filebeat.modules:
      - module: nginx
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition.and:
              - not.equals:
                  kubernetes.labels.stack: "dotnet"
              - not.equals:
                  kubernetes.labels.stack: "js"
              config:
                - type: container
                  paths:
                    - /var/lib/docker/containers/$${data.kubernetes.container.id}/*-json.log
                  labels.dedot: true
                  annotations.dedot: true
                  in_cluster: true
                  include_annotations: ["*"]
                  hints.enabled: true
                  fields:
                    filebeat_config: default
                  fields_under_root: true

    processors:
      - add_cloud_metadata:
          providers: ["gcp"]
      - add_locale: ~
      - drop_fields:
          fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.type", "agent.version", "agent.name", "ecs.version", "input.type", "log.offset", "stream"]
      - drop_event:
          when:
            contains:
              kubernetes.pod.name: 'oauth2-proxy'
    output.logstash:
      timeout: 120
      hosts: ["${hosts}"]

Not totally sure about the logs, but the container id for one of the missing logs is f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502


2020-11-20T15:10:39.136Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9-json.log. Closing.
2020-11-20T15:10:39.139Z    ERROR    [autodiscover]    cfgfile/list.go:95    Error creating runner from config: Can only start an input when all related states are finished: {Id: native::1702340-2049, Finished: false, Fileinfo: &{56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9-json.log 15340 416 {486387471 63741481821 0x419f6c0} {2049 1702340 1 33184 0 0 0 0 15340 4096 32 {1605885009 609255325} {1605885021 486387471} {1605885021 486387471} [0 0 0]}}, Source: /var/lib/docker/containers/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9-json.log, Offset: 15340, Timestamp: 2020-11-20 15:10:29.410292167 +0000 UTC m=+10732.861478993, TTL: -1ns, Type: container, Meta: map[], FileStateOS: 1702340-2049}
2020-11-20T15:10:49.142Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9/*-json.log]
2020-11-20T15:10:49.142Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9-json.log
2020-11-20T15:10:49.857Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/7cfd9b92058080c3df8fac7563cbcbbfd0c70be62834a3679b23df072149a358/7cfd9b92058080c3df8fac7563cbcbbfd0c70be62834a3679b23df072149a358-json.log
2020-11-20T15:11:03.883Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:03.883Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/57d8d3511d0c9c82c1b4abafa8ec49e68b76bc23fefa0b98e4da4fe0fbdcbba4/57d8d3511d0c9c82c1b4abafa8ec49e68b76bc23fefa0b98e4da4fe0fbdcbba4-json.log. Closing.
2020-11-20T15:11:06.583Z    INFO    [monitoring]    log/log.go:145    Non-zero metrics in the last 30s    {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":27340,"time":{"ms":88}},"total":{"ticks":182750,"time":{"ms":603},"value":182750},"user":{"ticks":155410,"time":{"ms":515}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":27},"info":{"ephemeral_id":"aadb5678-d17a-402e-a9f8-ebe45d3f6617","uptime":{"ms":10770026}},"memstats":{"gc_next":55836768,"memory_alloc":43846000,"memory_total":26225323152},"runtime":{"goroutines":3961}},"filebeat":{"events":{"active":-6,"added":51,"done":57},"harvester":{"closed":3,"files":{"0254b33f-3c66-4614-91f2-e78b09f402ac":{"last_event_published_time":"2020-11-20T15:10:49.857Z","last_event_timestamp":"2020-11-20T15:10:45.887Z","name":"/var/lib/docker/containers/7cfd9b92058080c3df8fac7563cbcbbfd0c70be62834a3679b23df072149a358/7cfd9b92058080c3df8fac7563cbcbbfd0c70be62834a3679b23df072149a358-json.log","read_offset":896,"size":896,"start_time":"2020-11-20T15:10:49.857Z"},"058c90e8-ea92-42b1-b92d-1c3e3445a51e":{"last_event_published_time":"2020-11-20T15:11:02.286Z","last_event_timestamp":"2020-11-20T15:11:01.096Z","read_offset":776,"size":776},"0e221bf7-1631-4350-988b-7e3288f8c7a7":{"last_event_published_time":"2020-11-20T15:10:55.527Z","last_event_timestamp":"2020-11-20T15:10:49.857Z","read_offset":8778,"size":43869},"596b6c58-6aeb-462e-a5e7-57d3e35b84b4":{"last_event_published_time":"2020-11-20T15:11:06.125Z","last_event_timestamp":"2020-11-20T15:11:05.162Z","read_offset":774,"size":774},"6cc3a98a-6d34-4b9d-8bcc-2745d24e7456":{"size":980},"8d6a0f12-b4be-42f8-96f2-4cc7d51d44ba":{"last_event_published_time":"2020-11-20T15:11:00.251Z","last_event_timestamp":"2020-11-20T15:10:59.245Z","read_offset":773,"size":771},"957ee15f-fd26-4319-8fce-07ab80f97628":{"last_event_published_time":"2020-11-20T15:10:49.143Z","last_event_timestamp":"2020-11-20T15:10:43.166Z","name":"/var/lib/docker/containers/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9-json.log","read_offset":17218,"size":17218,"start_time":"2020-11-20T15:10:49.142Z"},"a6796561-d50c-4185-b602-0cbc34131544":{"size":2171},"e5728abd-c40c-4351-8365-bb74b262582c":{"size":461}},"open_files":16,"running":15,"started":3}},"libbeat":{"config":{"module":{"running":35,"starts":1,"stops":3}},"output":{"events":{"acked":50,"batches":15,"total":50},"read":{"bytes":90},"write":{"bytes":21276}},"pipeline":{"clients":38,"events":{"active":1,"filtered":7,"published":44,"total":51},"queue":{"acked":50}}},"registrar":{"states":{"current":518,"update":57},"writes":{"success":26,"total":26}},"system":{"load":{"1":1.81,"15":1.46,"5":1.59,"norm":{"1":0.4525,"15":0.365,"5":0.3975}}}}}}
2020-11-20T15:11:09.065Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502/*-json.log]
2020-11-20T15:11:09.067Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502/*-json.log]
2020-11-20T15:11:09.069Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502/f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502-json.log
2020-11-20T15:11:09.070Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/*-json.log]
2020-11-20T15:11:09.082Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/*-json.log]
2020-11-20T15:11:09.082Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log
2020-11-20T15:11:14.085Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/93e56b46e9654c4ba034bd945b409ed7a58b8932056f584c3ac1f162c7bc149f/93e56b46e9654c4ba034bd945b409ed7a58b8932056f584c3ac1f162c7bc149f-json.log
2020-11-20T15:11:14.759Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:14.759Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/b5c3b359d77caae27769751ebe9c88173c1d293e676615b6d21deacdb4f49315/b5c3b359d77caae27769751ebe9c88173c1d293e676615b6d21deacdb4f49315-json.log. Closing.
2020-11-20T15:11:18.674Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:18.674Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502/f9b726a9140eb60bdcc0a22a450a83999c76589785c7da5430e4536da4ccc502-json.log. Closing.
2020-11-20T15:11:18.676Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:18.676Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log. Closing.
2020-11-20T15:11:18.678Z    ERROR    [autodiscover]    cfgfile/list.go:95    Error creating runner from config: Can only start an input when all related states are finished: {Id: native::1841178-2049, Finished: false, Fileinfo: &{bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log 13458 416 {996908212 63741481868 0x419f6c0} {2049 1841178 1 33184 0 0 0 0 13458 4096 32 {1605885067 783792935} {1605885068 996908212} {1605885068 996908212} [0 0 0]}}, Source: /var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log, Offset: 14770, Timestamp: 2020-11-20 15:11:18.127062393 +0000 UTC m=+10781.578249214, TTL: -1ns, Type: container, Meta: map[], FileStateOS: 1841178-2049}
2020-11-20T15:11:19.404Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/28c87973a3e81d5f880b2b433440d3af24026c447f8360168a8a01a61686687e/28c87973a3e81d5f880b2b433440d3af24026c447f8360168a8a01a61686687e-json.log
2020-11-20T15:11:19.422Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/*-json.log]
2020-11-20T15:11:19.423Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log
2020-11-20T15:11:19.605Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:19.605Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log. Closing.
2020-11-20T15:11:19.606Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/*-json.log]
2020-11-20T15:11:19.608Z    INFO    log/input.go:157    Configured paths: [/var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/*-json.log]
2020-11-20T15:11:19.609Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log
2020-11-20T15:11:33.858Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:33.858Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/778adccb91930e664f48065d1885a947ef697b7fe274b67c689e68ce9eea2b11/778adccb91930e664f48065d1885a947ef697b7fe274b67c689e68ce9eea2b11-json.log. Closing.
2020-11-20T15:11:34.135Z    INFO    log/harvester.go:299    Harvester started for file: /var/lib/docker/containers/8a557ccbf7d687ba27f683271807f440272de6f624164e625385f1b4fc8eb88d/8a557ccbf7d687ba27f683271807f440272de6f624164e625385f1b4fc8eb88d-json.log
2020-11-20T15:11:36.582Z    INFO    [monitoring]    log/log.go:145    Non-zero metrics in the last 30s    {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":27440,"time":{"ms":103}},"total":{"ticks":183620,"time":{"ms":874},"value":183620},"user":{"ticks":156180,"time":{"ms":771}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":29},"info":{"ephemeral_id":"aadb5678-d17a-402e-a9f8-ebe45d3f6617","uptime":{"ms":10800025}},"memstats":{"gc_next":57496864,"memory_alloc":54708008,"memory_total":26345120312},"runtime":{"goroutines":3978}},"filebeat":{"events":{"active":-1,"added":183,"done":184},"harvester":{"closed":5,"files":{"058c90e8-ea92-42b1-b92d-1c3e3445a51e":{"last_event_published_time":"2020-11-20T15:11:32.296Z","last_event_timestamp":"2020-11-20T15:11:31.097Z","read_offset":773,"size":773},"0e221bf7-1631-4350-988b-7e3288f8c7a7":{"last_event_published_time":"2020-11-20T15:11:35.532Z","last_event_timestamp":"2020-11-20T15:11:34.135Z","read_offset":11042,"size":6733},"2a81d063-d947-4f72-962c-391db15929c7":{"last_event_published_time":"2020-11-20T15:11:19.609Z","last_event_timestamp":"2020-11-20T15:11:19.590Z","name":"/var/lib/docker/containers/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1/bfcb2d69b8e54efa308f45483901b84296f9647a35fb2c95d8e8b95e4ccf60d1-json.log","read_offset":15252,"size":15252,"start_time":"2020-11-20T15:11:19.609Z"},"3a043b5c-b7c1-456d-8cc5-babc0c10ab70":{"last_event_published_time":"2020-11-20T15:11:20.405Z","last_event_timestamp":"2020-11-20T15:11:19.586Z","name":"/var/lib/docker/containers/28c87973a3e81d5f880b2b433440d3af24026c447f8360168a8a01a61686687e/28c87973a3e81d5f880b2b433440d3af24026c447f8360168a8a01a61686687e-json.log","read_offset":4631,"size":4449,"start_time":"2020-11-20T15:11:19.404Z"},"3b33682f-7c5f-4c58-9f8a-56bc4bc11403":{"last_event_published_time":"2020-11-20T15:11:34.136Z","last_event_timestamp":"2020-11-20T15:11:27.878Z","name":"/var/lib/docker/containers/8a557ccbf7d687ba27f683271807f440272de6f624164e625385f1b4fc8eb88d/8a557ccbf7d687ba27f683271807f440272de6f624164e625385f1b4fc8eb88d-json.log","read_offset":19553,"size":19553,"start_time":"2020-11-20T15:11:34.135Z"},"3d408cc3-0113-4c80-aa9a-ca19d5b64a66":{"last_event_published_time":"2020-11-20T15:11:14.085Z","last_event_timestamp":"2020-11-20T15:11:12.342Z","name":"/var/lib/docker/containers/93e56b46e9654c4ba034bd945b409ed7a58b8932056f584c3ac1f162c7bc149f/93e56b46e9654c4ba034bd945b409ed7a58b8932056f584c3ac1f162c7bc149f-json.log","read_offset":20687,"size":20687,"start_time":"2020-11-20T15:11:14.085Z"},"596b6c58-6aeb-462e-a5e7-57d3e35b84b4":{"last_event_published_time":"2020-11-20T15:11:36.130Z","last_event_timestamp":"2020-11-20T15:11:35.160Z","read_offset":774,"size":774},"8d6a0f12-b4be-42f8-96f2-4cc7d51d44ba":{"last_event_published_time":"2020-11-20T15:11:30.255Z","last_event_timestamp":"2020-11-20T15:11:29.247Z","read_offset":773,"size":772},"a6796561-d50c-4185-b602-0cbc34131544":{"last_event_published_time":"2020-11-20T15:11:19.603Z","last_event_timestamp":"2020-11-20T15:11:17.345Z","read_offset":6501,"size":4605}},"open_files":18,"running":17,"started":7}},"libbeat":{"config":{"module":{"running":34,"starts":4,"stops":5}},"output":{"events":{"acked":170,"batches":18,"total":170},"read":{"bytes":108},"write":{"bytes":39031}},"pipeline":{"clients":37,"events":{"active":0,"filtered":14,"published":169,"total":183},"queue":{"acked":170}}},"registrar":{"states":{"current":520,"update":184},"writes":{"success":43,"total":43}},"system":{"load":{"1":1.88,"15":1.47,"
5":1.63,"norm":{"1":0.47,"15":0.3675,"5":0.4075}}}}}}
2020-11-20T15:11:44.460Z    INFO    input/input.go:136    input ticker stopped
2020-11-20T15:11:44.460Z    INFO    log/harvester.go:326    Reader was closed: /var/lib/docker/containers/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9/56afe3eba7d76c14805cbcb8a1352701d79aa15d65874c9875a60b1545e979e9-json.log. Closing.

@jsoriano
Member

jsoriano commented Nov 23, 2020

Hey @MrLuje,

I could reproduce some issues with cronjobs, so I have created a separate issue linking to your comments: #22718

Thanks for reporting!

@jsoriano
Member

I am going to lock this issue as it is starting to become a single place to report different issues with filebeat and autodiscover.

If you find a problem with Filebeat and Autodiscover, please open a new topic in https://discuss.elastic.co/, and if a new problem is confirmed then open a new issue in github.

@elastic elastic locked as resolved and limited conversation to collaborators Nov 23, 2020