filebeat error and excessive memory usage with autodiscover #9078

Closed
bwunderlich824 opened this issue Nov 14, 2018 · 16 comments
Labels
bug, containers, Filebeat, Team:Integrations

Comments

@bwunderlich824

This is a new ticket based on the closed but ongoing issue noted here (#6503). Filebeat logs show several errors, and memory usage grows until the pod is shut down by kubernetes. I am using filebeat v6.4.3.

filebeat logs show the following errors:
2018-11-14T16:50:43.002Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF
2018-11-14T16:50:43.004Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error proto: wrong wireType = 6 for field ServiceAccountName

My configuration:

    setup.template.enabled: true
    setup.dashboards.enabled: false

    #Kubernetes AutoDiscover
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:

            #JSON LOGS
            - condition:
                equals:
                  kubernetes.labels.json_logs: "true"
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  json.keys_under_root: true
                  json.add_error_key: true
                  processors:
                    - add_kubernetes_metadata:
                        in_cluster: true
                    - drop_fields:
                        fields: ["OriginContentSize","Overhead","BackendURL.Fragment","BackendURL.Scheme","ClientUsername","source","request_Cookie","request_Proxy-Authenticate","downstream_X-Authentication-Jwt","downstream_Set-Cookie","origin_Set-Cookie","downstream_Cache-Control"]

            #Non-JSON logs
            - condition:
                not:
                  equals:
                    kubernetes.labels.json_logs: "true"
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  processors:
                    - add_kubernetes_metadata:
                        in_cluster: true

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['<ELASTICSEARCH>:443']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    tags: ["<ENV>"]
    logging.level: warning
    logging.json: false
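
For context, the first template above only fires for pods carrying the `json_logs: "true"` label; a minimal (hypothetical) pod manifest that would be picked up by the JSON branch looks like this, with all names being placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-json-app              # hypothetical name
      labels:
        json_logs: "true"            # matched by the first autodiscover condition
    spec:
      containers:
        - name: app
          image: example.org/my-json-app:latest   # hypothetical image

Pods without that label fall through to the second (non-JSON) template.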
@exekias
Contributor

exekias commented Nov 26, 2018

Thanks for opening this @bwunderlich824, could you share a broader log? I'm particularly interested in other messages, errors and timings.

Best regards

@exekias added the bug, Filebeat, containers, and Team:Integrations labels on Nov 26, 2018
@bwunderlich824
Author

Not a lot else in there besides what was posted above.
logs.txt

@gamer22026

gamer22026 commented Dec 4, 2018

Same issue running filebeat v6.5.1. Everything is OK when the pods are first restarted, but after a while the same issue appears on every filebeat pod in the daemonset. k8s version is 1.8.10. I'll enable debug and post when it happens again.

2018-12-03T23:00:46.938Z	WARN	[cfgwarn]	hints/logs.go:56	BETA: The hints builder is beta
2018-12-03T23:38:02.665Z	ERROR	kubernetes/watcher.go:254	kubernetes: Watching API error EOF
2018-12-04T00:34:28.544Z	ERROR	kubernetes/watcher.go:254	kubernetes: Watching API error EOF
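
In case it helps others doing the same, a minimal sketch of the logging settings for that debug run; the selector names are taken from the log prefixes above and are an assumption on my part:

    # Keep debug output focused on the components involved in this issue.
    logging.level: debug
    logging.selectors: ["autodiscover", "kubernetes"]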

@gamer22026

This seems to be the sequence every time the Watching API error hits:

2018-12-04T22:06:43.754Z        ERROR   kubernetes/watcher.go:254       kubernetes: Watching API error EOF
2018-12-04T22:06:43.754Z        INFO    kubernetes/watcher.go:238       kubernetes: Watching API for resource events
2018-12-04T22:06:43.758Z        INFO    input/input.go:149      input ticker stopped
2018-12-04T22:06:43.758Z        INFO    input/input.go:167      Stopping Input: 7990041433892801910
2018-12-04T22:06:43.758Z        INFO    log/harvester.go:275    Reader was closed: /var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268-json.log. Closing.
2018-12-04T22:06:43.758Z        ERROR   [autodiscover]  cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:7870214-51713 Finished:false Fileinfo:0xc42045b1e0 Source:/var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268-json.log Offset:230605 Timestamp:2018-12-04 22:06:20.083047508 +0000 UTC m=+4508.847580429 TTL:-1ns Type:docker Meta:map[] FileStateOS:7870214-51713}
2018-12-04T22:06:43.759Z        INFO    log/input.go:138        Configured paths: [/var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/*.log]
2018-12-04T22:06:43.759Z        INFO    input/input.go:114      Starting input of type: docker; ID: 7990041433892801910
2018-12-04T22:06:53.760Z        INFO    log/harvester.go:254    Harvester started for file: /var/lib/docker/containers/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268/8e7cba8ad5631f38c62c2fdbeac1f73b4987ff69f4a7a3521b2d0ead35b5b268-json.log
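
One knob that might be worth trying while waiting for a proper fix is `close_timeout` on the docker input (untested here, and assuming the docker input honours the same close_* options as the log input): it forces harvesters to close, and therefore their states to finish, after a fixed time, which is what the "states are finished" error above is waiting for. It does not address the underlying watcher EOF.

    - type: docker
      containers.ids:
        - "${data.kubernetes.container.id}"
      # Assumption: behaves like the log input's close_timeout option.
      close_timeout: 5m   # force harvesters to close, finishing their states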

@bwunderlich824
Author

Issue still persists with filebeat 6.5.3.

@bwunderlich824
Author

Issue still persists with filebeat 6.5.4.

@exekias
Contributor

exekias commented Jan 8, 2019

Hi, we did a major refactor for 6.6: #8851, and I think it should help here. Any chance you can give the snapshot image a try?

I pushed it to exekias/filebeat:6.6-snapshot. Please take into account that this is an as-yet unreleased version and is not meant for production.
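
For anyone else wanting to try it, a hypothetical excerpt of the DaemonSet spec with just the image swapped to the snapshot (the container name and the rest of the manifest are assumptions, adjust to your own deployment):

    spec:
      template:
        spec:
          containers:
            - name: filebeat                          # assumed container name
              image: exekias/filebeat:6.6-snapshot    # unreleased snapshot, not for production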

@bwunderlich824
Author

I pushed that version out to our development environment and I've had the same issue: the kubernetes pod just sucks up more memory until it reboots. Same logs as before.

logs.txt

@ganeshv02

We are having this problem with filebeat 6.4.2

2019-01-31T03:04:11.614Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF

@cnelson

cnelson commented Feb 6, 2019

I see a similar issue on my kubernetes clusters: filebeat will continue to use memory until it is exhausted, logging the messages described by @gamer22026.

It's a fairly linear leak; I don't see any huge steps in usage:

[graph: filebeat memory usage growing linearly over time]
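
One way to get numbers rather than just a container-level graph is the beats HTTP stats endpoint (experimental in the 6.x series, so treat these settings as a sketch): it exposes the Go allocator counters under beat.memstats, which helps tell a heap leak apart from other memory growth.

    # Local monitoring endpoint; query http://localhost:5066/stats inside the pod.
    http.enabled: true
    http.host: localhost
    http.port: 5066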

@ruflin
Member

ruflin commented Feb 7, 2019

@cnelson Which version of filebeat are you using?

@bwunderlich824
Author

bwunderlich824 commented Feb 11, 2019

I just tried with the latest 6.6.0 and have the same issues. The pods are each allowed up to 768MB of memory, an enormous amount, and they still run out. If it helps, I'm a paying Elasticsearch hosted customer and this issue has been going on for months. Is there anything else I can do to get you guys to look more into this? It's getting really old.

[screenshot, 2019-02-11: filebeat pod memory usage]
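
For context, the 768MB ceiling is just the memory limit from our pod spec; a hypothetical excerpt of the DaemonSet container resources that produces the restart behaviour (kubernetes OOM-kills the container once it crosses the limit):

    resources:
      requests:
        memory: 256Mi        # hypothetical request
      limits:
        memory: 768Mi        # container is killed and restarted once usage exceeds this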

@kovetskiy

The issue persists with 7.0.0-alpha2 too.

@JCMais

JCMais commented Mar 9, 2019

I'm having the same issue, with the same messages being logged by filebeat, plus some other ones.

I've posted more details on this other issue here: #9302 (comment)

@bwunderlich824
Author

Upgraded to v7.0.1 and still having the same issues.

@exekias
Contributor

exekias commented Jun 3, 2019

This should now be fixed; more details can be found here: #9302

@exekias exekias closed this as completed Jun 3, 2019