filebeat disk usage exception #6497

Closed
timchenxiaoyu opened this issue Mar 5, 2018 · 5 comments
Comments

@timchenxiaoyu

  • Version: 6.0.0 alpha
  • Operating System: CentOS 7.1
  • Steps to Reproduce:

    [root@slave-219 ~]# lsof | grep delete | wc -l
    15963
    [root@slave-219 ~]# docker restart 6c093d9fbbae   # the filebeat container
    6c093d9fbbae
    [root@slave-219 ~]# lsof | grep delete | wc -l
    696
@ewgRa
Contributor

ewgRa commented Mar 5, 2018

@timchenxiaoyu This looks like well-known and expected behaviour. For example, #2395 has some explanation, and there is more information in other already closed issues related to the same problem.

Are you aware of the close_* options? https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html#close-options. Could they help in your case?
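
For reference, a minimal sketch of how those close_* options can be set on a log prospector; the path and values below are only illustrative, not taken from this issue:

    filebeat.prospectors:
      - type: log
        paths:
          - /var/lib/docker/containers/*/*-json.log
        # close the harvester once the file has seen no new lines for this long,
        # so handles on deleted files are released and disk space can be reclaimed
        close_inactive: 5m
        # hard upper bound on how long a harvester may stay open
        close_timeout: 1h
        # close the harvester as soon as the file is removed from disk
        close_removed: true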

What does your config look like?

@timchenxiaoyu
Author

filebeat.prospectors:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*-json.log
      - /var/log/filelog/containers/*/*/*/*.log
    close_timeout: 5h
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"
      - add_fields:
          fields:
            log: '{message}'
      - decode_json_fields:
          when:
            regexp:
              message: "{*}"
          fields: ["message"]
          overwrite_keys: true
          target: ""
      - drop_fields:
          fields: ["docker.container.labels.annotation.io.kubernetes.container.terminationMessagePath", "docker.container.labels.annotation.io.kubernetes.container.hash", "docker.container.labels.annotation.io.kubernetes.container.terminationMessagePolicy", "docker.container.labels.annotation.io.kubernetes.pod.terminationGracePeriod", "beat.version", "docker.container.labels.annotation.io.kubernetes.container.ports", "docker.container.labels.io.kubernetes.container.terminationMessagePath", "docker.container.labels.io.kubernetes.container.restartCount", "docker.container.labels.io.kubernetes.container.ports", "docker.container.labels.io.kubernetes.container.hash", "docker.container.labels.io.kubernetes.pod.terminationGracePeriod", "docker.container.labels.annotation.io.kubernetes.container.restartCount", "message"]
      - parse_level:
          levels: ["fatal", "error", "warn", "info", "debug"]
          field: "log"

logging.level: info
setup.template.enabled: true
setup.template.name: "filebeat-%{+yyyy.MM.dd}"
setup.template.pattern: "filebeat-*"
setup.template.fields: "${path.config}/fields.yml"
setup.template.overwrite: true
setup.template.settings:
  index:
    analysis:
      analyzer:
        enncloud_analyzer:
          filter: ["standard", "lowercase", "stop"]
          char_filter: ["my_filter"]
          type: custom
          tokenizer: standard
      char_filter:
        my_filter:
          type: mapping
          mappings: ["-=>_"]

output:
  elasticsearch:
    hosts: ["es.xxx.xxx.cn:9200"]
    protocol: http
    worker: 1
    bulk_max_size: 10000
    index: "filebeat-%{+yyyy.MM.dd}"

@timchenxiaoyu
Author

close_timeout: 5h was only added yesterday.

@ewgRa
Contributor

ewgRa commented Mar 6, 2018

@timchenxiaoyu So, did that help you?

I also think 5 hours is too long, but that is up to you. You can also play with the close_inactive option.
The right value for these settings depends on your logs, how often they are updated, and so on. Read the documentation; it has good explanations.
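
Purely as an illustration (hypothetical values, not a recommendation from this thread), a tighter combination than the 5h above might look like:

    filebeat.prospectors:
      - type: log
        paths:
          - /var/lib/docker/containers/*/*-json.log
        close_inactive: 10m   # release handles on files with no new lines for 10 minutes
        close_timeout: 1h     # instead of 5h, so deleted files are freed sooner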

@ruflin
Contributor

ruflin commented Mar 12, 2018

I'm closing this thread as we try to keep all questions on discuss: https://discuss.elastic.co/c/beats/filebeat. Please post there with all the details if the issue persists.

@ewgRa Appreciate your help.

@ruflin ruflin closed this as completed Mar 12, 2018