
filebeat DoS's kube-apiserver when no RBAC is configured #6960

Closed
jpds opened this issue Apr 26, 2018 · 6 comments
Labels
containers Related to containers use case Filebeat Filebeat

Comments


jpds commented Apr 26, 2018

We deployed Filebeat with the Helm chart and noticed from Prometheus that the CPU usage of both Filebeat and kube-apiserver was steadily increasing. Looking at the Filebeat logs, we found:

2018-04-26T15:44:49.335Z	ERROR	kubernetes/watcher.go:145	kubernetes: Watching API error kubernetes api: Failure 403 pods is forbidden: User "system:serviceaccount:testing:filebeat" cannot watch pods at the cluster scope
2018-04-26T15:44:49.335Z	INFO	kubernetes/watcher.go:140	kubernetes: Watching API for pod events

...occurring multiple times a second. Filebeat should try watching once and, on a 403, exponentially back off before trying again. Once we'd enabled RBAC, CPU usage went back down again.


exekias commented Apr 26, 2018

Hi @jpds,

Thank you for opening this issue. Could you please report your filebeat version? We fixed this by adding an exponential backoff here: #6504


exekias commented Apr 26, 2018

Also, could you please paste a link to the Helm chart you are using?

@exekias exekias added Filebeat Filebeat containers Related to containers use case labels Apr 26, 2018

jpds commented Apr 27, 2018

We're using 6.2.3, with this Helm chart: https://github.com/kubernetes/charts/tree/master/stable/filebeat


exekias commented Apr 27, 2018

Uhm, could you please paste a larger log? I'm checking the code and I think the backoff should be in place there.


jpds commented Apr 27, 2018

Hmm, we've since redeployed our pods so the logs are gone, but I do see this in the pod startup:

2018-04-27T12:48:25.582Z	INFO	instance/beat.go:468	Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018-04-27T12:48:25.584Z	INFO	instance/beat.go:475	Beat UUID: 4a8288b7-8746-41d4-bc03-2dcfc8280329
2018-04-27T12:48:25.584Z	INFO	instance/beat.go:213	Setup Beat: filebeat; Version: 6.2.3
2018-04-27T12:48:25.586Z	INFO	add_cloud_metadata/add_cloud_metadata.go:301	add_cloud_metadata: hosting provider type detected as ec2, metadata={"..."}
2018-04-27T12:48:25.587Z	INFO	pipeline/module.go:76	Beat name: pouring-rattlesnake-filebeat-c2dzz
2018-04-27T12:48:25.587Z	INFO	instance/beat.go:301	filebeat start running.
2018-04-27T12:48:25.587Z	INFO	registrar/registrar.go:108	Loading registrar data from /usr/share/filebeat/data/registry
2018-04-27T12:48:25.588Z	INFO	registrar/registrar.go:119	States Loaded from registrar: 48
2018-04-27T12:48:25.588Z	WARN	beater/filebeat.go:261	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-04-27T12:48:25.588Z	INFO	crawler/crawler.go:48	Loading Prospectors: 1
2018-04-27T12:48:25.588Z	WARN	[cfgwarn]	docker/prospector.go:25	EXPERIMENTAL: Docker prospector is enabled.
2018-04-27T12:48:25.589Z	INFO	kubernetes/util.go:51	Using pod name ...-filebeat-c2dzz and namespace production-logging
2018-04-27T12:48:25.589Z	INFO	[monitoring]	log/log.go:97	Starting metrics logging every 30s
2018-04-27T12:48:25.613Z	INFO	kubernetes/watcher.go:77	kubernetes: Performing a pod sync
2018-04-27T12:48:25.629Z	INFO	kubernetes/watcher.go:108	kubernetes: Pod sync done
2018-04-27T12:48:25.629Z	INFO	kubernetes/watcher.go:140	kubernetes: Watching API for pod events


exekias commented Dec 20, 2019

We moved to the official client-go library, which should handle this properly. Closing now.

@exekias exekias closed this as completed Dec 20, 2019