Default ruleset does not ignore some Kubernetes containers #156
Thanks for the detailed report. The rule change seems sensible and I'll add it to the ruleset. I'll also look at the memory growth; I was seeing similar growth, although no memory leak under valgrind, in a different environment with lots of alerts. I'll investigate both adding some rate limiting on alerts and figuring out the source of the memory growth.
It's not a memory leak. It's a bizarre allocation pattern: from what I could observe, a moderate amount of syscalls in the falco process, just shuffling some strings into file handles and looking at /etc/timezone. That shouldn't affect resident memory much, but I'm not good at memory management in C++. For example, here's a screenshot during peak allocation; notice the I/O bottleneck in load. And later, trivial load on the same system.
I did find one memory leak in sysdig: draios/sysdig#693. These libraries are used by falco to format events into strings that go into notifications, so this would definitely be relevant for falco. Did you only observe memory growth when falco was sending a bunch of (probably false positive) events, for example the k8s-related events you pointed out? If so, then the sysdig issue is probably the primary cause.
Correct, I observed memory growth when falco was sending a bunch of events. Once I successfully filtered the kube-proxy/iptables warnings, memory usage was level and quite low (around 5 MB).
Ok great, I'll get that fixed in sysdig then. I'll keep this open until the sysdig issue is fixed and I also make the other changes (rule update, rate limiting for events).
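The "rate limiting for events" mentioned above could take several forms; one common approach is a token bucket, where alerts spend tokens that refill at a fixed rate. This is only an illustrative sketch of that idea (the class name, method names, and parameters are hypothetical, not falco's actual implementation):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical token-bucket rate limiter sketch. Tokens refill at
// `rate` per second, up to `burst`; each emitted alert costs one token.
class token_bucket {
public:
    token_bucket(double rate, double burst)
        : m_rate(rate), m_burst(burst), m_tokens(burst), m_last_ns(0) {}

    // Returns true if an alert may be emitted at time now_ns
    // (nanoseconds, monotonically increasing), false if it should
    // be dropped or deferred.
    bool claim(uint64_t now_ns) {
        double elapsed_s = (now_ns - m_last_ns) / 1e9;
        m_last_ns = now_ns;
        // Refill proportionally to elapsed time, capped at the burst size.
        m_tokens = std::min(m_burst, m_tokens + elapsed_s * m_rate);
        if (m_tokens < 1.0) {
            return false;
        }
        m_tokens -= 1.0;
        return true;
    }

private:
    double m_rate;
    double m_burst;
    double m_tokens;
    uint64_t m_last_ns;
};
```

With a small burst and a low steady rate, a flood of hundreds of thousands of identical warnings would be collapsed to a trickle, which would also sidestep the memory growth tied to formatting every event.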
Add google_containers/kube-proxy as a trusted image (can be run privileged, can mount sensitive filesystems). While our k8s deployments run kube-proxy via the hyperkube image, evidently it's sometimes run via its own image. This is one of the fixes for #156. Also update the output message for this rule.
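In falco's rules file, a change of this shape would add the image to a list that privileged-container rules exempt. The list name, entries, and rule below are an illustrative sketch, not the exact committed diff:

```yaml
# Sketch of the rule update; the actual list/macro names and entries
# in falco_rules.yaml may differ.
- list: trusted_images
  items: [gcr.io/google_containers/kube-proxy]

- rule: Run privileged container
  desc: Detect a container started with the privileged flag, excluding trusted images
  condition: container and container.privileged=true and not container.image in (trusted_images)
  output: "Privileged container started (user=%user.name command=%proc.cmdline image=%container.image)"
  priority: WARNING
```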
The three fixes needed are all merged, so I'll close this issue. We should have a new falco release in the next week or so with these and other changes.
u rule, thanks.
sysdig and falco were installed through the project's Debian repository on existing systems running Jessie and Kubernetes 1.4.
The falco process consumes 2.8 GB of resident memory after running on a cluster node for more than 3 hours. Syslog has many lines which look like the following.

There are hundreds of thousands of warnings like this in /var/log/syslog. I'm unsure why these warnings consume resident memory within the falco process. When I add the following change to /etc/falco_rules.yaml, all warnings disappear and there is no longer resident memory growth.

I'm not making a PR for this since it's a configuration default; I believe this decision lies on the side of the authors. It is worth noting this detail in the documentation, since the default configuration makes falco non-functional on a Kubernetes cluster of arbitrary size.