ignore k8s readiness and liveness exec probes #310
Hello, I deployed Falco in Kubernetes, and I also see multiple alerts for these probes. Here are two examples:

```json
{
  "container.id": "host",
  "evt.arg.uid": "root",
  "evt.time": 1551295143913756823,
  "k8s.ns.name": null,
  "k8s.pod.name": null,
  "proc.cmdline": "runc:[2:INIT] init",
  "proc.pname": null,
  "user.name": null,
  "user.uid": 4294967295
}
```

and

```json
{
  "container.id": "host",
  "evt.arg.uid": "root",
  "evt.time": 1551294467843706541,
  "k8s.ns.name": null,
  "k8s.pod.name": null,
  "proc.cmdline": "<NA>",
  "proc.pname": null,
  "user.name": null,
  "user.uid": 4294967295
}
```

Like @agilgur5 wrote, I could add a rule, but whitelisting every probe individually doesn't scale. What do we need to remove those two false positives without adding a macro that is too permissive? Thanks for the help!
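For illustration, a narrowly scoped exception could match on the specific fields visible in these events rather than whitelisting broadly. This is only a sketch: `known_host_probe_cmdlines` is a hypothetical per-cluster list, not a stock Falco macro, and it assumes the noisy events always come from the host with a stable command line.

```yaml
# Hypothetical sketch: scope the exception to the host-level runc init
# events shown above, instead of whitelisting whole users or namespaces.
# "known_host_probe_cmdlines" is a placeholder list you would fill in.
- list: known_host_probe_cmdlines
  items: ["runc:[2:INIT] init"]

- macro: host_probe_setuid_noise
  condition: (container.id = host and proc.cmdline in (known_host_probe_cmdlines))
```

The obvious downside, as noted above, is that this must be maintained per command line and still lets any host process with a matching cmdline through.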
@mstemm is this fixed in draios/sysdig#1320?

The sysdig change will help with identifying the threads associated with these probes, but we'll need to make some rules changes as well.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

this is clearly not resolved....

That was the stale activity bot automatically closing the issue. I do have pending changes on the sysdig side that will help identify threads that are a part of liveness/readiness probes, so we should have some updates soon. I'll reopen the issue.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

The sysdig changes have been merged, just need to prioritize the falco rules changes now.

What kind of rules need to be changed @mstemm ? Can you elaborate a bit?
We'll need to use the new filtercheck fields in the default rules.
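Later comments in this thread show the new fields in use; one way to wire them into an existing default rule is an appended exception in a user rules file. This is a sketch, assuming a sysdig/Falco build that ships the probe fields and a Falco version that supports `append: true`:

```yaml
# Sketch: exclude probe processes from the shipped "Non sudo setuid" rule.
# Only the macro and the appended "and not ..." clause are additions here;
# the base rule itself comes with Falco's default ruleset.
- macro: is_container_probe
  condition: >
    (proc.is_container_healthcheck=true or
     proc.is_container_liveness_probe=true or
     proc.is_container_readiness_probe=true)

- rule: Non sudo setuid
  condition: and not is_container_probe
  append: true
```

Appending keeps the default rule's logic intact while adding the probe exception, rather than copying and maintaining the whole condition locally.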
@mstemm thanks for the update! I opened a PR for the rule update. I wasn't sure about updating "Terminal Shell in Container" as I don't believe I've experienced a false positive on that one before, and this issue was specifically about "Non sudo setuid", though I created a macro to check for probes.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Arbitrary usage of the stale bot, especially when the last comment is from the issue author, is very frustrating...
I'm sorry @agilgur5 we are working on getting a smarter one :D

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Checking in to see if this has been resolved or not.
Hi there, is there any news on whether it's possible to filter out k8s liveness or readiness probes? I'm using the rule below:

```yaml
- macro: is_container_probe
  condition: >
    (proc.is_container_healthcheck=true or
     proc.is_container_liveness_probe=true or
     proc.is_container_readiness_probe=true)

- rule: A program run in a k8s container
  desc: An event will trigger every time a program runs in a k8s container
  condition: >
    spawned_process
    and container
    and proc.pname in (shell_binaries)
    and k8s.pod.id exists
    and user.name exists
    and not is_container_probe
  output: >
    "A program run in a k8s container (user=%user.name %container.info parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty
    healthcheck=%container.healthcheck liveness_probe=%container.liveness_probe readiness_probe=%container.readiness_probe
    probes=%proc.is_container_healthcheck,%proc.is_container_liveness_probe,%proc.is_container_readiness_probe)"
  priority: WARNING
  tags: [users, container]
```

Unfortunately I can't filter out liveness or readiness probes. This is the kind of probe caught by Falco:

```yaml
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - put_a_command_here p1 p2
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 1
```

or

```yaml
readinessProbe:
  exec:
    command:
    - put_a_command_here p1 p2
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 1
```

So the rule always sends alerts. We are trying to track situations where a person with kubectl access, or possibly a pod with a loose and forgotten serviceaccount, executes commands or opens shells into another container. I've also checked the file at https://github.com/draios/sysdig/blob/dev/userspace/libsinsp/filterchecks.cpp#L5999 to find these filterchecks. I'm using the official Falco chart and version 0.27.0. Any help is appreciated.
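One possible stopgap while the probe fields don't match (a shell-wrapped probe spawns a child that may not carry the probe flags) is to whitelist the probe command lines themselves, checking both the process and its parent. This is a sketch under that assumption; `my_probe_commands` is a hypothetical per-cluster list, not part of any default ruleset:

```yaml
# Fallback sketch: match the probe command lines directly.
# Tedious to maintain, as noted earlier in the thread, but narrowly scoped.
# "my_probe_commands" is a placeholder you would populate from your
# deployments' readinessProbe/livenessProbe exec commands.
- list: my_probe_commands
  items: ["put_a_command_here p1 p2"]

- macro: user_known_probe_commands
  condition: >
    (proc.cmdline in (my_probe_commands) or
     proc.pcmdline in (my_probe_commands))
```

Adding `and not user_known_probe_commands` to the rule's condition would then suppress alerts for both the direct-exec and `sh -c`-wrapped probe forms shown above.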
Falco detects liveness and readiness exec probes as "Non sudo setuid", and individually whitelisting all the probes as rules is quite tedious. Since probes get called very frequently, this produces a ton of alerts. It would be great if Falco could by default ignore commands launched from readiness and liveness probes. I'm not sure how to write a rule that would do this based on the existing output without being too permissive (the user is `<NA>`, the Pod is different per probe, and the command is different per probe).