Running as kubernetes daemonset failing #225
If a daemonset specifies a command, this overrides the entrypoint. In falco's case, the entrypoint handles the details of loading the kernel driver, so specifying a command accidentally prevents the driver from being loaded. This happens to work if you had a previously loaded sysdig_probe driver lying around. The fix is to specify args instead; the driver will then be loaded via the entrypoint. This fixes #225.
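The distinction above maps directly onto how Kubernetes overrides container images: `command:` replaces the image's ENTRYPOINT, while `args:` only replaces CMD and is passed to the entrypoint. A minimal sketch of the corrected DaemonSet spec (image name and flags here are illustrative, not the project's official manifest):

```yaml
# Sketch of a falco DaemonSet; image and args are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: falco
spec:
  selector:
    matchLabels:
      app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
      containers:
        - name: falco
          image: sysdig/falco:latest
          securityContext:
            privileged: true
          # Wrong: `command:` would replace the image entrypoint, so the
          # driver-loading logic in the entrypoint would never run.
          # command: ["/usr/bin/falco"]
          #
          # Right: `args:` are appended after the entrypoint, which loads
          # the kernel driver first and then starts falco.
          args: ["/usr/bin/falco"]
```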
Thanks. I found the problem: I was using
I tried your fix on my own and am still seeing issues, though it does get further. It looks like the permissions on that bucket may be too locked down? When I try to just curl that location, I get an Access Denied error.
What's happening now is that when the falco container runs, it can't use dkms to build the kernel driver on the host. That's normal, because you're using CoreOS. As a backup, we provide pre-built versions of the kernel module for a variety of combinations of (sysdig version + CoreOS version). There's a longer description of this at the end of this blog post: https://sysdig.com/blog/coreos-sysdig-part-1-digging-into-coreos-environments/.

However, we don't have a pre-built copy for that particular combination, because the sysdig driver version is older than the CoreOS version: the CoreOS release using kernel 4.9.9 came out after we migrated from sysdig 0.13.0 to sysdig 0.14.0. Normally, that wouldn't be a problem and the solution would be to use a newer version of the driver. However, falco itself doesn't currently have its own standalone driver. Instead, it borrows the sysdig kernel module, and since we haven't done a falco release lately, it depends on a too-old sysdig kernel module.

I'm actually in the middle of making changes so falco has its own kernel module, so we don't have these kinds of kernel module dependencies between sysdig and falco--#224. In the meantime, if you downgrade to an older CoreOS version (something like 1262.0.0 should work), there will be a pre-built driver available.
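Since the pre-built probes are published per (driver version, kernel) pair, a quick way to see which combination your node needs is to check the OS release and kernel version on the host. A small sketch (standard Linux files and commands, nothing falco-specific):

```shell
# Report the OS release and kernel version that a pre-built
# sysdig/falco probe would have to match.
. /etc/os-release            # sets NAME, VERSION_ID, etc. on CoreOS and most distros
echo "OS: ${NAME} ${VERSION_ID}"
uname -r                     # kernel release, e.g. 4.9.9-coreos
```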
Ah ok, thanks. I'll try downgrading and see if that works for now.
Confirmed that downgrading to an older CoreOS version resolved this.
Attempting to run falco as a daemonset on Kubernetes v1.5.2.
I'm using the example yaml file from this GitHub repo, minus the part that posts output to Slack.
The daemonset is failing with the following logs:
I have the sysdig cloud agent running as a daemonset already. Anything I'm doing wrong?