Attacher and provisioner may need privileged too #32

It seems that on SELinux-enabled systems, the provisioner & attacher pod can't access the socket created by the privileged plugin pod. Previously: kubernetes/kubernetes#69215. I am not sure of the exact reason, hopefully @bertinatto can explain :))

Comments
I think it is because /var/lib/kubelet/plugins/csi-hostpath needs to be relabelled to allow random containers to read it. The hostpath plugin will not relabel it for us: https://github.com/kubernetes/kubernetes/blob/7f0e04a089125901dce18c7d96507f2b60560e18/pkg/volume/host_path/host_path.go#L213 |
@wongma7, that's correct! Recently, I found out a better way to handle this on SELinux-enabled systems: we can assign a certain SELinux label to the driver container.
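For instance, something along these lines — a sketch only; the level value s0:c0,c1 is illustrative and would be chosen by the cluster admin:

```yaml
# Sketch: the driver (node plugin) container is given an explicit SELinux level.
# The value "s0:c0,c1" is illustrative, not required.
containers:
  - name: hostpath
    securityContext:
      privileged: true
      seLinuxOptions:
        level: "s0:c0,c1"
```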
With that, all files created by this container will have the label above. Then we can assign the same labels to the attacher container:
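For example (again just a sketch, reusing the same illustrative level):

```yaml
# Sketch: the attacher container runs with the same SELinux level, so it can
# read the files (i.e. the socket) created by the driver container.
containers:
  - name: csi-attacher
    securityContext:
      seLinuxOptions:
        level: "s0:c0,c1"
```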
This should prevent the mismatch and thus the permission error. |
What does |
The file |
That's what I feared. This is a problem for the example deployment and for E2E testing, because we cannot simply put the sections above in our .yaml files. Is there a certain set of commands that can be used to look up these values? Or is this simply something that a cluster admin needs to know and provide as parameter to the deployment script? |
I think it is up to the cluster admin to know and assign the meaning of categories (c0,c1) on their machines, then provide them. They would reserve categories for their CSI driver deployment and then enforce that other pods use other categories via PodSecurityPolicies. That covers SELinux levels/MLS (multi-level security), but what about SELinux types? On my system /var/lib/kubelet/plugins is system_u:object_r:var_lib_t:s0, and so containers with system_u:object_r:container_t:s0 can't read anything. So in this case the cluster admin must also relabel the socket file? cc @jsafrane |
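As a rough sketch of that enforcement idea (the policy name and level are illustrative, not something shipped with the driver):

```yaml
# Sketch: a PodSecurityPolicy that pins pods to a fixed SELinux level,
# effectively reserving the categories c0,c1 for the CSI driver deployment.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: csi-hostpath-selinux        # illustrative name
spec:
  seLinux:
    rule: MustRunAs
    seLinuxOptions:
      level: "s0:c0,c1"
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'
```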
I missed the fact that the driver creates the socket under /var/lib/kubelet. By default, new files inherit the type of their parent directories (even if the file was created by a privileged container).
That's a possible solution, but it can be cumbersome, especially on worker nodes. |
To summarize: regarding the provisioner and attacher, I believe they don't need to be privileged, because the driver shipped in the same pod (which should implement the Controller Service set of RPC calls) doesn't need to be privileged either. (In kubernetes/kubernetes#69215 I also made the provisioner and attacher privileged because the single, privileged driver object was used with all sidecars. To prevent that, we should have different driver objects, each specifically tailored to one of the 3 services/sidecars.) As for the Node Service, the driver does need to run as privileged because it formats and mounts volumes. As a result, the socket file it exposes to the registrar has an SELinux context that's not accessible by non-privileged containers. |
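To illustrate the split described above (a sketch with illustrative names and image tags, not the actual deployment manifests):

```yaml
# Controller-side pod: provisioner and attacher sidecars, nothing privileged.
apiVersion: v1
kind: Pod
metadata:
  name: csi-hostpath-controller      # illustrative
spec:
  containers:
    - name: csi-provisioner
      image: quay.io/k8scsi/csi-provisioner:v1.0.1   # illustrative tag
    - name: csi-attacher
      image: quay.io/k8scsi/csi-attacher:v1.0.1      # illustrative tag
---
# Node-side pod: the driver formats and mounts volumes, so it runs privileged,
# and the socket it creates gets a context unprivileged containers may not read.
apiVersion: v1
kind: Pod
metadata:
  name: csi-hostpath-node            # illustrative
spec:
  containers:
    - name: hostpath
      image: quay.io/k8scsi/hostpathplugin:v1.0.1    # illustrative tag
      securityContext:
        privileged: true
```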
So my question is: do we really want to allow a non-privileged container to access the socket file created by the driver (Node Service)? If we do so, we might have a potential security problem because:
Whatever solution we find, we must make sure that we don't allow this to happen. |
I triple-checked on an SELinux-enabled machine.
Edit: tested with CRI-O as the container runtime. |
For the record, I just tested with:
The registrar was able to connect to the driver without problems. The non-privileged registrar used to be a problem, but I suppose the SELinux context of containers in the same pod was fixed at some point recently. |
Why does only the hostpath driver need this? Because only this deployment runs the driver in a separate pod? There are other deployments which might do the same, for whatever reasons. |
The attacher and provisioner need to access a socket in /var/lib/kubelet/plugins/* on the host instead of an EmptyDir. Any CSI driver that's going to use HostPath instead of EmptyDir will face the same issue. Basically, SELinux does not like any HostPath volumes: we don't want processes that escaped their container messing up the host, even if they run as root. So either the admin (or a package) labels special directories as allowed to be used by containers, or the cluster admin runs these special pods with a special policy. That special policy will be either distro-specific or even cluster-specific, and is then hard to configure from an e2e test. |
So "HostPath is the only driver" isn't about the csi-driver-host-path? You meant the builtin hostpath storage driver? |
It's so confusing. It affects only csi-driver-host-path, because that's the only one that uses an in-tree HostPath volume in the attacher/provisioner to get to the driver socket created by a privileged container in another pod. |
In other words, this: csi-driver-host-path/deploy/kubernetes-1.13/hostpath/csi-hostpath-provisioner.yaml, lines 51 to 55 (commit 486074d).
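Those lines presumably define the socket directory as a HostPath volume, roughly:

```yaml
# Roughly what the referenced lines contain: the CSI socket directory is a
# hostPath volume pointing at the kubelet plugins directory on the host.
volumes:
  - name: socket-dir
    hostPath:
      path: /var/lib/kubelet/plugins/csi-hostpath
      type: DirectoryOrCreate
```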
I can see how that is a bit special. Other CSI driver deployments probably have attacher/provisioner/driver all bundled up in a single pod. |
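In the bundled layout the socket can live on an emptyDir shared by the containers of one pod, which avoids the HostPath/SELinux problem entirely. A sketch:

```yaml
# Sketch: driver and a sidecar in one pod share the CSI socket over an emptyDir
# instead of a hostPath, so no host directory needs relabelling.
spec:
  containers:
    - name: hostpath
      image: quay.io/k8scsi/hostpathplugin:v1.0.1    # illustrative tag
      volumeMounts:
        - mountPath: /csi
          name: socket-dir
    - name: csi-provisioner
      image: quay.io/k8scsi/csi-provisioner:v1.0.1   # illustrative tag
      volumeMounts:
        - mountPath: /csi
          name: socket-dir
  volumes:
    - name: socket-dir
      emptyDir: {}
```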
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Can we put all the sidecars in the same pod as the driver? Bundling them all together is our recommended way, and the fact that our sample driver is not doing that is confusing to driver devs. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |