pods running at different nodes should have different product_uuid #2318
Comments
The nodes have different UUIDs, but inside the pods a path like this is going to report the same value without some fun hacks ... because this VFS is provided by the kernel, which they all share.
I think you might be able to do this with a modified runc, injecting an extra ro bind mount here. We have to do some semi-related stuff for rootless.
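The bind-mount idea works because, on Linux, every read of /proc/sys/kernel/random/uuid returns a fresh kernel-generated UUID, so mounting that file over product_uuid gives each container a distinct value. A minimal sketch of that property (assumes a Linux host; the helper name is illustrative):

```python
# Sketch: each read of /proc/sys/kernel/random/uuid yields a fresh UUID,
# which is what makes it usable as a unique source to bind-mount over
# /sys/class/dmi/id/product_uuid. Assumes a Linux host; falls back to
# uuid4() elsewhere so the sketch stays runnable.
import os
import uuid

def read_random_uuid() -> str:
    path = "/proc/sys/kernel/random/uuid"
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return str(uuid.uuid4())  # fallback for non-Linux hosts

a = read_random_uuid()
b = read_random_uuid()
# Both reads are well-formed UUIDs, and every read is distinct.
assert uuid.UUID(a) != uuid.UUID(b)
print(a, b)
```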
Do you have a pointer to the runc integration? I will take a look.
https://github.com/kubernetes-sigs/kind/pull/1727/files#diff-a6e69429a2d466a1b2f9c247ac030aeeb513f83b2427b239ab46f4e2afc904e7 (not currently shipping), also some helpful docs hopefully: https://kind.sigs.k8s.io/docs/contributing/getting-started/
IMHO it would probably be better to make SRIOV able to take its input from some other location (e.g. the Kubernetes node name). I'm not sure we'd want to ship a hacked-up container runtime just to manipulate this VFS; that is probably going to be fragile to maintain, and kind is not really suitable for replacing virtualization or running workloads that deeply integrate against the kernel. It's mostly there so you can test against the Kubernetes API. Container leakiness is to be expected; IMO deepening the papering over of that fact is a feature. That said, the pointers above should help you experiment with this. It should be possible to implement without even patching kind: just mount in your custom runc with an extraMount on the node and configure containerd to use it with a containerdConfigPatch in the kind config at runtime.
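The no-patch approach described above might look roughly like the following kind config. This is only a sketch: the host path /opt/runc and the target binary path are assumptions for illustration, not values from the thread.

```yaml
# Sketch: mount a custom runc into a node with extraMounts and point
# containerd's runc runtime at it via a containerdConfigPatch.
# /opt/runc and /usr/local/bin/custom-runc are illustrative paths.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - hostPath: /opt/runc
    containerPath: /usr/local/bin/custom-runc
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    BinaryName = "/usr/local/bin/custom-runc"
```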
btw kubevirt/kubevirtci#570?
The problem is that PodPreset is deprecated after 1.19. We can always use an admission webhook to do the same, but before doing that I want to check if there is something we can do in kind.
I have seen that there is a mount reference. Maybe kind can modify it so it includes a mount from product_uuid to a random uuid?
Going with the OCI spec hack, I have tried the following: {"destination": "/sys/class/dmi/id/product_uuid", "source": "/proc/sys/kernel/random/uuid"} with the following result:
After adding the "bind" option it works as expected: {"destination": "/sys/class/dmi/id/product_uuid", "source": "/proc/sys/kernel/random/uuid", "options": ["bind"]}

[ellorent@localhost Downloads]$ kubectl exec nginx-kind-worker cat /sys/class/dmi/id/product_uuid
053ba73c-3a24-4cfe-b7ca-5a938a4600d7
[ellorent@localhost Downloads]$ kubectl exec nginx-kind-worker2 cat /sys/class/dmi/id/product_uuid
db9f435b-0316-4f66-92a0-8d3632d6f69c
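The difference between the two attempts above is the "bind" option: per the OCI runtime spec's mounts field, it marks the entry as a bind mount of an existing file rather than a new filesystem mount. A small sketch of building that mounts entry programmatically (the helper function is illustrative, not part of any real API):

```python
# Sketch: build the working OCI runtime-spec mounts entry shown above.
# Without "bind" in options, runc will not bind-mount the source file;
# with it, each container sees a fresh kernel UUID at product_uuid.
import json

def product_uuid_mount() -> dict:
    # Hypothetical helper: returns the mounts entry as a plain dict.
    return {
        "destination": "/sys/class/dmi/id/product_uuid",
        "source": "/proc/sys/kernel/random/uuid",
        "options": ["bind"],  # required: marks this as a bind mount
    }

entry = product_uuid_mount()
assert "bind" in entry["options"]
print(json.dumps(entry))
```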
This is really neat and seems very reasonable to maintain, sorry I've not been able to wrap this up yet.
Unfortunately there is no API to enable and verify that the PodPreset feature-gate is enabled. As of today we check the kube-apiserver process command on the control-plane node to validate that the PodPreset feature is enabled. Since we upgraded the kind node image to k8s-1.19, it seems that it takes more time for changes to kube-apiserver.yaml to propagate (e.g. enabling PodPreset). This workaround is temporary; once [1] lands in kubevirtci we can stop using PodPreset. [1] kubernetes-sigs/kind#2318 Signed-off-by: Or Mergi <ormergi@redhat.com>
What happened:
At KubeVirt we use kind to test SRIOV migrations, and this depends on pods having different product_uuid values. I saw that the nodes were fixed and already have different product_uuid, but that is not the case for pods.
What you expected to happen:
Pods running at different nodes having different product_uuid.
How to reproduce it (as minimally and precisely as possible):
1 - Create a cluster with
2 - Create a pair of pods, one at each worker
3 - Check the product_uuid
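The cluster config for step 1 was elided above; a minimal config that matches the setup in the reproduction (two workers, matching kind's default node naming) might look like this sketch:

```yaml
# Sketch: minimal kind config for the reproduction, assuming the elided
# config simply added two worker nodes (kind-worker, kind-worker2).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```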
Anything else we need to know?:
The nodes' issue was fixed with 78252a6; I suppose we cannot do the same for the pods, since those containers are outside kind's scope.
Environment:
- kind version: (use kind version): kind v0.11.1 go1.16.3 linux/amd64
- Kubernetes version: (use kubectl version):
- Docker version: (use docker info):
- OS (e.g. from /etc/os-release):