ifname net1 is already exist #353
Comments
Thanks for the report @Elegant996 -- I'm going to try to replicate it here, I'm spinning up a lab environment to try it now...
Alright, looks like I've been able to replicate it... thanks for the great details in the report. I'm pretty sure this happens without me having to delete/restart the pod; it just happens the first time I launch the pod with this configuration. I haven't found a cause for this yet, but I wanted to document my steps. I'm on the same setup, but a later kube version...
Here are the steps I took to replicate it, and I wind up with approximately the same results...
FWIW, the quickstart method still works with this version...
@Elegant996 -- just noticed something in your network-attachment-definition config section... Is the master actually intended to be that interface? Just caught my eye on that one.
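For context, the master in a macvlan attachment names the host interface the macvlan hangs off of, and the NetworkAttachmentDefinition carries this JSON in its spec.config. The snippet below is only an illustration, not the reporter's actual attachment: the interface name is borrowed from the 'ens192' mentioned later in the thread, and the IPAM block is a placeholder.
{
  "cniVersion": "0.3.1",
  "type": "macvlan",
  "master": "ens192",
  "mode": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/24"
  }
}
Whatever value is used, the master must name an interface that actually exists on the node where the pod lands.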
For what it's worth, I edited your net-attach-def to use a different master, for reference.
Sorry for the slew of responses... now that I re-read, the error I created actually differs from yours; mine was a different failure.
@Elegant996 -- could you enable debug logging and hook me up with the logs? Add the logging options to your CNI configuration (or any other location where your multus config lives). Also, if you can, grab the contents of your config as well.
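Multus supports logLevel and logFile options in its CNI config for this. A minimal sketch, assuming the 00-multus.conf layout shown later in this thread; the log path is just an example location and the single delegate is a stand-in for whatever the cluster's default network is:
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "logLevel": "debug",
  "logFile": "/var/log/multus.log",
  "delegates": [
    { "cniVersion": "0.3.1", "name": "default-cni", "type": "bridge" }
  ]
}
Only the logLevel and logFile keys are the addition; everything else should mirror the existing config.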
@dougbtv Yes, it is. When I initially set up the OS way back when, it decided to pick 'ens192' for the NIC instead of the usual default name. Output from the node:
Very bizarre... after MANY attempts over last night and today the pod just decided to work again during one of the crash restarts. It was working previously before all this but I didn't expect it to recover randomly. I literally got home to add the debug lines and noticed the pod was up and running. I'll close the issue for now as it's kind of difficult to troubleshoot if the issue is no longer present. Thanks!
Issue is still ongoing. I just realized that I never stated that I can occasionally get the pod to come up by repeatedly deleting it, but this is not a long-term solution. EDIT: @dougbtv could this be related? That error appears alongside mine in some of the logs. Thanks!
Appeared to be a really weird configuration issue. Re-creating my cluster resolved it. Now running Kubernetes 1.16.2. Closing. Thanks!
This bug STILL exists, and is causing a ton of problems for me. I'm using multus-cni in k3s, and while it usually works after first installing, it soon gets into a state where it's complaining that the interface already exists. So... if the interface already exists, either re-use it, or delete it and re-create it. Super simple solution. Literally a no-brainer. Why let this issue fester for so many years?
I found that this bug happens when the configuration file of multus gets created in a weird way. Removing the recursive nesting (see below) fixes it.
Before:
{
"cniVersion": "0.3.1",
"name": "multus-cni-network",
"type": "multus",
"capabilities": {"bandwidth":true,"portMappings":true},
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
"delegates": [
{"capabilities":{"bandwidth":true,"portMappings":true},"cniVersion":"0.3.1","delegates":[{"capabilities":{"bandwidth":true,"portMappings":true},"cniVersion":"0.3.1","delegates":[{"capabilities":{"bandwidth":true,"portMappings":true},"cniVersion":"0.3.1","delegates":[{"cniVersion":"0.3.1","name":"k8s-pod-network","plugins":[{"datastore_type":"kubernetes","ipam":{"type":"calico-ipam"},"kubernetes":{"kubeconfig":"/etc/cni/net.d/calico-kubeconfig"},"log_file_path":"/var/log/calico/cni/cni.log","log_level":"info","mtu":0,"nodename":"rpi-srv02.gbw-5.okvm.de","policy":{"type":"k8s"},"type":"calico"},{"capabilities":{"portMappings":true},"snat":true,"type":"portmap"},{"capabilities":{"bandwidth":true},"type":"bandwidth"}]}],"kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig","name":"multus-cni-network","type":"multus"}],"kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig","name":"multus-cni-network","type":"multus"}],"kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig","name":"multus-cni-network","type":"multus"}
]
}
After:
{
"cniVersion": "0.3.1",
"name": "multus-cni-network",
"type": "multus",
"capabilities": {"bandwidth":true,"portMappings":true},
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
"delegates": [
{"cniVersion":"0.3.1","name":"k8s-pod-network","plugins":[{"datastore_type":"kubernetes","ipam":{"type":"calico-ipam"},"kubernetes":{"kubeconfig":"/etc/cni/net.d/calico-kubeconfig"},"log_file_path":"/var/log/calico/cni/cni.log","log_level":"info","mtu":0,"nodename":"rpi-srv02.gbw-5.okvm.de","policy":{"type":"k8s"},"type":"calico"},{"capabilities":{"portMappings":true},"snat":true,"type":"portmap"},{"capabilities":{"bandwidth":true},"type":"bandwidth"}]}
]
}
As you can see, there is something recursive going on with the delegates.
@toelke I got the same problem of recursive delegates.
I have no more insight into this. I only know to delete the file on the node when I see a pod stuck with this error.
I am also seeing the same issue with the same workaround. I was hoping it was a config issue on my end.
I just saw the same issue: an extra multus delegate. Removing the file and restarting the multus pod cleared it. Can someone reopen this issue please?
Exactly the same here.
Yes, please.
Just saw this repeat itself after a cluster reboot! :/ This is very concerning. How do we get it fixed? Multus 4.0.2.
Have you tried the thick version? I noticed that after I switched, the problem seems to have gone away.
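For anyone switching, the thick plugin is configured through a daemon config rather than through 00-multus.conf directly. A rough sketch is below; the field names and paths are assumptions based on a typical thick-plugin deployment and should be checked against the documentation for your multus version:
{
  "cniVersion": "0.3.1",
  "logLevel": "verbose",
  "logToStderr": true,
  "cniConfigDir": "/host/etc/cni/net.d",
  "multusConfigFile": "auto",
  "socketDir": "/host/run/multus/"
}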
The thick version suffers from #1213. So, both versions seem to have issues with restarts/reboots. :/
Stracing it out, I don't understand how it's not getting through... It's like it's ignoring --multus-master-cni-file-name and not filtering out 00-multus.conf. The code looks correct though, so I'm not sure how this is possible.
What happened:
Restarted pod that was using multus annotations and received an error stating that the ifname already exists.
What you expected to happen:
Successfully restart the pod with multus annotations.
How to reproduce it (as minimally and precisely as possible):
Simply deleted the pod; I had done this several times prior, but this is the first time I've received an error. No updates were run since the previous pod restart. Performed a drain of all nodes and then rebooted, but the issue still persists.
Anything else we need to know?:
Environment:
- Multus version: nfvpe/multus:latest
  Image path and image ID (from 'docker images'): docker.io/nfvpe/multus and 9318454f544e
- Kubernetes version (from kubectl version): 1.14.1
- Files in /etc/cni/net.d/: 00-multus.conf, multus.kubeconfig
- NetworkAttachment info (from kubectl get net-attach-def -o yaml)
- Target pod yaml info (from kubectl get pod <podname> -o yaml)