kubernetes v1.24.0 install failed with calico node not ready #1282

Closed

willzhang opened this issue May 16, 2022 · 3 comments
Labels
bug Something isn't working

Comments

willzhang commented May 16, 2022

What version of KubeKey has the issue?

2.1.0

What is your OS environment?

ubuntu 22.04

KubeKey config file

root@ubuntu:/data/kubekey# cat config-sample.yaml 

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.72.30, internalAddress: 192.168.72.30, user: root, password: "123456"}
  - {name: node1, address: 192.168.72.31, internalAddress: 192.168.72.31, user: root, password: "123456"}
  - {name: node2, address: 192.168.72.32, internalAddress: 192.168.72.32, user: root, password: "123456"}
  roleGroups:
    etcd:
    - master
    control-plane: 
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.0
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: true
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

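For context, a cluster is created from a config like this with KubeKey's create command; a typical invocation (assuming the kk binary sits in the working directory) is:

./kk create cluster -f config-sample.yaml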
A clear and concise description of what happened.

The calico-node pods are running but never become ready:

root@master:~# kubectl -n kube-system get pods
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-588c4b86d-5fzz5   1/1     Running   0          6m22s
calico-node-2xdlm                         0/1     Running   0          6m22s
calico-node-ff996                         0/1     Running   0          6m22s
calico-node-vn7jd                         0/1     Running   0          6m22s
coredns-f657fccfd-dhbpz                   1/1     Running   0          6m37s
coredns-f657fccfd-fxckk                   1/1     Running   0          6m37s
kube-apiserver-master                     1/1     Running   0          6m51s
kube-controller-manager-master            1/1     Running   0          6m51s
kube-multus-ds-sn6wf                      1/1     Running   0          6m21s
kube-multus-ds-tvvn6                      1/1     Running   0          6m21s
kube-multus-ds-vs465                      1/1     Running   0          6m21s
kube-proxy-nl7ql                          1/1     Running   0          6m38s
kube-proxy-vbjjw                          1/1     Running   0          6m23s
kube-proxy-wtggs                          1/1     Running   0          6m23s
kube-scheduler-master                     1/1     Running   0          6m51s
nodelocaldns-5llph                        1/1     Running   0          6m38s
nodelocaldns-gshsw                        1/1     Running   0          6m23s
nodelocaldns-km2gs                        1/1     Running   0          6m23s

Relevant log output

root@master:~# kubectl -n kube-system describe pods calico-node-ff996 
Name:                 calico-node-ff996
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 node2/192.168.72.32
Start Time:           Mon, 16 May 2022 16:45:06 +0800
Labels:               controller-revision-hash=7ff8f5f454
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.72.32
IPs:
  IP:           192.168.72.32
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  containerd://5afe80b4b8addc5597a4cf5b46f7f7dbec67b19fa3662b9c980171b5c4338fd6
    Image:         registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
    Image ID:      registry.cn-beijing.aliyuncs.com/kubesphereio/cni@sha256:9906e2cca8006e1fe9fc3f358a3a06da6253afdd6fad05d594e884e8298ffe1d
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 16 May 2022 16:45:21 +0800
      Finished:     Mon, 16 May 2022 16:45:21 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxhjx (ro)
  install-cni:
    Container ID:  containerd://414d04aa35abe3027111db3eaf9493d9acad5f9dc46b0bd5a1c772f48f60bd99
    Image:         registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
    Image ID:      registry.cn-beijing.aliyuncs.com/kubesphereio/cni@sha256:9906e2cca8006e1fe9fc3f358a3a06da6253afdd6fad05d594e884e8298ffe1d
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 16 May 2022 16:45:22 +0800
      Finished:     Mon, 16 May 2022 16:45:23 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxhjx (ro)
  flexvol-driver:
    Container ID:   containerd://7deca12b833e67a4b0516fd126437165123e98e172dff9ab4c11f78da52de271
    Image:          registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
    Image ID:       registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol@sha256:c17e3e9871682bed00bfd33f8d6f00db1d1a126034a25bf5380355978e0c548d
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 16 May 2022 16:45:23 +0800
      Finished:     Mon, 16 May 2022 16:45:23 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /host/driver from flexvol-driver-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxhjx (ro)
Containers:
  calico-node:
    Container ID:   containerd://c9eab9fc7b5ab12a8b30777b8e4ef76fc021ade9054ef294e201b039c2c2dc17
    Image:          registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
    Image ID:       registry.cn-beijing.aliyuncs.com/kubesphereio/node@sha256:7f9aa7e31fbcea7be64b153f8bcfd494de023679ec10d851a05667f0adb42650
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 16 May 2022 16:45:24 +0800
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      NODEIP:                              (v1:status.hostIP)
      IP_AUTODETECTION_METHOD:            can-reach=$(NODEIP)
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_IPV4POOL_CIDR:               10.233.64.0/18
      CALICO_IPV4POOL_BLOCK_SIZE:         24
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_HEALTHENABLED:                true
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/ from sysfs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vxhjx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sysfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  flexvol-driver-host:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    HostPathType:  DirectoryOrCreate
  kube-api-access-vxhjx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m31s                  default-scheduler  Successfully assigned kube-system/calico-node-ff996 to node2
  Warning  Failed     6m29s (x2 over 6m29s)  kubelet            Error: services have not yet been read at least once, cannot construct envvars
  Normal   Pulled     6m15s (x3 over 6m29s)  kubelet            Container image "registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0" already present on machine
  Normal   Created    6m15s                  kubelet            Created container upgrade-ipam
  Normal   Started    6m15s                  kubelet            Started container upgrade-ipam
  Normal   Pulled     6m14s                  kubelet            Container image "registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0" already present on machine
  Normal   Created    6m14s                  kubelet            Created container install-cni
  Normal   Started    6m14s                  kubelet            Started container install-cni
  Normal   Pulled     6m13s                  kubelet            Container image "registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0" already present on machine
  Normal   Created    6m13s                  kubelet            Created container flexvol-driver
  Normal   Started    6m13s                  kubelet            Started container flexvol-driver
  Normal   Started    6m12s                  kubelet            Started container calico-node
  Normal   Created    6m12s                  kubelet            Created container calico-node
  Normal   Pulled     6m12s                  kubelet            Container image "registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0" already present on machine
  Warning  Unhealthy  6m11s                  kubelet            Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
  Warning  Unhealthy  6m10s (x2 over 6m10s)  kubelet            Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
  Warning  Unhealthy  5m50s                  kubelet            Readiness probe failed: 2022-05-16 08:45:46.680 [INFO][1011] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  5m40s  kubelet  Readiness probe failed: 2022-05-16 08:45:56.664 [INFO][1500] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  5m30s  kubelet  Readiness probe failed: 2022-05-16 08:46:06.686 [INFO][1942] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  5m30s  kubelet  Readiness probe failed: 2022-05-16 08:46:06.846 [INFO][1988] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  5m20s  kubelet  Readiness probe failed: 2022-05-16 08:46:16.674 [INFO][2455] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  5m  kubelet  Readiness probe failed: 2022-05-16 08:46:36.649 [INFO][3384] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  80s (x24 over 4m50s)  kubelet  (combined from similar events): Readiness probe failed: 2022-05-16 08:50:16.641 [INFO][13759] confd/health.go 180: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: readiness probe reporting 503
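The events above show BGP peering being established while felix still reports 503. One way to inspect the BGP sessions directly, assuming calicoctl is installed on the affected node, is:

# run on the node itself; needs root to reach the BIRD socket
sudo calicoctl node status

Peers stuck in Connect or Active instead of Established would point at a reachability or IP-autodetection problem rather than a felix-internal one.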

Additional information

root@master:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ad:69:e1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.72.30/24 brd 192.168.72.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fead:69e1/64 scope link 
       valid_lft forever preferred_lft forever
3: nodelocaldns: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether aa:eb:1e:21:bf:ca brd ff:ff:ff:ff:ff:ff
    inet 169.254.25.10/32 brd 169.254.25.10 scope global nodelocaldns
       valid_lft forever preferred_lft forever
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 46:88:38:90:21:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.233.0.3/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.233.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.233.70.0/32 scope global tunl0
       valid_lft forever preferred_lft forever
16: cali3700add4837@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-c46d6166-4c66-189e-5d51-ff1c7e082f6d
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
17: cali5fa6d46f293@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-15cc3581-a347-10a0-9ef2-629ecb17ec20
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
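Given that the daemonset uses IP_AUTODETECTION_METHOD=can-reach=$(NODEIP), a common troubleshooting step for BIRD/felix readiness failures (not confirmed as the fix for this issue) is to pin autodetection to the uplink interface, ens160 in this environment:

kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=ens160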
willzhang added the bug label May 16, 2022

willzhang commented May 16, 2022

Resolved by changing the Calico version; Calico should probably be updated from v3.20.0 to v3.23.0:

kubectl -n kube-system get all |grep calico | awk '{print $1}' | xargs kubectl -n kube-system delete

wget https://docs.projectcalico.org/archive/v3.23/manifests/calico.yaml
kubectl apply -f calico.yaml
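After reapplying the manifest, the rollout can be verified with, for example:

kubectl -n kube-system get pods -l k8s-app=calico-node -w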

24sama (Collaborator) commented May 17, 2022

@willzhang Thanks for your feedback!
In my environment, however, Calico v3.20.0 works fine. We will add support for the latest versions of the built-in network plugins later.

biqiangwu commented

What causes this? We also ran into this with Calico v3.16.10 and could not upgrade Calico for other reasons. The calico-node pod on only one node in the cluster fails to start, and hundreds of leftover cali* interfaces remain on that node (see the cleanup sketch after the log below).

kubectl describe pod calico-node-xxx
...
Warning  Unhealthy  13m  kubelet, dce-168-61-124-/18  Readiness probe failed: 2022-06-21 06:21:35.844 [INFO][425] confd/health.go 180: Number of node(s) with BGP peering establish...
calico/node is not ready: felix is not ready: readiness probe reporting 503
...
log:

...
2022-06-21 02:54:18.250 [INFO][87] felix/route_table.go 1073: Failed to access interface because it doesn't exist. error=Link not found ifaceName="cali3b66d03c6d7" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:18.250 [INFO][87] felix/route_table.go 1141: Failed to get interface; it's down/gone. error=Link not found ifaceName="cali3b66d03c6d7" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:18.250 [INFO][87] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="cali3b66d03c6d7" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:18.853 [INFO][87] felix/route_table.go 1073: Failed to access interface because it doesn't exist. error=Link not found ifaceName="cali246d32246f1" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:18.853 [INFO][87] felix/route_table.go 1141: Failed to get interface; it's down/gone. error=Link not found ifaceName="cali246d32246f1" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:18.853 [INFO][87] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="cali246d32246f1" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.988 [INFO][87] felix/route_table.go 1073: Failed to access interface because it doesn't exist. error=Link not found ifaceName="calie799a12f5b7" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.988 [INFO][87] felix/route_table.go 1141: Failed to get interface; it's down/gone. error=Link not found ifaceName="calie799a12f5b7" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.988 [INFO][87] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="calie799a12f5b7" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.989 [INFO][87] felix/route_table.go 1073: Failed to access interface because it doesn't exist. error=Link not found ifaceName="calib752992d4dd" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.989 [INFO][87] felix/route_table.go 1141: Failed to get interface; it's down/gone. error=Link not found ifaceName="calib752992d4dd" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.989 [INFO][87] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="calib752992d4dd" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.989 [INFO][87] felix/route_table.go 1073: Failed to access interface because it doesn't exist. error=Link not found ifaceName="calib999e8d2a31" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.990 [INFO][87] felix/route_table.go 1141: Failed to get interface; it's down/gone. error=Link not found ifaceName="calib999e8d2a31" ifaceRegex="^cali.*" ipVersion=0x4
2022-06-21 02:54:20.990 [INFO][87] felix/route_table.go 527: Interface missing, will retry if it appears. ifaceName="calib999e8d2a31" ifaceRegex="^cali.*" ipVersion=0x4
...
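If stale cali* links really are left behind, a hypothetical cleanup sketch (verify each link is orphaned first; running pods also own cali* veths):

# list cali* interfaces on the node (strip the @ifN suffix)
ip -o link show | awk -F': ' '$2 ~ /^cali/ {print $2}' | cut -d@ -f1
# delete one orphaned veth at a time, e.g.:
ip link delete cali3b66d03c6d7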
