calico-node pods are failing after upgrade from 3.22 to 3.23: felix is not ready: readiness probe reporting 503 #6442

Closed
r0bj opened this issue Jul 27, 2022 · 22 comments · Fixed by #6498

Comments

@r0bj

r0bj commented Jul 27, 2022

Expected Behavior

Calico is working after upgrade to version 3.23.

Current Behavior

calico-node pods are failing after upgrade from 3.22 to 3.23:

$ kubectl describe po calico-node-pg52g
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  2m35s                  default-scheduler  Successfully assigned kube-system/calico-node-pg52g to dev-k8s-worker-r1
  Normal   Pulling    2m35s                  kubelet            Pulling image "docker.io/calico/cni:v3.23.3"
  Normal   Pulled     2m31s                  kubelet            Successfully pulled image "docker.io/calico/cni:v3.23.3" in 3.507665044s
  Normal   Created    2m31s                  kubelet            Created container install-cni
  Normal   Started    2m31s                  kubelet            Started container install-cni
  Normal   Pulling    2m28s                  kubelet            Pulling image "docker.io/calico/node:v3.23.3"
  Normal   Pulled     2m24s                  kubelet            Successfully pulled image "docker.io/calico/node:v3.23.3" in 3.680538035s
  Normal   Created    2m24s                  kubelet            Created container mount-bpffs
  Normal   Started    2m24s                  kubelet            Started container mount-bpffs
  Normal   Pulled     2m23s                  kubelet            Container image "docker.io/calico/node:v3.23.3" already present on machine
  Normal   Created    2m23s                  kubelet            Created container calico-node
  Normal   Started    2m23s                  kubelet            Started container calico-node
  Warning  Unhealthy  2m21s (x2 over 2m22s)  kubelet            Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
  Warning  Unhealthy  2m15s                  kubelet            Readiness probe failed: 2022-07-27 18:18:21.875 [INFO][438] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  2m5s  kubelet  Readiness probe failed: 2022-07-27 18:18:31.882 [INFO][665] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  115s  kubelet  Readiness probe failed: 2022-07-27 18:18:41.881 [INFO][975] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  105s  kubelet  Readiness probe failed: 2022-07-27 18:18:51.904 [INFO][1252] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  95s  kubelet  Readiness probe failed: 2022-07-27 18:19:01.907 [INFO][1565] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  85s  kubelet  Readiness probe failed: 2022-07-27 18:19:11.894 [INFO][1810] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  79s  kubelet  Readiness probe failed: 2022-07-27 18:19:17.203 [INFO][2018] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  75s  kubelet  Readiness probe failed: 2022-07-27 18:19:21.868 [INFO][2146] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503
  Warning  Unhealthy  43s (x4 over 65s)  kubelet  (combined from similar events): Readiness probe failed: 2022-07-27 18:19:51.889 [INFO][3017] confd/health.go 180: Number of node(s) with BGP peering established = 10
calico/node is not ready: felix is not ready: readiness probe reporting 503

The only log lines that are not at INFO level:

2022-07-27 18:19:21.025 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.150.64
2022-07-27 18:19:21.025 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.150.78
2022-07-27 18:19:21.025 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.150.79
2022-07-27 18:19:21.025 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.150.80
2022-07-27 18:19:21.025 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.151.192
2022-07-27 18:19:21.025 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.161.192
2022-07-27 18:19:21.026 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.182.192
2022-07-27 18:19:21.026 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.183.192
2022-07-27 18:19:21.027 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.187.64
2022-07-27 18:19:21.027 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.24.192
2022-07-27 18:19:21.027 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.25.0
2022-07-27 18:19:21.027 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.50.64
2022-07-27 18:19:21.028 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.59.0
2022-07-27 18:19:21.028 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.59.4
2022-07-27 18:19:21.028 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.59.5
2022-07-27 18:19:21.028 [WARNING][2094] felix/l3_route_resolver.go 645: Unable to create route for IP; the node it belongs to was not recorded in IPAM IP=10.205.64.64
2022-07-27 18:19:21.066 [WARNING][2094] felix/daemon.go 1209: IPIP and/or VXLAN encapsulation changed, need to restart.
2022-07-27 18:19:21.066 [WARNING][2094] felix/daemon.go 715: Felix is shutting down reason="encapsulation changed"
2022-07-27 18:19:21.868 [WARNING][2094] felix/health.go 211: Reporter is not ready. name="int_dataplane"
2022-07-27 18:19:21.869 [WARNING][2094] felix/health.go 173: Health: not ready
2022-07-27 18:19:21.898 [WARNING][2094] felix/health.go 211: Reporter is not ready. name="int_dataplane"

Full log: https://gist.github.com/r0bj/1df72959f5f992efba3544fa5eb89d47

Calico manifest: https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml

Steps to Reproduce (for bugs)

  1. Running calico in version 3.22.1
  2. Upgrade calico to version 3.23.3 from manifest https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml
  3. calico-node pods are failing

Your Environment

  • Calico version: 3.23.3
  • Orchestrator version (e.g. kubernetes, mesos, rkt):
    kubernetes:
    Client Version: v1.24.3
    Kustomize Version: v4.5.4
    Server Version: v1.24.3
  • Operating System and version:
    Ubuntu 18.04
@mikesplain

We ran into this as well; modifying the default-ipv4-ippool ippools.crd.projectcalico.org object to add vxlanMode: Never fixed it for us (a sketch of that patch is below). We're probably going to roll back to 3.22 for now until this is fixed.

This did not happen in a recently created cluster, but it did happen when we rolled our code back about 6 months and then upgraded a cluster to our current configs.
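
For anyone else needing the workaround, the patch can be applied with a one-liner along these lines (illustrative only; this is a reconstruction, not the exact command we ran, and it assumes the default pool name):

# Hypothetical workaround sketch: explicitly set vxlanMode on the default pool.
kubectl patch ippools.crd.projectcalico.org default-ipv4-ippool --type=merge -p '{"spec":{"vxlanMode":"Never"}}'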

@caseydavenport
Member

Sounds like this might have been introduced by this PR? #5576

Fits the code area as well as the time frame. Perhaps we're not properly handling the difference between vxlanMode: "" and vxlanMode: Never for older clusters that don't have that field set.

@caseydavenport
Member

CC @coutinhop - WDYT?

@coutinhop
Member

Sounds like this might have been introduced by this PR? #5576

Fits the code area as well as the time frame. Perhaps we're not properly handling the difference between vxlanMode: "" and vxlanMode: Never for older clusters that don't have that field set.

@caseydavenport very possibly so, I'm trying to understand exactly how that's happening. Currently taking a look at the logs, will try to figure this out and fix it ASAP.

@coutinhop
Member

coutinhop commented Jul 29, 2022

@r0bj or @mikesplain could you post the output for kubectl get felixconfigurations.crd.projectcalico.org -o yaml and kubectl get ippools.crd.projectcalico.org -o yaml so I can try to repro/better understand the issue?

It would also help if you could enable debug logging on felix (LogSeverityScreen:"Debug") and post those logs as well.

I have somewhat of a theory: the changes from #5576 make felix decide whether IPIP and/or VXLAN should be enabled based on the encapsulation of the existing IP pools, using the values from FelixConfiguration as overrides. When the encapsulations do change, felix needs to restart to apply them, and we're seeing it do that multiple times in the log you posted, so it's likely there is a bug that didn't foresee an upgrade. I'd like to see if there's any conflicting configuration that could be causing this restart loop...
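
Roughly, the decision logic in question works like the sketch below (stand-in types and names, not the actual felix code):

package main

import "fmt"

// Stand-in types; the real code uses the apiv3/felix types.
type IPPool struct{ IPIPMode, VXLANMode string }
type FelixConfig struct{ IPIPEnabled, VXLANEnabled *bool }

// resolveEncaps derives the effective encapsulations from the IP pools,
// letting explicit FelixConfiguration values override the result.
func resolveEncaps(pools []IPPool, cfg FelixConfig) (ipip, vxlan bool) {
	for _, p := range pools {
		if p.IPIPMode != "Never" { // note: an unset ("") mode also lands here
			ipip = true
		}
		if p.VXLANMode != "Never" {
			vxlan = true
		}
	}
	if cfg.IPIPEnabled != nil {
		ipip = *cfg.IPIPEnabled
	}
	if cfg.VXLANEnabled != nil {
		vxlan = *cfg.VXLANEnabled
	}
	return ipip, vxlan
}

func main() {
	// A pool created before vxlanMode existed reads back as "".
	pools := []IPPool{{IPIPMode: "Always", VXLANMode: ""}}
	ipip, vxlan := resolveEncaps(pools, FelixConfig{})
	fmt.Println(ipip, vxlan) // true true: VXLAN is treated as enabled
	// If the computed encapsulations ever differ from the ones felix
	// started with, it shuts down with "encapsulation changed" and
	// restarts, which is the loop in the log above.
}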

Thanks!

@r0bj
Author

r0bj commented Jul 29, 2022

I'm using calico with the etcd datastore (installed from the manifest https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml), so there are no felixconfigurations.crd.projectcalico.org or ippools.crd.projectcalico.org objects in my clusters; that manifest doesn't install any Calico CRDs at all.
Can I change the Felix log level without the felixconfig CRD?

@coutinhop
Member

I think you can still get those with calicoctl:
calicoctl get felixconfigurations -o yaml and calicoctl get ippools -o yaml; could you try those?
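
As for setting the Felix log level without the CRD, patching the default FelixConfiguration through calicoctl should work; something along these lines (an illustration, not a command from this thread):

calicoctl patch felixconfiguration default --patch '{"spec":{"logSeverityScreen":"Debug"}}'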

@r0bj
Author

r0bj commented Jul 29, 2022

Sure, this is the data:

$ calicoctl get felixconfigurations -o yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2019-02-11T16:01:23Z"
    name: default
    resourceVersion: "7"
    uid: 47412911-2e16-11e9-92ae-5254008bbbc5
  spec:
    bpfLogLevel: ""
    ipipEnabled: true
    logSeverityScreen: Info
    reportingInterval: 0s
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2022-05-02T19:03:00Z"
    name: node.dev-k8s-controller-r1
    resourceVersion: "2998691"
    uid: 7a77cd98-4f36-410c-9bcd-1ffbe9720f91
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2019-02-11T16:01:23Z"
    name: node.dev-k8s-controller-r2
    resourceVersion: "16"
    uid: 475bb9a6-2e16-11e9-9e0d-5254008403d4
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2019-02-11T16:01:24Z"
    name: node.dev-k8s-controller-r3
    resourceVersion: "24"
    uid: 47df2ca4-2e16-11e9-9e0d-525400bfbe5d
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2021-07-28T18:00:18Z"
    name: node.dev-k8s-external-lb-a-r1
    resourceVersion: "2857508"
    uid: 8008bab6-94cf-4f90-a7e4-536808fafdaf
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2022-04-18T21:05:14Z"
    name: node.dev-k8s-external-lb-a-r2
    resourceVersion: "2988703"
    uid: 92ab10a1-9646-4b26-a22d-4c3b22b73fa9
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2020-03-20T17:21:33Z"
    name: node.dev-k8s-storage-r5
    resourceVersion: "2593494"
    uid: a7a278d5-5ed2-4670-bb82-cd2d34dc79ce
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2020-03-20T17:21:34Z"
    name: node.dev-k8s-storage-r6
    resourceVersion: "2593504"
    uid: ed3066fe-d697-49a6-9b55-56bc322d0825
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2020-03-20T17:20:08Z"
    name: node.dev-k8s-storage-r7
    resourceVersion: "2593478"
    uid: f719f361-4417-42fd-82d1-9f4fbcb0adb7
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2019-02-11T16:14:19Z"
    name: node.dev-k8s-worker-r1
    resourceVersion: "32"
    uid: 15c183fa-2e18-11e9-93df-002590d66e06
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2019-02-11T16:16:46Z"
    name: node.dev-k8s-worker-r2
    resourceVersion: "89"
    uid: 6d804a97-2e18-11e9-b2a9-0cc47a0ae03c
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
- apiVersion: projectcalico.org/v3
  kind: FelixConfiguration
  metadata:
    creationTimestamp: "2021-06-09T20:49:15Z"
    name: node.dev-k8s-worker-r3
    resourceVersion: "2830390"
    uid: 63664c62-4288-4f32-a8c3-8afbe0bff0cd
  spec:
    bpfLogLevel: ""
    defaultEndpointToHostAction: Return
kind: FelixConfigurationList
metadata:
  resourceVersion: "3046937"
# calicoctl get ippools -o yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    creationTimestamp: "2019-02-11T16:01:23Z"
    name: default-ipv4-ippool
    resourceVersion: "5"
    uid: 47417284-2e16-11e9-92ae-5254008bbbc5
  spec:
    allowedUses:
    - Workload
    - Tunnel
    blockSize: 26
    cidr: 10.205.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
kind: IPPoolList
metadata:
  resourceVersion: "3046937"

@mikesplainsonos

Thank you all for jumping in! Let me know if there's anything else we can do to help.

# kubectl get felixconfigurations.crd.projectcalico.org -o yaml
apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: FelixConfiguration
  metadata:
    annotations:
      projectcalico.org/metadata: '{"uid":"8c9cd4c8-ab43-4a00-9032-03d6146c8686","creationTimestamp":"2022-07-28T15:49:16Z"}'
    creationTimestamp: "2022-07-28T15:49:16Z"
    generation: 3
    name: default
    resourceVersion: "595230"
    uid: 8c9cd4c8-ab43-4a00-9032-03d6146c8686
  spec:
    bpfLogLevel: ""
    floatingIPs: Disabled
    ipipEnabled: true
    logSeverityScreen: Debug
    reportingInterval: 0s
kind: List
metadata:
  resourceVersion: ""

and:

# kubectl get ippools.crd.projectcalico.org -o yaml

apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"uid":"e21999c6-97f1-46de-b690-dd87600ef07a","creationTimestamp":"2022-07-28T15:49:16Z"}'
    creationTimestamp: "2022-07-28T15:49:16Z"
    generation: 2
    name: default-ipv4-ippool
    resourceVersion: "1463"
    uid: 98a9c83a-c0bb-4b95-9f77-458d5d9eb5a1
  spec:
    blockSize: 26
    cidr: 100.96.0.0/11
    ipipMode: CrossSubnet
    natOutgoing: true
kind: List
metadata:
  resourceVersion: ""

@coutinhop
Member

Thanks @r0bj and @mikesplain! @r0bj, if it's not too much to ask, could you send the full calico-node log after enabling LogSeverityScreen:"Debug"? Thanks!

@coutinhop
Member

@caseydavenport do you mean the problem could be that pool.Spec.VXLANMode could be "" here?

c.updatePool(poolKey, pool.Spec.IPIPMode != apiv3.IPIPModeNever, pool.Spec.VXLANMode != apiv3.VXLANModeNever)

That line is run at felix startup (in daemon.go) on all IP pools retrieved from the client.

I was led to believe that it would never be an empty string (it would be defaulted to "Never", as the comment says):

// Contains configuration for VXLAN tunneling for this pool. If not specified,
// then this is defaulted to "Never" (i.e. VXLAN tunneling is disabled).
VXLANMode VXLANMode `json:"vxlanMode,omitempty" validate:"omitempty,vxlanMode"`

Let's look at the full debug logs to be sure, but do you think changing the check to pool.Spec.VXLANMode == apiv3.VXLANModeAlways || pool.Spec.VXLANMode == apiv3.VXLANModeCrossSubnet (and the same for IPIP) would solve it? (either that or pool.Spec.VXLANMode != "" && pool.Spec.VXLANMode != apiv3.VXLANModeNever)
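
To make the two options concrete, here is a tiny standalone sketch of the comparison (stand-in type and constants, not the real apiv3 package):

package main

import "fmt"

type VXLANMode string

const (
	VXLANModeNever       VXLANMode = "Never"
	VXLANModeAlways      VXLANMode = "Always"
	VXLANModeCrossSubnet VXLANMode = "CrossSubnet"
)

func main() {
	unset := VXLANMode("") // a pool created before the field existed

	// Current check: an unset mode is != Never, so VXLAN is wrongly
	// considered enabled for legacy pools.
	fmt.Println("current check enables VXLAN:", unset != VXLANModeNever) // true

	// Proposed check: only an explicit Always/CrossSubnet enables VXLAN.
	enabled := unset == VXLANModeAlways || unset == VXLANModeCrossSubnet
	fmt.Println("proposed check enables VXLAN:", enabled) // false
}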

@r0bj
Author

r0bj commented Jul 29, 2022

@coutinhop Sure, here is the calico-node log with LogSeverityScreen:"Debug":
calico-node.log

@caseydavenport
Member

caseydavenport commented Aug 3, 2022

@caseydavenport do you mean the problem could be that pool.Spec.VXLANMode could be "" here?

Yep, if the pool was created prior to that field existing, or created/edited with an older version of calicoctl that isn't aware of the field (or potentially another reason).

  spec:
    allowedUses:
    - Workload
    - Tunnel
    blockSize: 26
    cidr: 10.205.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()

^ You can see it's not set here.

I'm using calico with etcd datastore

and

kubectl get felixconfigurations.crd.projectcalico.org -o yaml

One thing I'm not sure about - does the cluster use etcd mode or k8s CRD mode? Or are we talking about two different clusters here?

Do you recall what the original version of Calico that was installed on these clusters was? Like, the version used when the cluster was originally provisioned?

@coutinhop
Member

coutinhop commented Aug 3, 2022

Thanks @caseydavenport, so it seems like the issue is indeed that. I'll work on the fix!

One thing I'm not sure about - does the cluster use etcd mode or k8s CRD mode? Or are we talking about two different clusters here?

@r0bj is using etcd mode and used calicoctl to get the felixconfig and ip pools, @mikesplain is using kdd and used kubectl, so yeah 2 different clusters with the same issue, I think...

@r0bj
Author

r0bj commented Aug 3, 2022

One thing I'm not sure about - does the cluster use etcd mode or k8s CRD mode? Or are we talking about two different clusters here?

In my case it's etcd mode so I used calicoctl get felixconfigurations -o yaml

Do you recall what the original version of Calico that was installed on these clusters was? Like, the version used when the cluster was originally provisioned?

Git history for my cluster shows that it was created 6 years ago (it's a bare metal cluster upgraded in-place) with calico version 1.4.2 at that time.

@mikesplain

One thing I'm not sure about - does the cluster use etcd mode or k8s CRD mode? Or are we talking about two different clusters here?

Confirming: we are using whatever the kops default is now. At the time I think it may have been etcd, but currently I believe it's CRD mode.

Do you recall what the original version of Calico that was installed on these clusters was? Like, the version used when the cluster was originally provisioned?

Our git history shows the cluster is 3 years old, initially created with kops 1.11.1 & calico v3.3.1. It was installed with this config file:

https://github.com/kubernetes/kops/blob/1.11.1/upup/models/cloudup/resources/addons/networking.projectcalico.org/k8s-1.7-v3.yaml.template

@caseydavenport
Member

Perfect, so that supports the theory that these pools were created prior to VXLANMode being an option, and the newest release is just not properly handling that case. I think @coutinhop's fix for this in #6494 is probably good.

@coutinhop it occurs to me that we should look at doing read-time defaulting of that field in case there is any other code that might be hit by the same issue. We should be able to handle that in the Calico client code so any users of the client see "VXLANMode: Never" even if the underlying data doesn't include the field.

@coutinhop
Member

@caseydavenport makes sense, will look into doing that!

@caseydavenport
Member

@coutinhop looks like we already have a good hook to do this in:

// Default pool values when reading from storage
func convertIpPoolFromStorage(pool *apiv3.IPPool) error {
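
Filling in that hook, the read-time defaulting could look something like this sketch (not the actual libcalico-go implementation, which handles other fields as well; the apiv3 import path is the one used by the Calico client):

import apiv3 "github.com/projectcalico/api/pkg/apis/projectcalico/v3"

// Sketch: default encapsulation modes when reading a pool from storage,
// so callers never observe an empty ("") mode.
func convertIpPoolFromStorage(pool *apiv3.IPPool) error {
	if pool.Spec.IPIPMode == "" {
		pool.Spec.IPIPMode = apiv3.IPIPModeNever
	}
	if pool.Spec.VXLANMode == "" {
		pool.Spec.VXLANMode = apiv3.VXLANModeNever
	}
	return nil
}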

@rastislavs

Hey, do you plan to cherry-pick the fix to previous releases as well? We just hit it on 3.23, it looks like the fix was merged only into the master branch.

@mgleung
Contributor

mgleung commented Dec 28, 2022

As per @caseydavenport's comment on the fix PR:

This was cherry-picked to v3.24 and v3.23 - should be present in v3.23.4+

@zakariais

Hey, I am getting this issue as well, with the logs below.
EKS version: 1.24
Calico node: 3.25
When I enable BPF mode I get readiness probe 503 and the warning logs below, and the pod is stuck starting:

2023-03-01 08:42:18.393 [WARNING][1132] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:19.911 [WARNING][1132] felix/health.go 266: Reporter is not ready. name="CalculationGraph"
2023-03-01 08:42:19.911 [WARNING][1132] felix/health.go 266: Reporter is not ready. name="InternalDataplaneMainLoop"
2023-03-01 08:42:19.911 [WARNING][1132] felix/health.go 228: Health: not ready
2023-03-01 08:42:20.465 [WARNING][1165] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:20.501 [WARNING][1165] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:22.598 [WARNING][1184] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:22.630 [WARNING][1184] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:24.698 [WARNING][1203] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:24.732 [WARNING][1203] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:26.802 [WARNING][1222] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:26.835 [WARNING][1222] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:28.010 [WARNING][1222] felix/health.go 266: Reporter is not ready. name="CalculationGraph"
2023-03-01 08:42:28.010 [WARNING][1222] felix/health.go 266: Reporter is not ready. name="InternalDataplaneMainLoop"
2023-03-01 08:42:28.902 [WARNING][1242] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:28.957 [WARNING][1242] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:29.947 [WARNING][1242] felix/health.go 266: Reporter is not ready. name="InternalDataplaneMainLoop"
2023-03-01 08:42:29.947 [WARNING][1242] felix/health.go 266: Reporter is not ready. name="CalculationGraph"
2023-03-01 08:42:29.947 [WARNING][1242] felix/health.go 228: Health: not ready
2023-03-01 08:42:31.027 [WARNING][1276] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:31.072 [WARNING][1276] felix/daemon.go 719: Felix is shutting down reason="config changed"
2023-03-01 08:42:33.141 [WARNING][1295] felix/config_params.go 1049: Proceeding with `RouteTableRange` config option. This field has been deprecated in favor of `RouteTableRanges`.
2023-03-01 08:42:33.182 [WARNING][1295] felix/daemon.go 719: Felix is shutting down reason="config changed"
