Failed to parse Node PodCIDR #993
Comments
Hi @pierluigilenoci, thanks for this report. Which CNI/network plugin is in use by the cluster?
I'm experiencing the same issue, and I'm running Calico on my EKS cluster.
cc @mbolt35 in case he hasn't seen this.
@pierluigilenoci @andronux Just for what it's worth, this is nothing more than a really verbose log - it occurs when a pod/node comes up and isn't immediately assigned an IP. If you upgrade to v15.5 and this is still occurring, let me know!
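For anyone who lands here later: the warning text ends with nothing after "invalid CIDR address:", which suggests the PodCIDR string being parsed is simply empty. Below is a minimal Go sketch of how an empty value produces exactly that message, assuming the warning wraps a standard-library net.ParseCIDR error (not verified against the actual cachingnetworkmap.go source):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// On clusters whose CNI hands out pod IPs directly from the VNet/VPC
	// (Azure CNI on AKS, the AWS VPC CNI on EKS), node.Spec.PodCIDR is
	// typically left empty.
	podCIDR := ""

	if _, _, err := net.ParseCIDR(podCIDR); err != nil {
		// Prints: Warning: Failed to parse Node PodCIDR:  due to: invalid CIDR address:
		fmt.Printf("Warning: Failed to parse Node PodCIDR: %s due to: %v\n", podCIDR, err)
	}
}
```

So the message itself is harmless; it just means the node carried no PodCIDR at the moment it was inspected.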
@kbrwn on AKS clusters we use Azure CNI [1]:

kubectl get pod -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-688cf4bc8b-lz65q 1/1 Running 0 16d
calico-node-7zvsl 1/1 Running 0 17d
calico-node-hgfzs 1/1 Running 0 17d
calico-node-m6zn9 1/1 Running 0 16d
calico-node-vhqs2 1/1 Running 0 17d
calico-node-zqztt 1/1 Running 0 17d
calico-typha-95dddd9cf-9bm69 1/1 Running 0 17d
calico-typha-95dddd9cf-chccb 1/1 Running 0 16d
calico-typha-95dddd9cf-m5kqm 1/1 Running 0 17d

kubectl get pod -n tigera-operator
NAME READY STATUS RESTARTS AGE
tigera-operator-5cc64b87bd-mfqhl 1/1 Running 6 17d

On EKS clusters we use:

kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-node-bwtbc 1/1 Running 0 17d
aws-node-dbxb7 1/1 Running 0 17d
aws-node-npnqb 1/1 Running 0 17d
aws-node-scpx5 1/1 Running 0 17d
aws-node-wmpk5 1/1 Running 0 17d
calico-node-22scv 1/1 Running 0 63s
calico-node-4x2c9 1/1 Running 0 17d
calico-node-7pkjl 1/1 Running 0 63s
calico-node-c72dm 1/1 Running 0 63s
calico-node-nljm5 1/1 Running 0 63s
calico-typha-76cddff5d8-rjzkd 1/1 Running 0 17d
calico-typha-horizontal-autoscaler-57f4c9d57d-8ptgg 1/1 Running 0 17d

We get the same notification from both sides, Azure and AWS.

[1] https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni
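A quick way to confirm whether any node in either cluster actually carries a spec.podCIDR is to list it per node. Below is a minimal client-go sketch; the kubeconfig path is an assumption, and kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR shows the same information:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default local kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cidr := n.Spec.PodCIDR
		if cidr == "" {
			// An empty value here is what leads to the "invalid CIDR address" warning.
			cidr = "<empty>"
		}
		fmt.Printf("%-55s %s\n", n.Name, cidr)
	}
}
```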
@mbolt35 I'm sorry, but I have no good news. I upgraded the AKS cluster and the logs are still there:

I0726 10:43:44.092459 1 conntrackwatcher.go:112] Initial Load: 2804 entries
I0726 10:43:47.897901 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:44:27.929924 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:44:37.944101 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:44:37.977465 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:44:44.422083 1 cachingnetworkmap.go:144] Removing Cached Pod: [REDACTED]/redis-cluster-5
I0726 10:44:47.965558 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:45:09.484462 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:45:27.977721 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:45:28.081084 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 10:45:28.684958 1 cachingnetworkmap.go:144] Removing Cached Pod: [REDACTED]/redis-cluster-4

The same in EKS:

I0726 11:07:27.329934 1 watchcontroller.go:202] Starting *v1.Service controller
I0726 11:07:27.330975 1 netroutes.go:61] +----------------------- Routing Table -----------------------------
I0726 11:07:27.330987 1 netroutes.go:63] | Destination: 10.241.0.0, Route: 0.0.0.0
I0726 11:07:27.330991 1 netroutes.go:63] | Destination: 10.241.11.226, Route: 0.0.0.0
I0726 11:07:27.330995 1 netroutes.go:63] | Destination: 10.241.13.24, Route: 0.0.0.0
I0726 11:07:27.330999 1 netroutes.go:63] | Destination: 10.241.16.22, Route: 0.0.0.0
I0726 11:07:27.331003 1 netroutes.go:63] | Destination: 10.241.31.65, Route: 0.0.0.0
I0726 11:07:27.331007 1 netroutes.go:63] | Destination: 169.254.169.254, Route: 0.0.0.0
I0726 11:07:27.331010 1 netroutes.go:63] | Destination: 0.0.0.0, Route: 10.241.0.1
I0726 11:07:27.331015 1 netroutes.go:65] +-------------------------------------------------------------------
I0726 11:07:32.354791 1 conntrackwatcher.go:112] Initial Load: 1036 entries
I0726 11:07:35.646003 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:07:47.960608 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:07:57.933567 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:08:12.544521 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:08:13.910989 1 cachingnetworkmap.go:144] Removing Cached Pod: kube-system/ebs-csi-node-j567n
I0726 11:08:20.191671 1 cachingnetworkmap.go:144] Removing Cached Pod: [REDACTED]/csi-secrets-store-provider-aws-v4g6v
I0726 11:12:05.392821 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:12:13.710993 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:12:37.114505 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:12:49.174769 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:12:59.214960 1 cachingnetworkmap.go:356] Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:
I0726 11:13:20.188612 1 cachingnetworkmap.go:144] Removing Cached Pod: kube-system/ebs-csi-controller-68cf7bd986-zjp5d
I0726 11:13:21.044803 1 cachingnetworkmap.go:144] Removing Cached Pod: kube-system/ebs-csi-node-n4jsp
@pierluigilenoci OK, this would take care of the pod-specific logs, but I'm curious: do your Nodes not have the podCIDR field set?
By the way, if it's continuing to spam logs, then that's a problem, but the mere existence of the log is OK, especially on the Node. I will reduce the severity, but I don't expect that log to be as noisy as the pod-specific log.
@mbolt35 our AKS and EKS clusters do not have a podCIDR configured [1].

[1] https://docs.microsoft.com/en-us/azure/templates/microsoft.containerservice/managedclusters?tabs=json#containerservicenetworkprofile-object
@pierluigilenoci Sigh, yeah, I've caught up a bit on those implementations - definitely an oversight on my end. This was added as just a secondary mechanism for identifying traffic, but isn't a hard requirement. Are you seeing any unusual classifications of network traffic? I'll update the logging severity to avoid spamming. Thanks for the feedback!
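To make "secondary mechanism for identifying traffic" a bit more concrete: a node PodCIDR can be used to decide whether an address belongs to the cluster's pod network at all. The sketch below only illustrates that idea and is not kubecost's actual classification code; the CIDR and IP values are made up:

```go
package main

import (
	"fmt"
	"net"
)

// classify reports whether an IP falls inside any known pod CIDR.
// Nodes without a PodCIDR simply contribute nothing to the check.
func classify(ip string, podCIDRs []string) string {
	addr := net.ParseIP(ip)
	for _, cidr := range podCIDRs {
		_, network, err := net.ParseCIDR(cidr)
		if err != nil {
			continue // e.g. the empty PodCIDR seen on these AKS/EKS nodes
		}
		if network.Contains(addr) {
			return "in-cluster"
		}
	}
	return "external"
}

func main() {
	podCIDRs := []string{"10.244.0.0/24", ""} // one populated node, one without a PodCIDR
	fmt.Println(classify("10.244.0.17", podCIDRs)) // in-cluster
	fmt.Println(classify("52.95.110.1", podCIDRs)) // external
}
```

When every node's PodCIDR is empty this check can never match, which is why it is only a fallback and why the warning was downgraded rather than treated as an error.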
I've cut the network-costs image
Thanks @mbolt35! @pierluigilenoci, let us know when you've had a chance to confirm!
Resolved in #1011
@kirbsauce the warning messages disappeared with version
Thanks @pierluigilenoci! Closing.
Describe the bug
The network-costs pod produces thousands of logs like this:

Warning: Failed to parse Node PodCIDR: due to: invalid CIDR address:

How can the problem be solved?
To Reproduce
Expected behavior
Fewer error messages
Screenshots
Not relevant
Collect logs (please complete the following information):
helm ls and paste the output here:
kubectl logs <kubecost-cost-analyzer pod name> -n kubecost -c cost-analyzer-init and paste output here:

gz#595