Request to use Debugf instead of Infof for redundant logs in isFeatureInRange #1043
I would just drop those log messages entirely. They don't say what feature they were testing, so they're never actually going to be useful. They look like debugging code accidentally left in from #723 while implementing unit tests.
Yeah, I'm not sure the logs are very helpful. Ideally, if we were going to keep them, we'd do two things: update them to say what feature we even care about (e.g. the one we're checking the version range for), and swap them to debug logs. (Or we can just axe them, as @TBBle suggests.)
Kubeproxy logs are filled with redundant version check spam from an unexported call that's invoked as part of checking if a feature is supported. The logs don't detail what feature(s) are even being checked, so it just seems like spam. With the way things are implemented, all of the hcn features are checked for support in any of the `hcn.XSupported()` calls, not just the one being checked, so these logs come up quite a bit if there are many features that aren't supported on the machine. Should remedy microsoft#1043 Signed-off-by: Daniel Canter <dcanter@microsoft.com>
Kubeproxy logs are filled with redundant version check spam from an unexported call that's invoked as part of checking if a feature is supported. The logs don't detail what feature(s) are even being checked, so it just seems like spam. With the way things are implemented, all of the hcn features are checked for support in any of the `hcn.XSupported()` calls, not just the one being checked, so these logs come up quite a bit if there are many features that aren't supported on the machine. Add two new logs in a sync.Once that log the HNS version and supported features. This should be enough to investigate version issues. Should remedy microsoft#1043 Signed-off-by: Daniel Canter <dcanter@microsoft.com>
Kubeproxy logs are filled with redundant version check spam from an unexported call that's invoked as part of checking if a feature is supported. The logs don't detail what feature(s) are even being checked, so it just seems like spam. With the way things are implemented, all of the hcn features are checked for support in any of the `hcn.XSupported()` calls, not just the one being checked, so these logs come up quite a bit if there are many features that aren't supported on the machine. Add two new logs in a sync.Once that log the HNS version and supported features. This should be enough to investigate version issues. Should remedy microsoft#1043 Signed-off-by: Daniel Canter <dcanter@microsoft.com> (cherry picked from commit 6288bb9) Signed-off-by: Daniel Canter <dcanter@microsoft.com>
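The one-time logging approach described in the commit message above can be sketched with Go's `sync.Once`. This is a minimal stand-alone illustration, not the actual hcsshim code: `hnsVersion` and the feature list here are hypothetical stand-ins for the real types.

```go
package main

import (
	"fmt"
	"sync"
)

// hnsVersion is a hypothetical stand-in for illustration only; the real
// hcsshim types carry more fields.
type hnsVersion struct{ Major, Minor int }

var (
	versionLogOnce sync.Once
	logCalls       int // counts how many times the log body actually ran
)

// logVersionOnce prints the HNS version and feature summary exactly once,
// no matter how many feature probes call it.
func logVersionOnce(v hnsVersion, features []string) {
	versionLogOnce.Do(func() {
		logCalls++
		fmt.Printf("HNS version: %d.%d\n", v.Major, v.Minor)
		fmt.Printf("supported features: %v\n", features)
	})
}

func main() {
	v := hnsVersion{Major: 9, Minor: 3}
	features := []string{"Acls", "Api"}
	// Simulate many hcn.XSupported() calls hitting the same code path:
	// only the first call emits the two log lines.
	for i := 0; i < 100; i++ {
		logVersionOnce(v, features)
	}
	fmt.Println("log body ran", logCalls, "time(s)")
}
```

This keeps the version information available for debugging while removing the per-check spam the issue complains about.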
@AbelHu
We are using Windows containerd version 1.5.8, which uses hcsshim 0.8.23.
For kube-proxy, here's the PR that would've gotten rid of these: kubernetes/kubernetes#104880. It looks like it's going into 1.23.
It seems we need to cherry-pick kubernetes/kubernetes#104880 to older versions.
We are investigating an issue where the HNS load balancer rule is missing for the kube-dns service on Windows nodes, and these logs are really annoying.
@AbelHu
@zhiweiv What is the issue with the HNS load balancer rule missing for the kube-dns service? You can try to upgrade your cluster or create new Windows agent pools to get the fix in https://github.com/Azure/AgentBaker/pull/1352/files. There is a known issue caused by port overlap.
The port was already reserved in the TCP/IP stack, so the reservation would fail, leading to a failure to apply policy. Due to this failure, the HNS load balancer policy for the LoadBalancer IP would not be applied, leaving the pods behind that load balancer unreachable.
I found the same error during the problem period, so it should be the root cause. I updated AKS to 1.21.7 (AKSWindows-2019-containerd-17763.2366.211215) today and will see if it gets fixed.
Seems my error is slightly different from yours :cry: My problem is that, at a certain point, all pods suddenly can't resolve DNS. You can see the HNS load balancer rule is missing for the kube-dns service via `hnsdiag list loadbalancers`; restarting kube-proxy recovers it (I think it rebuilds the LB rules after restart). It has occurred every 2-3 weeks on any Windows node since September.
@AbelHu Should I create an SR or open a new issue somewhere?
@zhiweiv Yes. Your issue is not related to this GitHub issue; please file a support ticket for it.
I found that it's not just kube-dns; some non-system services are also missing. I plan to gather more evidence before filing an SR. Got a lot of useful information from this thread, thanks.
k8s version: v1.19.6
https://github.com/microsoft/hcsshim/blob/master/hcn/hcnsupport.go#L88-L106
We found that there are too many redundant logs in the kube-proxy output, which rotates the kube-proxy logs too fast. We think isFeatureInRange should use Debugf instead of Infof for these logs.
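As a hedged sketch of the requested change (not the actual hcsshim code: the `debugf` helper, the simplified version fields, and the environment-variable gate here are all illustrative stand-ins), a version-range check that logs only at debug level and names the feature being probed might look like this:

```go
package main

import (
	"fmt"
	"os"
)

// hnsVersion is a simplified stand-in for the real HNS version type.
type hnsVersion struct{ Major, Minor int }

// debugf is a hypothetical debug-level logger: it stays silent unless
// verbose logging is enabled, so routine feature probes produce no output.
func debugf(format string, args ...interface{}) {
	if os.Getenv("HCN_DEBUG") != "" {
		fmt.Printf("DEBUG: "+format+"\n", args...)
	}
}

// isFeatureInRange reports whether v falls within [min, max]. The log
// lines name the feature being checked, so the message is actionable,
// per the review feedback earlier in this thread.
func isFeatureInRange(feature string, v, min, max hnsVersion) bool {
	if v.Major < min.Major || (v.Major == min.Major && v.Minor < min.Minor) {
		debugf("feature %q needs HNS >= %d.%d, have %d.%d",
			feature, min.Major, min.Minor, v.Major, v.Minor)
		return false
	}
	if v.Major > max.Major || (v.Major == max.Major && v.Minor > max.Minor) {
		debugf("feature %q needs HNS <= %d.%d, have %d.%d",
			feature, max.Major, max.Minor, v.Major, v.Minor)
		return false
	}
	return true
}

func main() {
	v := hnsVersion{Major: 9, Minor: 3}
	fmt.Println(isFeatureInRange("Api", v, hnsVersion{9, 2}, hnsVersion{10, 0}))
	fmt.Println(isFeatureInRange("Acls", v, hnsVersion{10, 0}, hnsVersion{10, 0}))
}
```

With debug logging disabled (the default here), the unsupported-feature path returns `false` quietly instead of spamming info-level output on every probe.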