do a better job of validating IP family of kube-proxy config #119003
Conversation
There's no need to do this any more: proxyutil.NodePortAddresses does it itself.
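(For context, a minimal sketch of the kind of per-family filtering a helper like that can do internally; the function name and signature here are illustrative assumptions, not the actual proxyutil API.)

package sketch

import "net"

// filterCIDRsByFamily keeps only the CIDR strings matching the desired
// IP family, silently dropping the rest. Hypothetical helper, shown only
// to illustrate why callers no longer need to pre-validate families.
func filterCIDRsByFamily(cidrs []string, wantIPv6 bool) []string {
	var out []string
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			continue // leave malformed entries to other validation
		}
		if (ipnet.IP.To4() == nil) == wantIPv6 {
			out = append(out, c)
		}
	}
	return out
}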
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: danwinship. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
LGTM - I would take the localmode tweak as a follow-up or in here (or argue me down), and if 0.0.0.0 works. oops, gofmt fail.
LGTM label has been added. Git tree hash: 03fc27b9a595d5d8944b853d2b44b0368d50c519
Rather than having this as part of createProxier(), explicitly figure out what IP families the proxier can support beforehand, and bail out if this conflicts with the detected IP family.
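(A hedged sketch of that shape, with hypothetical names; the real logic lives in kube-proxy's server setup code.)

package sketch

import "fmt"

type ipFamily string

const (
	ipv4Family ipFamily = "IPv4"
	ipv6Family ipFamily = "IPv6"
)

// platformSupports is a hypothetical probe for whether the node's proxy
// backend (iptables/ipvs/winkernel) can handle the given family.
func platformSupports(fam ipFamily) bool {
	return fam == ipv4Family // placeholder result
}

// checkIPFamilySupport bails out before any proxier is constructed if the
// detected primary family conflicts with what the backend supports,
// instead of failing later inside createProxier().
func checkIPFamilySupport(primary ipFamily) error {
	if !platformSupports(primary) {
		return fmt.Errorf("kube-proxy is configured for %s but this node does not support it", primary)
	}
	return nil
}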
(force-pushed from 915b9d9 to 3070ca5)
errors = append(errors, fmt.Errorf("cluster is %s but ipvs.excludeCIDRs contains only IPv%s addresses", clusterType, badFamily)) | ||
} | ||
|
||
if badBindAddress(s.Config.HealthzBindAddress, badFamily) { |
Does this handle "0.0.0.0:" and ":" correctly? It reads as IPv4 but accepts IPv6 connections, too:
$ cat main.go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	l, err := net.Listen("tcp", "0.0.0.0:9376")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	l.Accept()
	fmt.Println("done")
}
$ go run main.go &
[1] 3125929
$ netcat -6 localhost 9376
done
[1]+ Done go run main.go
$ go run main.go &
[1] 3126055
$ netcat -4 localhost 9376
done
[1]+ Done go run main.go
sorry, yes, I addressed that in the update: badBindAddress checks ip.IsUnspecified() now, and considers that acceptable regardless of family; the unit tests check that.
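(A minimal sketch of a check along those lines, assuming the bind address is a "host:port" string; the helper name comes from the diff above, but the body is illustrative, not the actual kube-proxy code.)

package sketch

import "net"

// badBindAddress reports whether bindAddress ("host:port") is an address
// of the wrong IP family. An unspecified address (0.0.0.0 or ::) is
// accepted regardless of family, since it binds both stacks in practice.
func badBindAddress(bindAddress string, badFamily string) bool {
	host, _, err := net.SplitHostPort(bindAddress)
	if err != nil {
		return false // leave malformed values to other validation
	}
	ip := net.ParseIP(host)
	if ip == nil || ip.IsUnspecified() {
		return false // "" / 0.0.0.0 / :: is fine for any family
	}
	if ip.To4() != nil {
		return badFamily == "4"
	}
	return badFamily == "6"
}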
/retest-required
(force-pushed from 3070ca5 to 37c3cbd)
Failed test: Kubernetes e2e suite: [It] [sig-cli] Kubectl client Simple pod should contain last line of the log (23s)
/test pull-kubernetes-e2e-gce
(force-pushed from 37c3cbd to a966d18)
/lgtm
LGTM label has been added. Git tree hash: a3f6223e2717fa56956a1ea195cfda0608ae7aae
Failed test: Kubernetes e2e suite: [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] (54s)
🤔 👀
/test pull-kubernetes-conformance-kind-ga-only-parallel
/hold cancel
if badCIDRs(s.podCIDRs, badFamily) {
	errors = append(errors, fmt.Errorf("cluster is %s but node.spec.podCIDRs contains only IPv%s addresses", clusterType, badFamily))
	if s.Config.DetectLocalMode == kubeproxyconfig.LocalModeNodeCIDR {
		// This has always been a fatal error
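(The diff fragment is cut off at this point; as a hedged reconstruction of its shape, not the actual change, the NodeCIDR branch would escalate this mismatch to a fatal error while other mismatches remain warnings:)

		// Hypothetical continuation, for illustration only: with
		// DetectLocalMode=NodeCIDR, local-traffic detection depends on
		// podCIDRs, so a family mismatch there must stop startup.
		fatal = true
	}
}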
@danwinship I just opened #120999. I believe this PR causes kube-proxy to fail to start with Azure single-stack IPv6 in v1.28+; the same configuration wasn't causing issues before.
What type of PR is this?
/kind bug
/kind cleanup
What this PR does / why we need it:
Spun out of #118408; this centralizes figuring out what mode kube-proxy is running in (single-stack IPv4, single-stack IPv6, dual-stack IPv4-primary, or dual-stack IPv6-primary), and then checks that the configuration actually makes sense for that mode.
Alas, we don't want to retroactively break previously-working configurations, so we merely warn rather than erroring out on most problems.
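(A hedged sketch of that warn-versus-fatal pattern, with hypothetical names, not the actual kube-proxy function.)

package sketch

import (
	"fmt"

	"k8s.io/klog/v2"
)

// reportIPConfigProblems logs every detected family mismatch, but only
// returns an error (stopping startup) when one of them is fatal, so
// previously-working configurations keep starting, with warnings.
func reportIPConfigProblems(problems []error, anyFatal bool) error {
	for _, p := range problems {
		klog.ErrorS(p, "Kube-proxy configuration may be incomplete or incorrect")
	}
	if anyFatal && len(problems) > 0 {
		return fmt.Errorf("kube-proxy configuration is incorrect: %v", problems[0])
	}
	return nil
}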
Does this PR introduce a user-facing change?
/sig network
/assign @aojea