Add --ipv6 experimental cli flag #11629
Conversation
cmd/kops/create_cluster.go
Outdated
@@ -260,6 +260,10 @@ func NewCmdCreateCluster(f *util.Factory, out io.Writer) *cobra.Command {
	// TODO: Can we deprecate this flag - it is awkward?
	cmd.Flags().BoolVar(&associatePublicIP, "associate-public-ip", false, "Specify --associate-public-ip=[true|false] to enable/disable association of public IP for master ASG and nodes. Default is 'true'.")

	if featureflag.AWSIPv6.Enabled() {
		cmd.Flags().BoolVar(&options.IPv6, "ipv6", false, "Add IPv6 CIDRs to AWS clusters with public topology")
I don't see any restriction to public topology.
Suggested change:
-	cmd.Flags().BoolVar(&options.IPv6, "ipv6", false, "Add IPv6 CIDRs to AWS clusters with public topology")
+	cmd.Flags().BoolVar(&options.IPv6, "ipv6", false, "Allocate IPv6 CIDRs to subnets")
There is no restriction, but there is no allocation for private subnets either.
At the moment, no egress-only internet gateway is created, so private instances will have some trouble with internet access.
I'm failing to see why this shouldn't allocate IPv6 CIDRs to subnets in private topology.
There is a separate task to provide IPv6 egress for the private route table. But even so, they should be able to use IPv6 to communicate within the VPC or out through a Transit Gateway.
The node could not be bootstrapped because of the missing egress gateway.
I am trying to get this working with public topologies first, and then enable other use cases.
Other than that, this is a helper flag, for convenience. One can always experiment by modifying the cluster config, where there isn't any limitation.
Force-pushed from 014ec7c to 05415bb
/retest
upup/pkg/fi/cloudup/new_cluster.go
Outdated
	if api.CloudProviderID(cluster.Spec.CloudProvider) == api.CloudProviderAWS {
		klog.Warningf("IPv6 support is EXPERIMENTAL and can be changed or removed at any time in the future!!!")
		for i := range cluster.Spec.Subnets {
			cluster.Spec.Subnets[i].IPv6CIDR = fmt.Sprintf("/64#%x", i)
We should reserve /64#0 for the ClusterCIDR.
Sounds good to me, will change.
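For context on what the placeholder means: the "/64#N" notation is resolved later against the VPC's Amazon-provided IPv6 allocation, which on AWS is always a /56. The following is a rough, illustrative sketch of that arithmetic, not the actual kops implementation; the VPC CIDR used is a made-up example, and index 0 is reserved per the suggestion above.

```go
package main

import (
	"fmt"
	"net/netip"
)

// expandIPv6Placeholder resolves a "/64#N" style placeholder against the
// VPC's IPv6 allocation (a /56 on AWS), returning the N-th /64 within it.
// Illustrative sketch only; not the kops implementation.
func expandIPv6Placeholder(vpcCIDR string, index uint8) (string, error) {
	p, err := netip.ParsePrefix(vpcCIDR)
	if err != nil {
		return "", err
	}
	if p.Bits() != 56 {
		return "", fmt.Errorf("expected a /56 VPC allocation, got %s", vpcCIDR)
	}
	a := p.Masked().Addr().As16()
	// Bits 56..63 (byte 7 of the address) select one of the 256 /64s
	// that fit inside the /56.
	a[7] = index
	return netip.PrefixFrom(netip.AddrFrom16(a), 64).String(), nil
}

func main() {
	// Index 0 reserved for the cluster-wide range; subnets would start at 1.
	for i := uint8(0); i < 3; i++ {
		s, _ := expandIPv6Placeholder("2600:1f16:abc:de00::/56", i)
		fmt.Println(s)
	}
}
```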
For future reference, read NonMasqueradeCIDR instead of ClusterCIDR. The latter is carved out of the former.
Yes, got it; though we should discuss it (again). NonMasqueradeCIDR may go away soon, as it's part of the dockershim implementation IIRC. It seems to be used only by the kubenet family.
The main use that is relevant here is that it's what populateClusterSpec.assignSubnets() carves the default ClusterCIDR and ServiceClusterIPRange out from. We could invent a new, better-named field and have NonMasqueradeCIDR default to that.
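To make that carving concrete: with kops' long-standing IPv4 defaults, NonMasqueradeCIDR is 100.64.0.0/10, and the default ServiceClusterIPRange (100.64.0.0/13) and ClusterCIDR (100.96.0.0/11) both come from inside it. A minimal sketch of that split, hardcoded to the /10 case and not the actual kops code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// carveDefaults splits a /10 NonMasqueradeCIDR the way kops' IPv4 defaults
// do: the first /13 for services, and the upper /11 half for pods.
// Illustrative sketch only.
func carveDefaults(nonMasqueradeCIDR string) (services, pods string, err error) {
	p, err := netip.ParsePrefix(nonMasqueradeCIDR)
	if err != nil {
		return "", "", err
	}
	if p.Bits() != 10 || !p.Addr().Is4() {
		return "", "", fmt.Errorf("sketch only handles an IPv4 /10, got %s", nonMasqueradeCIDR)
	}
	a := p.Masked().Addr().As4()
	// Services: the first /13 of the range.
	services = netip.PrefixFrom(netip.AddrFrom4(a), 13).String()
	// Pods: set bit 10, giving the /11 that is the upper half of the /10.
	a[1] |= 0x20
	pods = netip.PrefixFrom(netip.AddrFrom4(a), 11).String()
	return services, pods, nil
}

func main() {
	svc, pod, _ := carveDefaults("100.64.0.0/10")
	fmt.Println(svc, pod)
}
```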
Force-pushed from 05415bb to 754da7b
/hold cancel
Force-pushed from 754da7b to 9bf62f4
@johngmyers This should be good to go now. Please take another look.
cmd/kops/create_cluster.go
Outdated
@@ -500,6 +504,10 @@ func RunCreateCluster(ctx context.Context, f *util.Factory, out io.Writer, c *Cr
		cluster.Spec.NetworkCIDR = c.NetworkCIDR
	}

	if c.IPv6 {
		cluster.Spec.NonMasqueradeCIDR = "fd00:10:96::/56"
Why this address? The 10:96 part is not random, as the spec requires.
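The spec in question is RFC 4193, which requires the 40-bit Global ID of an fd00::/8 unique local address (ULA) prefix to be pseudo-randomly generated, which is why a fixed fd00:10:96::/56 is questionable. A hedged sketch of generating a compliant prefix, not kops code:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net/netip"
)

// randomULAPrefix builds an RFC 4193 unique local prefix: the fd byte
// (fc00::/7 with the L bit set) followed by a random 40-bit Global ID,
// yielding a /48. Illustrative sketch only.
func randomULAPrefix() (netip.Prefix, error) {
	var a [16]byte
	a[0] = 0xfd
	// Bytes 1..5 are the 40-bit Global ID, which must be random.
	if _, err := rand.Read(a[1:6]); err != nil {
		return netip.Prefix{}, err
	}
	return netip.PrefixFrom(netip.AddrFrom16(a), 48), nil
}

func main() {
	p, err := randomULAPrefix()
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // a random fdxx:xxxx:xxxx::/48, different each run
}
```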
If we are going to automatically assign the NonMasqueradeCIDR, it should come from the VPC's IPv6 allocation. We should not automatically assign a ULA, especially if it isn't U.
For IPv4 we were using a fixed CIDR, but I don't mind making it random.
What would be the benefit of using a block from the VPC IPv6 allocation?
Should I just remove this and discuss separately?
Is this a good case for making NonMasqueradeCIDR user configurable on create cluster?
Please remove this and discuss separately.
A benefit of using a block from the VPC IPv6 allocation is that one could then choose to make pods directly accessible from outside the cluster. Also, one wouldn't have to NAT the IPv6 outbound from pods, so the IPv6 destinations would be logging the IPv6 addresses of the pods.
I believe NonMasqueradeCIDR could be set using SpecOverride. Possible future work would be to default it to "/64#0" (and make that work).
FYI, making "/64#0" work for NonMasqueradeCIDR depends on work in my #9229 track. In that track I plan to refactor the writing of the completed cluster spec into a ManagedFile task. Once that is done, PopulateClusterSpec would need to be turned into a Task and made dependent on VPCAmazonIPv6CIDRBlock for managed VPCs.
Force-pushed from 9bf62f4 to 2a11fa7
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: johngmyers The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold for #11523
/cc @johngmyers @justinsb