Add Kubeadm-DinD-Cluster based kubetest deployer for IPv6 CI #7529
Conversation
/assign @Q-Lee
Force-pushed fe37505 to 2714be9
prow/config.yaml
Outdated
secret:
  secretName: ssh-key-secret
  defaultMode: 0400
- name: cache-ssd
do not use this.
prow/config.yaml
Outdated
mountPath: /etc/ssh-key-secret
readOnly: true
- name: cache-ssd
  mountPath: /root/.cache
please remove this mount and the containerPort; I've just finished eliminating these.
Will remove these, thanks.
I don't see anything around selecting a project or existing VM, so I assume this brings up a VM in the current project. I don't think this works well with our resource cleanup (i.e., Boskos, etc.). @krzyzacy We need to have something that is guaranteed to be cleaned up later even if the job aborts.
@BenTheElder - Yes, it creates a GCE VM in the current project via docker-machine create. I'm relying on Boskos to pick the project (gcp-project isn't defined in the jobs config). Do I need to register some cleanup with Boskos?
@leblancd if it's just a gce instance then boskos can handle it fine. you can run locally with
Force-pushed 2714be9 to 18f4001
/ok-to-test
Force-pushed 18f4001 to 1edadfc
images/gce-dind/Dockerfile
Outdated
apt-get clean

# Install kubeadm-dind-cluster scripts
RUN cd /root && \
Rather than baking it in, we should pull git repos at point of use, so it's easier to change versions.
we can do this with a second --repo bootstrap flag on the job instead of baking it into an image
Good idea, thanks. Will change this.
Done. Added a --repo flag for the kubeadm-dind-cluster repo.
images/gce-dind/Dockerfile
Outdated
git clone -b k8s_ipv6_ci https://github.com/leblancd/kubeadm-dind-cluster.git

# Install docker-machine
RUN curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine && install /tmp/docker-machine /usr/local/bin/docker-machine
Why do we need docker-machine, and shouldn't this be managed by apt-get?
The kubeadm-dind-cluster scripts use docker-machine to both create the instance on GCE, and to create a connection to the docker daemon running on that instance. Once the docker-machine connection is set up, the docker client in the Prow container will use the docker daemon in the GCE instance, e.g. for commands like 'docker run ...', 'docker ps ...', etc.
The recommended way to install docker-machine is to curl the binary.
kubetest/gce_dind.go
Outdated
// to create a containerized, Docker-in-Docker (DinD) based Kubernetes cluster
// that is running in a remote GCE instance.

package main
We're trying to avoid putting new kubetest implementations into the main package. It's been a non-trivial effort to begin teasing them apart.
Will move this out of the main package.
Done. Moved gce_dind.go to its own "gcedind" package (under kubetest).
Force-pushed c06ae3e to ad9952f
Force-pushed ad9952f to 23bd255
Force-pushed 61d5e0e to d20cdb9
Force-pushed 2437afe to b45c3b8
@amwat, thanks for the review! I moved the build of test binaries to TestSetup() as suggested. I also made a small change to eliminate the use of the DIND_STORAGE_DRIVER environment variable when running the K-D-C script, and was therefore able to get rid of the deployer.runWithEnv() method.
This change adds prow config for a CI test job for testing Kubernetes IPv6-only functionality. This config uses the Kubeadm-DinD-Cluster (K-D-C) kubetest deployer that was added via PR kubernetes#7529.
The K-D-C scripts provide some built-in functionality that is important for testing IPv6-only support:
- Bridge CNI and host-local IPAM plugins are loaded (both plugins support IPv6)
- Static routes are provided from each node to pod networks on other nodes.
When the scripts are run in IPv6 mode, the following is configured for the cluster:
- IPv6 node addresses and service subnets
- IPv6 CNI configuration for pods
- IPv6 DNS nameserver in kubelet startup config
- Bind9 container for DNS64 functionality
- Tayga container for NAT64 functionality
The End-to-End test suite that is used to test functionality is described here: https://github.com/CiscoSystems/kube-v6-test
I've tested this CI by running its Prow container locally (via 'docker run ...'); sample results and logs can be found here: https://k8s-gubernator.appspot.com/builds/my-kubernetes-jenkins/pull/65951/pull-kubernetes-e2e-kubeadm-dind-ipv6/
@@ -0,0 +1,30 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
also add an OWNERS file for kubeadmdind
@krzyzacy - Whom should I list as reviewers/approvers, and do we need labels in the OWNERS file?
no labels are needed, and I would put yourself & people from your project as reviewers/approvers (don't need to put test-infra people here since we are already top-level approvers)
Will add, thanks.
I think we should unblock this? (I'm fine since the deployer code is within its own package)... maybe it's also worth making an issue to make IPv6 work with kind if we have future plans? @BenTheElder ^^
Force-pushed 37a54c4 to c03d2fc
kubetest/main.go
Outdated
if o.ipMode == "ipv6" || o.ipMode == "dual-stack" {
	log.Printf("Enabling IPv6")
	if err := control.FinishRunning(exec.Command("sysctl", "-w", "net.ipv6.conf.all.disable_ipv6=0")); err != nil {
		return err
last nit: it seems (at least for now) only the kubeadmdind deployer cares about the ipMode flag. Do you think it makes more sense to put it into the kubeadmdind package?
@krzyzacy - Will we need ipMode for KinD deployment?
I don't know the design for that yet; I would hope each deployer can handle specific logic like this rather than putting it in main.go...
Okay, I'll move this flag to kubeadmdind.
/lint
@krzyzacy: 0 warnings.
In response to this:
/lint
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This change adds a CI test job for testing Kubernetes IPv6-only functionality. This CI test uses deployment scripts from the kubernetes-sigs/kubeadm-dind-cluster (K-D-C) GitHub repo to set up a multi-node, containerized (Docker-in-Docker-in-Docker, or DinDinD) Kubernetes cluster that runs inside a Prow container. (A containerized environment is needed because the Prow infrastructure does not support IPv6 connectivity.)
Note that when the K-D-C scripts are used to spin up a Kubernetes cluster in IPv6 mode, they create the following containers:
- bind9 container that performs DNS64 functionality
- tayga container that performs NAT64 functionality
- kube-master node container
- kube-node-1 node container
- kube-node-2 node container
K-D-C uses the IPv6-capable Bridge CNI network plugin for inter-node connectivity. The K-D-C scripts also add host routes on each worker node container so that pod networks on other worker nodes are reachable, and configure IPv6 pod, service, and node addresses/prefixes.
Force-pushed c03d2fc to 890c284
@krzyzacy - I moved the IP mode flag to make it kubeadm-dind specific, added a UT test case for IP mode validation, and made a minor improvement to the existing UT (replaced an if/else with a switch statement).
great! thanks!
@krzyzacy: 0 warnings.
In response to this:
great! thanks!
/lint
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
LGTM label has been added. Git tree hash: 80bb9b8f0f1378914a28a79b657854d14eaa33f1
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: krzyzacy, leblancd, MrHohn
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@leblancd feel free to prep a PR for prowjob config, preferably a periodic job; I'll push a new kubetest image later today.
@krzyzacy - I have a PR for a presubmit prowjob config. BenTheElder mentioned in this comment on Aug 27 that there is a way to manually trigger presubmit jobs with a fake PR (and the job is configured with always-run=false, so it won't auto-trigger). If manually triggering a presubmit job isn't practical, let me know and I'll convert the prow config to a periodic job.
This change adds a Kubeadm-DinD-Cluster based kubetest deployer that will be used to create a
CI for Kubernetes IPv6-only functionality.
The Prow job config for this CI will be added via a separate PR (see PR #9640).
This CI test uses deployment scripts from the kubernetes-sigs/kubeadm-dind-cluster (K-D-C)
GitHub repo to set up a containerized (Docker-in-Docker-in-Docker), multi-node Kubernetes
cluster that runs inside a Prow container.
The K-D-C deployment scripts provide some built-in functionality that is important for testing IPv6-only support (see https://github.com/leblancd/kube-v6 for details):
The End-to-End test suite that is used to test functionality is described here:
https://github.com/CiscoSystems/kube-v6-test
I've tested this CI by running its Prow container locally (via 'docker run ...'); sample results and logs can be found here
Note: With this current change set, kubeadm-dind-cluster always does its own build of Kubernetes.
As a potential future CI performance improvement, this can be changed to have kubeadm-dind-cluster use a shared build from GCS.